{ "body": "An NPE was encountered when upgrading from 1.1.1 to 1.3.4. During the rolling upgrade, a background cron tried to execute a delete-by-query which included a parent/child query. This was allowed in 1.1.1, but [disabled in later versions](https://github.com/elasticsearch/elasticsearch/pull/5916).\n\nThis caused a delete-by-query to queue up in the translog of a 1.1.1 node. Before the translog was cleared, the shard tried to move to a 1.3.4 node, which caused an NPE. The shards repeatedly failed recovery and kept bouncing around the cluster. Because allocation filtering was being used to migrate data from old -> new, the cluster tried to recover the shards on only 1.3.4 nodes...leading to a continuous failure.\n\nThe situation eventually resolved itself, likely because a background flush cleared out the translog and allowed the recovery to finally proceed normally.\n\nStack trace (sanitized to remove sensitive names/ips):\n\n```\n\n[2014-10-08 21:43:26,881][WARN ][indices.cluster ] [prod-1.3.4] [my_index][6] failed to start shard\norg.elasticsearch.indices.recovery.RecoveryFailedException: [my_index][6]: Recovery failed from [prod-1.1.1][YhcqkTzLTGSF8dyKAQPRBQ][prod-1.1.1.localdomain][inet[...]]{aws_availability_zone=us-east-1e, max_local_storage_nodes=1} into [prod-1.3.4][0cRcLbzTTAm15PMu_R_U2w][prod-1.3.4.localdomain][inet[prod-1.3.4.localdomain/...]]{aws_availability_zone=us-east-1e, max_local_storage_nodes=1}\n at org.elasticsearch.indices.recovery.RecoveryTarget.doRecovery(RecoveryTarget.java:306)\n at org.elasticsearch.indices.recovery.RecoveryTarget.access$200(RecoveryTarget.java:65)\n at org.elasticsearch.indices.recovery.RecoveryTarget$2.run(RecoveryTarget.java:175)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.elasticsearch.transport.RemoteTransportException: [prod-1.1.1][inet[/...]][index/shard/recovery/startRecovery]\nCaused by: org.elasticsearch.index.engine.RecoveryEngineException: [my_index][6] Phase[2] Execution failed\n at org.elasticsearch.index.engine.internal.InternalEngine.recover(InternalEngine.java:1109)\n at org.elasticsearch.index.shard.service.InternalIndexShard.recover(InternalIndexShard.java:627)\n at org.elasticsearch.indices.recovery.RecoverySource.recover(RecoverySource.java:117)\n at org.elasticsearch.indices.recovery.RecoverySource.access$1600(RecoverySource.java:61)\n at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:337)\n at org.elasticsearch.indices.recovery.RecoverySource$StartRecoveryTransportRequestHandler.messageReceived(RecoverySource.java:323)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.elasticsearch.transport.RemoteTransportException: [prod-1.3.4][inet[/...]][index/shard/recovery/translogOps]\nCaused by: org.elasticsearch.index.query.QueryParsingException: [my_index] Failed to parse\n at org.elasticsearch.index.query.IndexQueryParserService.parseQuery(IndexQueryParserService.java:330)\n at 
org.elasticsearch.index.shard.service.InternalIndexShard.prepareDeleteByQuery(InternalIndexShard.java:449)\n at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:780)\n at org.elasticsearch.indices.recovery.RecoveryTarget$TranslogOperationsRequestHandler.messageReceived(RecoveryTarget.java:431)\n at org.elasticsearch.indices.recovery.RecoveryTarget$TranslogOperationsRequestHandler.messageReceived(RecoveryTarget.java:410)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.query.QueryParserUtils.ensureNotDeleteByQuery(QueryParserUtils.java:36)\n at org.elasticsearch.index.query.HasParentFilterParser.parse(HasParentFilterParser.java:52)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:302)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:283)\n at org.elasticsearch.index.query.NotFilterParser.parse(NotFilterParser.java:63)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:302)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:283)\n at org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:239)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:342)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:268)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:263)\n at org.elasticsearch.index.query.IndexQueryParserService.parseQuery(IndexQueryParserService.java:314)\n ... 8 more\n```\n", "comments": [ { "body": "This is bad. First of all a the actual exception should be a `QueryParsingException` with the message the p/c queries are unsupported in the delete by query api and second I think the translog should just skip a operation if it fails with a QueryParsingException.\n", "created_at": "2014-10-09T07:28:21Z" }, { "body": "@martijnvg can we somehow reproduce this with bwc test? just curious.... I think we should work on something with @dakrone to be able to skip individual operations in the translog... might be even a standalone tool? @dakrone any ideas?\n", "created_at": "2014-10-09T16:06:23Z" }, { "body": "@s1monw I'm sure that this can be reproduced in a bwc test :)\n", "created_at": "2014-10-09T16:07:08Z" }, { "body": "@martijnvg assigned this to you, but perhaps @dakrone is the person best placed to look at this?\n", "created_at": "2014-10-15T19:55:47Z" }, { "body": "This issue is less severe as I initially thought. What it boils down to is that any delete by query translog operation with a p/c query is just ignored, but the rest of all translog operations are successfully executed and the shard gets assigned.\n\nThe NPE is annoying (which I will fix) but that gets wrapped by a QueryParsingException (in IndexQueryParserService#parseQuery(...) line 370) and because of this in LocalIndexShardGateway#recover(...) at line 276 we ignore the delete by query operation. 
A QueryParsingException's status is treated as a bad request, so the idea here is to ignore the operation.\n", "created_at": "2014-10-21T12:15:02Z" }, { "body": "I opened this PR for the NPE during recovery: #8177\n", "created_at": "2014-10-21T12:21:09Z" } ], "number": 8031, "title": "NPE due to delete-by-query with parent/child when upgrading from 1.1.1 to 1.3.x" }
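The behaviour described in the last comment — replay the translog but skip any delete-by-query whose query no longer parses — can be pictured with a small standalone sketch. The classes below (`TranslogOp`, `ParseFailure`, `applyOperation`) are invented stand-ins for illustration, not the Elasticsearch types named in the stack trace; the point is only the catch-and-continue shape of the recovery loop.

```java
import java.util.Arrays;
import java.util.List;

// Standalone sketch of skip-on-parse-failure recovery; all names are hypothetical.
public class TranslogReplaySketch {

    static class ParseFailure extends RuntimeException {
        ParseFailure(String msg) { super(msg); }
    }

    static class TranslogOp {
        final String description;
        final boolean stillParses; // whether the recorded query is still valid on the new version
        TranslogOp(String description, boolean stillParses) {
            this.description = description;
            this.stillParses = stillParses;
        }
    }

    // Replays operations in order; a parse failure skips only that operation,
    // so the rest of the recovery still completes and the shard gets assigned.
    static int replay(List<TranslogOp> ops) {
        int applied = 0;
        for (TranslogOp op : ops) {
            try {
                applyOperation(op);
                applied++;
            } catch (ParseFailure e) {
                System.out.println("skipping [" + op.description + "]: " + e.getMessage());
            }
        }
        return applied;
    }

    static void applyOperation(TranslogOp op) {
        if (!op.stillParses) {
            throw new ParseFailure("has_child is unsupported in delete_by_query");
        }
        // apply the index/delete operation here
    }

    public static void main(String[] args) {
        List<TranslogOp> ops = Arrays.asList(
                new TranslogOp("index doc 1", true),
                new TranslogOp("delete-by-query with has_child", false),
                new TranslogOp("index doc 2", true));
        System.out.println("applied " + replay(ops) + " of " + ops.size() + " operations");
    }
}
```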
{ "body": "Also added a bwc test that runs a delete by query with a has_child query and verifies that only that operation is ignored when recovering from disk during a upgrade.\n\nPR for #8031\n", "number": 8177, "review_comments": [], "title": "Check if there is a search context, otherwise throw a query parse exception." }
{ "commits": [ { "message": "Parent/child: Check if there is a search context, otherwise throw a query parse exception.\n\nAlso added a bwc test that runs a delete by query with a has_child query and verifies that only that operation is ignored when recovering from disk during a upgrade." } ], "files": [ { "diff": "@@ -33,8 +33,13 @@ private QueryParserUtils() {\n * Ensures that the query parsing wasn't invoked via the delete by query api.\n */\n public static void ensureNotDeleteByQuery(String name, QueryParseContext parseContext) {\n- if (TransportShardDeleteByQueryAction.DELETE_BY_QUERY_API.equals(SearchContext.current().source())) {\n- throw new QueryParsingException(parseContext.index(), \"[\" + name + \"] unsupported in delete_by_query api\");\n+ SearchContext context = SearchContext.current();\n+ if (context == null) {\n+ throw new QueryParsingException(parseContext.index(), \"[\" + name + \"] query and filter requires a search context\");\n+ }\n+\n+ if (TransportShardDeleteByQueryAction.DELETE_BY_QUERY_API.equals(context.source())) {\n+ throw new QueryParsingException(parseContext.index(), \"[\" + name + \"] query and filter unsupported in delete_by_query api\");\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/index/query/QueryParserUtils.java", "status": "modified" }, { "diff": "@@ -0,0 +1,114 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.bwcompat;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchBackwardsCompatIntegrationTest;\n+import org.junit.BeforeClass;\n+import org.junit.Test;\n+\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.core.Is.is;\n+\n+/**\n+ */\n+public class ParentChildDeleteByQueryBackwardsCompatibilityTest extends ElasticsearchBackwardsCompatIntegrationTest {\n+\n+ @BeforeClass\n+ public static void checkVersion() {\n+ assumeTrue(\"parent child queries in delete by query is forbidden from 1.1.2 and up\", globalCompatibilityVersion().onOrBefore(Version.V_1_1_1));\n+ }\n+\n+ @Override\n+ public void assertAllShardsOnNodes(String index, String pattern) {\n+ super.assertAllShardsOnNodes(index, pattern);\n+ }\n+\n+ @Override\n+ protected Settings externalNodeSettings(int nodeOrdinal) {\n+ return ImmutableSettings.builder()\n+ .put(super.externalNodeSettings(nodeOrdinal))\n+ .put(\"index.translog.disable_flush\", true)\n+ .build();\n+ }\n+\n+ @Test\n+ public void testHasChild() throws Exception {\n+ assertAcked(prepareCreate(\"idx\")\n+ .setSettings(ImmutableSettings.builder()\n+ .put(indexSettings())\n+ .put(\"index.refresh_interval\", \"-1\")\n+ .put(\"index.routing.allocation.exclude._name\", backwardsCluster().newNodePattern())\n+ )\n+ .addMapping(\"parent\")\n+ .addMapping(\"child\", \"_parent\", \"type=parent\"));\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(\"idx\", \"parent\", \"1\").setSource(\"{}\"));\n+ requests.add(client().prepareIndex(\"idx\", \"child\", \"1\").setParent(\"1\").setSource(\"{}\"));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(\"idx\")\n+ .setQuery(hasChildQuery(\"child\", matchAllQuery()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+\n+ client().prepareDeleteByQuery(\"idx\")\n+ .setQuery(hasChildQuery(\"child\", matchAllQuery()))\n+ .get();\n+ refresh();\n+\n+ response = client().prepareSearch(\"idx\")\n+ .setQuery(hasChildQuery(\"child\", matchAllQuery()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 0);\n+\n+ client().prepareIndex(\"idx\", \"type\", \"1\").setSource(\"{}\").get();\n+ assertThat(client().prepareGet(\"idx\", \"type\", \"1\").get().isExists(), is(true));\n+\n+ backwardsCluster().upgradeAllNodes();\n+ backwardsCluster().allowOnAllNodes(\"idx\");\n+ ensureGreen(\"idx\");\n+\n+ response = client().prepareSearch(\"idx\")\n+ .setQuery(hasChildQuery(\"child\", matchAllQuery()))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1); // The delete by query has failed on recovery so that parent doc is still there\n+\n+ // But the rest of the recovery did execute, we just skipped over the delete by query with the p/c query.\n+ assertThat(client().prepareGet(\"idx\", \"type\", \"1\").get().isExists(), is(true));\n+ response = 
client().prepareSearch(\"idx\").setTypes(\"type\").get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+ }\n+\n+}", "filename": "src/test/java/org/elasticsearch/bwcompat/ParentChildDeleteByQueryBackwardsCompatibilityTest.java", "status": "added" } ] }
{ "body": "I've seen the stacktrace below in our integration tests a couple of time. We're starting elasticsearch as an embedded node. The error appears to be non fatal since our integration tests pass anyway. I've seen this stacktrace twice in the past week but can't reproduce it reliably.\n\nWe are running our maven tests concurrently and in randomized order, so there are a lot of integration tests hitting our elasticsearch node all at once right after it starts and reports a green status.\n\nUsing elasticsearch 1.4.0 Beta1\n\n17-10-2014T15:56:40+0200 W warmer - [test-node-gstJI] [inbot_users_v27][2] failed to load random access for [_type:usercontact]\norg.elasticsearch.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException\n at org.elasticsearch.common.cache.LocalCache$Segment.get(LocalCache.java:2203) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.common.cache.LocalCache.get(LocalCache.java:3937) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache.getAndLoadIfNotPresent(FixedBitSetFilterCache.java:132) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache.access$100(FixedBitSetFilterCache.java:75) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache$FixedBitSetFilterWarmer$1.run(FixedBitSetFilterCache.java:284) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0]\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0]\n at java.lang.Thread.run(Thread.java:744) [na:1.8.0]\nCaused by: java.lang.NullPointerException: null\n at org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache$2.call(FixedBitSetFilterCache.java:157) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilterCache$2.call(FixedBitSetFilterCache.java:132) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282) ~[elasticsearch-1.4.0.Beta1.jar:na]\n at org.elasticsearch.common.cache.LocalCache$Segment.get(LocalCache.java:2197) ~[elasticsearch-1.4.0.Beta1.jar:na]\n ... 8 common frames omitted\n", "comments": [ { "body": "thanks for reporting @jillesvangurp - we'll take a look\n", "created_at": "2014-10-17T14:17:48Z" }, { "body": "@jillesvangurp I think I found a small issue where there is a small window of time that a field is unset, but the warmer that needs is running. In order to confirm this can you share how you start the embedded node and run your test? (for example you wait for green status before running?) 
Sharing code snippets how the node is brought up before the tests run would even be more helpful.\n", "created_at": "2014-10-17T15:24:26Z" }, { "body": "Sure no problem:\n\n```\n String defaultIndexDirectory = \"target/data-\"+UUID.randomUUID().toString();\n String indexDir = config.getString(\"estestserver.indexdir\",defaultIndexDirectory);\n String logDir = config.getString(\"estestserver.logdir\",defaultIndexDirectory + \"/logs\");\n String esPort = config.getString(\"estestserver.port\",\"9299\");\n File file = new File(defaultIndexDirectory);\n file.mkdirs();\n LOG.info(\"using \" + file.getAbsolutePath() + \" for es data and logs\");\n Settings settings = ImmutableSettings.settingsBuilder()\n .put(\"name\", \"test-node-\"+RandomStringUtils.randomAlphabetic(5))\n .put(\"cluster.name\", \"linko-dev-cluster-\"+RandomStringUtils.randomAlphabetic(5))\n .put(\"index.gateway.type\", \"none\")\n .put(\"gateway.type\", \"none\")\n .put(\"discovery.zen.ping.multicast.ping.enabled\", \"false\")\n .put(\"discovery.zen.ping.multicast.enabled\", \"false\")\n .put(\"path.data\", indexDir)\n .put(\"path.logs\", logDir)\n .put(\"foreground\", \"true\")\n .put(\"http.port\", esPort)\n .put(\"http.cors.enabled\", \"true\")\n .put(\"http.cors.allow-origin\",\"/https?:\\\\/\\\\/(localhost|kibana.*\\\\.linko\\\\.io)(:[0-9]+)?/\")\n .build();\n\n LOG.info(settings.toDelimitedString(';'));\n\n NodeBuilder nodeBuilder = NodeBuilder.nodeBuilder()\n .settings(settings)\n .loadConfigSettings(false);\n node = nodeBuilder\n .build();\n\n\n // register a shutdown hook\n Runtime.getRuntime().addShutdownHook(new Thread() {\n @Override\n public void run() {\n node.close();\n }\n });\n node.start();\n\n // wait until the shards are ready\n node.client().admin().cluster().prepareHealth().setWaitForGreenStatus().execute().actionGet();\n```\n", "created_at": "2014-10-17T15:28:37Z" }, { "body": "One additional bit of information that I just realized may be relevant here is that we have a parent child relation between user and usercontact. So, the exception is happening when it is doing something with the child type. For reference, here's a gist with the full mapping for the index: https://gist.github.com/jillesvangurp/d0cd29573b876f9cc4d3\n", "created_at": "2014-10-18T10:39:42Z" }, { "body": "Also, we're using testNg and surefire. The elasticsearch node is started in a @BeforeSuite. We use a very large threadcount and randomized order to surface any issues related to inter test dependencies and stability of our system. This pretty much means all our integration test classes are starting at the same time. We generally use randomized test data and there are a lot of calls to /_refresh to ensure indices are committed in each tests. \n\n```\n<plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-surefire-plugin</artifactId>\n <configuration>\n <parallel>classes</parallel>\n <threadCount>50</threadCount>\n <runOrder>random</runOrder>\n <argLine>-Xms1024m -Xmx2048m</argLine>\n <properties>\n <property>\n <name>listener</name>\n <value>io.linko.ng.testutil.TestProgressLogger</value>\n </property>\n </properties>\n </configuration>\n </plugin>\n```\n", "created_at": "2014-10-18T10:50:34Z" }, { "body": "Thanks for the provided information @jillesvangurp \n\nI opened: #8168 for this issue. Are you able to verify if the non fatal NPE doesn't occur any more with this fix in your test infrastructure?\n", "created_at": "2014-10-20T14:59:15Z" }, { "body": "No problem. 
Unfortunately, most of our builds don't trigger this exception, so it is a bit hard for me to confirm. I've only spotted it twice out of dozens of test runs over the past week. If you issue another beta, I'll be able to depend on that at least.\n", "created_at": "2014-10-20T15:11:16Z" } ], "number": 8140, "title": "non-fatal NPE in warmer" }
{ "body": "Add warmer listener only when index service is set, in order to prevent possible NPE.\n\nThe IndicesWarmer gets set before the InternalIndexService gets set, which can lead to a small time window were InternalIndexService isn't set\n\nCloses #8140\n", "number": 8168, "review_comments": [], "title": "In fixed bitset service fix order where the warmer listener is added." }
{ "commits": [ { "message": "Core: Add warmer listener only when index service is set, in order to prevent possible NPE.\n\nThe IndicesWarmer gets set before the InternalIndexService gets set, which can lead to a small time window were InternalIndexService isn't set" } ], "files": [ { "diff": "@@ -94,6 +94,10 @@ public FixedBitSetFilterCache(Index index, @IndexSettings Settings indexSettings\n @Inject(optional = true)\n public void setIndicesWarmer(IndicesWarmer indicesWarmer) {\n this.indicesWarmer = indicesWarmer;\n+ }\n+\n+ public void setIndexService(InternalIndexService indexService) {\n+ this.indexService = indexService;\n indicesWarmer.addListener(warmer);\n }\n \n@@ -164,10 +168,6 @@ public Value call() throws Exception {\n }).fixedBitSet;\n }\n \n- public void setIndexService(InternalIndexService indexService) {\n- this.indexService = indexService;\n- }\n-\n @Override\n public void onRemoval(RemovalNotification<Object, Cache<Filter, Value>> notification) {\n Object key = notification.getKey();\n@@ -283,10 +283,10 @@ public void run() {\n final long start = System.nanoTime();\n getAndLoadIfNotPresent(filterToWarm, ctx);\n if (indexShard.warmerService().logger().isTraceEnabled()) {\n- indexShard.warmerService().logger().trace(\"warmed random access for [{}], took [{}]\", filterToWarm, TimeValue.timeValueNanos(System.nanoTime() - start));\n+ indexShard.warmerService().logger().trace(\"warmed fixed bitset for [{}], took [{}]\", filterToWarm, TimeValue.timeValueNanos(System.nanoTime() - start));\n }\n } catch (Throwable t) {\n- indexShard.warmerService().logger().warn(\"failed to load random access for [{}]\", t, filterToWarm);\n+ indexShard.warmerService().logger().warn(\"failed to load fixed bitset for [{}]\", t, filterToWarm);\n } finally {\n latch.countDown();\n }", "filename": "src/main/java/org/elasticsearch/index/cache/fixedbitset/FixedBitSetFilterCache.java", "status": "modified" } ] }
{ "body": "To reproduce, download latest 1.3.x or 1.4 beta and update config/elasticsearch.yml to include:\naction.auto_create_index: +willwork*\n\nThen create a requests file that contains:\n\n```\n{ \"index\" : { \"_index\" : \"willwork\", \"_type\" : \"type1\", \"_id\" : \"1\" } }\n{ \"field1\" : \"value1\" }\n{ \"index\" : { \"_index\" : \"noway\", \"_type\" : \"type1\", \"_id\" : \"1\" } }\n{ \"field1\" : \"value1\" }\n```\n\nRun the command to bulk insert:\n\n```\ncurl -s -XPOST localhost:9200/_bulk --data-binary @requests; echo\n```\n\nThe command hangs and doesn't return. \n", "comments": [ { "body": "thanks for the succinct report @ppearcy - will look into it\n", "created_at": "2014-10-17T05:28:55Z" }, { "body": "Reproduced and I have a fix - just need to write the automated test for our test suite as it obviously failed to be a case we covered.\n", "created_at": "2014-10-17T17:23:03Z" } ], "number": 8125, "title": "Bulk request hangs when one index can be auto created an another cannot" }
{ "body": "If a bulk request contains a mix of indexing requests for an existing index and one that needs to be auto-created but a cluster configuration prevents the auto-create of the new index, the ingest process hangs. The exception for the failure to create an index was not caught or reported back properly. Added a Junit test to recreate the issue and the associated fix is in TransportBulkAction.\n\nCloses #8125\n", "number": 8163, "review_comments": [ { "body": "I think we simplify this code by merging the two use cases. The only difference is the exception that goes into the BulkItemResponse.Failure object. Can we have that as a variable instead of isClosed , i.e., `Exception unavailableException = null` and later check:\n\n```\n if (unavailableException != null) {\n BulkItemResponse.Failure failure = new BulkItemResponse.Failure(request.index(), request.type(), request.id(), unavailableException);\n BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, \"index\", failure);\n responses.set(idx, bulkItemResponse);\n // make sure the request gets never processed again\n bulkRequest.requests.set(idx, null);\n }\n```\n", "created_at": "2014-10-21T19:15:52Z" } ], "title": "Handle failed request when auto create index is disabled" }
{ "commits": [ { "message": "Bulk indexing: Fix 8125 hanged request when auto create index is off.\nIf a bulk request contains a mix of indexing requests for an existing index and one that needs to be auto-created but a cluster configuration prevents the auto-create of the new index the ingest process hangs. The exception for the failure to create an index was not caught or reported back properly. Added a Junit test to recreate the issue and the associated fix is in TransportBulkAction.\n\nCloses #8125" } ], "files": [ { "diff": "@@ -53,11 +53,16 @@\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndexAlreadyExistsException;\n import org.elasticsearch.indices.IndexClosedException;\n+import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n \n-import java.util.*;\n+import java.util.HashMap;\n+import java.util.List;\n+import java.util.Locale;\n+import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.atomic.AtomicInteger;\n \n /**\n@@ -117,7 +122,11 @@ protected void doExecute(final BulkRequest bulkRequest, final ActionListener<Bul\n @Override\n public void onResponse(CreateIndexResponse result) {\n if (counter.decrementAndGet() == 0) {\n- executeBulk(bulkRequest, startTime, listener, responses);\n+ try {\n+ executeBulk(bulkRequest, startTime, listener, responses);\n+ } catch (Throwable t) {\n+ listener.onFailure(t);\n+ }\n }\n }\n \n@@ -205,7 +214,7 @@ private void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n if (request instanceof DocumentRequest) {\n DocumentRequest req = (DocumentRequest) request;\n \n- if (addFailureIfIndexIsClosed(req, bulkRequest, responses, i, concreteIndices, metaData)) {\n+ if (addFailureIfIndexIsUnavailable(req, bulkRequest, responses, i, concreteIndices, metaData)) {\n continue;\n }\n \n@@ -344,31 +353,38 @@ private void finishHim() {\n }\n }\n \n- private boolean addFailureIfIndexIsClosed(DocumentRequest request, BulkRequest bulkRequest, AtomicArray<BulkItemResponse> responses, int idx,\n+ private boolean addFailureIfIndexIsUnavailable(DocumentRequest request, BulkRequest bulkRequest, AtomicArray<BulkItemResponse> responses, int idx,\n final ConcreteIndices concreteIndices,\n final MetaData metaData) {\n String concreteIndex = concreteIndices.getConcreteIndex(request.index());\n- boolean isClosed = false;\n+ Exception unavailableException = null;\n if (concreteIndex == null) {\n try {\n concreteIndex = concreteIndices.resolveIfAbsent(request.index(), request.indicesOptions());\n } catch (IndexClosedException ice) {\n- isClosed = true;\n+ unavailableException = ice;\n+ } catch (IndexMissingException ime) {\n+ // Fix for issue where bulk request references an index that\n+ // cannot be auto-created see issue #8125\n+ unavailableException = ime;\n }\n }\n- if (!isClosed) {\n+ if (unavailableException == null) {\n IndexMetaData indexMetaData = metaData.index(concreteIndex);\n- isClosed = indexMetaData.getState() == IndexMetaData.State.CLOSE;\n+ if (indexMetaData.getState() == IndexMetaData.State.CLOSE) {\n+ unavailableException = new IndexClosedException(new Index(metaData.index(request.index()).getIndex()));\n+ }\n }\n- if (isClosed) {\n+ if (unavailableException != null) {\n BulkItemResponse.Failure failure = new BulkItemResponse.Failure(request.index(), request.type(), request.id(),\n- new IndexClosedException(new 
Index(metaData.index(request.index()).getIndex())));\n+ unavailableException);\n BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, \"index\", failure);\n responses.set(idx, bulkItemResponse);\n // make sure the request gets never processed again\n bulkRequest.requests.set(idx, null);\n+ return true;\n }\n- return isClosed;\n+ return false;\n }\n \n ", "filename": "src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,54 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.bulk;\n+\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n+import org.junit.Test;\n+\n+@ClusterScope(scope = Scope.TEST, numDataNodes = 0)\n+public class BulkProcessorClusterSettingsTests extends ElasticsearchIntegrationTest {\n+\n+ @Test\n+ public void testBulkProcessorAutoCreateRestrictions() throws Exception {\n+ // See issue #8125\n+ Settings settings = ImmutableSettings.settingsBuilder().put(\"action.auto_create_index\", false).build();\n+\n+ internalCluster().startNode(settings);\n+\n+ createIndex(\"willwork\");\n+ client().admin().cluster().prepareHealth(\"willwork\").setWaitForGreenStatus().execute().actionGet();\n+\n+ BulkRequestBuilder bulkRequestBuilder = client().prepareBulk();\n+ bulkRequestBuilder.add(client().prepareIndex(\"willwork\", \"type1\", \"1\").setSource(\"{\\\"foo\\\":1}\"));\n+ bulkRequestBuilder.add(client().prepareIndex(\"wontwork\", \"type1\", \"2\").setSource(\"{\\\"foo\\\":2}\"));\n+ bulkRequestBuilder.add(client().prepareIndex(\"willwork\", \"type1\", \"3\").setSource(\"{\\\"foo\\\":3}\"));\n+ BulkResponse br = bulkRequestBuilder.get();\n+ BulkItemResponse[] responses = br.getItems();\n+ assertEquals(3, responses.length);\n+ assertFalse(\"Operation on existing index should succeed\", responses[0].isFailed());\n+ assertTrue(\"Missing index should have been flagged\", responses[1].isFailed());\n+ assertEquals(\"IndexMissingException[[wontwork] missing]\", responses[1].getFailureMessage());\n+ assertFalse(\"Operation on existing index should succeed\", responses[2].isFailed());\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/action/bulk/BulkProcessorClusterSettingsTests.java", "status": "added" } ] }
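The hang in #8125 and the overall shape of the fix — give every failed item its own failure response up front and null it out so only the healthy items are executed — can be illustrated with a small standalone sketch. `Item`, `indexAvailable`, and `executeBulk` are invented names; this is not the TransportBulkAction code, only the same bookkeeping pattern.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of per-item failure handling in a bulk request.
public class BulkFailureSketch {

    static class Item {
        final String index;
        final String doc;
        Item(String index, String doc) { this.index = index; this.doc = doc; }
    }

    static boolean indexAvailable(String index) {
        // Stand-in for "exists or may be auto-created".
        return "willwork".equals(index);
    }

    static List<String> executeBulk(List<Item> items) {
        String[] responses = new String[items.size()];
        List<Item> requests = new ArrayList<>(items);

        // Phase 1: fail unavailable items immediately instead of silently dropping them.
        for (int i = 0; i < requests.size(); i++) {
            Item item = requests.get(i);
            if (!indexAvailable(item.index)) {
                responses[i] = "FAILED: [" + item.index + "] missing";
                requests.set(i, null); // make sure this request is never processed again
            }
        }

        // Phase 2: execute whatever is left.
        for (int i = 0; i < requests.size(); i++) {
            Item item = requests.get(i);
            if (item != null) {
                responses[i] = "OK: indexed into [" + item.index + "]";
            }
        }
        return Arrays.asList(responses);
    }

    public static void main(String[] args) {
        List<Item> bulk = Arrays.asList(
                new Item("willwork", "{\"foo\":1}"),
                new Item("wontwork", "{\"foo\":2}"),
                new Item("willwork", "{\"foo\":3}"));
        executeBulk(bulk).forEach(System.out::println);
    }
}
```

The key property is that the response array always ends up with one entry per request, so the bulk call can complete even when some indices cannot be created.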
{ "body": "As far as I can see since 1.3.3 when query is using \"lenient\" parameter it yelds results when we have format based failures (at least in 1.3.2 it does not return any results).\nWhen searching by non-existing subfields (i.e. \"field.nonexisting\") no errors and no results are found (which seems to be correct)\nThe problem comes when I combine queries from both types - with \"lenient\" and format failure and with non-existing fields in a \"should\" boolean query\nI'm tested with 1.3.4 version and 1.4.0-beta1\nHere are the structure, test data and queries I tested with:\nStructure\nhttps://gist.github.com/shoteff/3ee9f2ad320410375c91\nData\nhttps://gist.github.com/shoteff/f5c23bd6bb85f6fecdc9\nQueries - here I also described what is working and what not\nhttps://gist.github.com/shoteff/20fa2411ede09068ce95\n", "comments": [ { "body": "Simple demo:\n\n```\nDELETE t\n\nPUT /t/test/1\n{\n \"id\": 1,\n \"title\": \"this is test1\"\n}\n\nGET /_validate/query?explain\n{\n \"query\": {\n \"bool\": {\n \"should\": [\n {\n \"simple_query_string\": {\n \"query\": \"this\",\n \"fields\": [\n \"id\",\n \"title\"\n ],\n \"default_operator\": \"and\",\n \"lenient\": true\n }\n }\n ]\n }\n }\n}\n```\n\nThe above results in a match-all query:\n\n```\n{\n \"valid\": true,\n \"_shards\": {\n \"total\": 1,\n \"successful\": 1,\n \"failed\": 0\n },\n \"explanations\": [\n {\n \"index\": \"t\",\n \"valid\": true,\n \"explanation\": \"*:*\"\n }\n ]\n}\n```\n", "created_at": "2014-10-14T21:01:03Z" } ], "number": 7967, "title": "Leniency makes simple_query_string query a match_all" }
{ "body": "Previously, the leniency was on a per-query basis, with each query being\nparsed into multiple queries, one for each field. If any one of these\nqueries failed, the entire query was discarded in the name of being\nlenient.\n\nNow query parts will only be discarded if they fail for a particular\nfield, the entire query is not discarded. This helps when performing a\nquery over a numeric and string field, as only the sub-queries that are\ninvalid due to format exceptions will be discarded.\n\nAlso moves the `simple_query_string` queries out of SimpleQueryTests and\ninto a dedicated SimpleQueryStringTests class.\n\nFixes #7967\n", "number": 8162, "review_comments": [], "title": "Make simple_query_string leniency more fine-grained" }
{ "commits": [ { "message": "Make simple_query_string leniency more fine-grained\n\nPreviously, the leniency was on a per-query basis, with each query being\nparsed into multiple queries, one for each field. If any one of these\nqueries failed, the entire query was discarded in the name of being\nlenient.\n\nNow query parts will only be discarded if they fail for a particular\nfield, the entire query is not discarded. This helps when performing a\nquery over a numeric and string field, as only the sub-queries that are\ninvalid due to format exceptions will be discarded.\n\nAlso moves the `simple_query_string` queries out of SimpleQueryTests and\ninto a dedicated SimpleQueryStringTests class.\n\nFixes #7967" } ], "files": [ { "diff": "@@ -19,7 +19,8 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.analysis.Analyzer;\n-import org.apache.lucene.search.Query;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.search.*;\n \n import java.util.Locale;\n import java.util.Map;\n@@ -50,11 +51,19 @@ private Query rethrowUnlessLenient(RuntimeException e) {\n \n @Override\n public Query newDefaultQuery(String text) {\n- try {\n- return super.newDefaultQuery(text);\n- } catch (RuntimeException e) {\n- return rethrowUnlessLenient(e);\n+ BooleanQuery bq = new BooleanQuery(true);\n+ for (Map.Entry<String,Float> entry : weights.entrySet()) {\n+ try {\n+ Query q = createBooleanQuery(entry.getKey(), text, super.getDefaultOperator());\n+ if (q != null) {\n+ q.setBoost(entry.getValue());\n+ bq.add(q, BooleanClause.Occur.SHOULD);\n+ }\n+ } catch (RuntimeException e) {\n+ rethrowUnlessLenient(e);\n+ }\n }\n+ return super.simplify(bq);\n }\n \n /**\n@@ -66,20 +75,36 @@ public Query newFuzzyQuery(String text, int fuzziness) {\n if (settings.lowercaseExpandedTerms()) {\n text = text.toLowerCase(settings.locale());\n }\n- try {\n- return super.newFuzzyQuery(text, fuzziness);\n- } catch (RuntimeException e) {\n- return rethrowUnlessLenient(e);\n+ BooleanQuery bq = new BooleanQuery(true);\n+ for (Map.Entry<String,Float> entry : weights.entrySet()) {\n+ try {\n+ Query q = new FuzzyQuery(new Term(entry.getKey(), text), fuzziness);\n+ if (q != null) {\n+ q.setBoost(entry.getValue());\n+ bq.add(q, BooleanClause.Occur.SHOULD);\n+ }\n+ } catch (RuntimeException e) {\n+ rethrowUnlessLenient(e);\n+ }\n }\n+ return super.simplify(bq);\n }\n \n @Override\n public Query newPhraseQuery(String text, int slop) {\n- try {\n- return super.newPhraseQuery(text, slop);\n- } catch (RuntimeException e) {\n- return rethrowUnlessLenient(e);\n+ BooleanQuery bq = new BooleanQuery(true);\n+ for (Map.Entry<String,Float> entry : weights.entrySet()) {\n+ try {\n+ Query q = createPhraseQuery(entry.getKey(), text, slop);\n+ if (q != null) {\n+ q.setBoost(entry.getValue());\n+ bq.add(q, BooleanClause.Occur.SHOULD);\n+ }\n+ } catch (RuntimeException e) {\n+ rethrowUnlessLenient(e);\n+ }\n }\n+ return super.simplify(bq);\n }\n \n /**\n@@ -91,11 +116,17 @@ public Query newPrefixQuery(String text) {\n if (settings.lowercaseExpandedTerms()) {\n text = text.toLowerCase(settings.locale());\n }\n- try {\n- return super.newPrefixQuery(text);\n- } catch (RuntimeException e) {\n- return rethrowUnlessLenient(e);\n+ BooleanQuery bq = new BooleanQuery(true);\n+ for (Map.Entry<String,Float> entry : weights.entrySet()) {\n+ try {\n+ PrefixQuery prefix = new PrefixQuery(new Term(entry.getKey(), text));\n+ prefix.setBoost(entry.getValue());\n+ bq.add(prefix, BooleanClause.Occur.SHOULD);\n+ } catch (RuntimeException e) {\n+ 
return rethrowUnlessLenient(e);\n+ }\n }\n+ return super.simplify(bq);\n }\n \n /**", "filename": "src/main/java/org/elasticsearch/index/query/SimpleQueryParser.java", "status": "modified" }, { "diff": "@@ -0,0 +1,269 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.query;\n+\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.index.query.BoolQueryBuilder;\n+import org.elasticsearch.index.query.SimpleQueryStringBuilder;\n+import org.elasticsearch.index.query.SimpleQueryStringFlag;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.Locale;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.boolQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.queryString;\n+import static org.elasticsearch.index.query.QueryBuilders.simpleQueryString;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ * Tests for the {@code simple_query_string} query\n+ */\n+public class SimpleQueryStringTests extends ElasticsearchIntegrationTest {\n+\n+ @Test\n+ public void testSimpleQueryString() throws ExecutionException, InterruptedException {\n+ createIndex(\"test\");\n+ indexRandom(true, false,\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"foo\"),\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"body\", \"bar\"),\n+ client().prepareIndex(\"test\", \"type1\", \"3\").setSource(\"body\", \"foo bar\"),\n+ client().prepareIndex(\"test\", \"type1\", \"4\").setSource(\"body\", \"quux baz eggplant\"),\n+ client().prepareIndex(\"test\", \"type1\", \"5\").setSource(\"body\", \"quux baz spaghetti\"),\n+ client().prepareIndex(\"test\", \"type1\", \"6\").setSource(\"otherbody\", \"spaghetti\"));\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"foo bar\")).get();\n+ assertHitCount(searchResponse, 3l);\n+ assertSearchHits(searchResponse, \"1\", \"2\", \"3\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"foo bar\").defaultOperator(SimpleQueryStringBuilder.Operator.AND)).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"3\"));\n+\n+ searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"\\\"quux baz\\\" +(eggplant | spaghetti)\")).get();\n+ assertHitCount(searchResponse, 2l);\n+ assertSearchHits(searchResponse, \"4\", 
\"5\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"eggplants\").analyzer(\"snowball\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"4\"));\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"spaghetti\").field(\"body\", 10.0f).field(\"otherbody\", 2.0f).queryName(\"myquery\")).get();\n+ assertHitCount(searchResponse, 2l);\n+ assertFirstHit(searchResponse, hasId(\"5\"));\n+ assertSearchHits(searchResponse, \"5\", \"6\");\n+ assertThat(searchResponse.getHits().getAt(0).getMatchedQueries()[0], equalTo(\"myquery\"));\n+\n+ searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"spaghetti\").field(\"*body\")).get();\n+ assertHitCount(searchResponse, 2l);\n+ assertSearchHits(searchResponse, \"5\", \"6\");\n+\n+ // Have to bypass the builder here because the builder always uses \"fields\" instead of \"field\"\n+ searchResponse = client().prepareSearch().setQuery(\"{\\\"simple_query_string\\\": {\\\"query\\\": \\\"spaghetti\\\", \\\"field\\\": \\\"_all\\\"}}\").get();\n+ assertHitCount(searchResponse, 2l);\n+ assertSearchHits(searchResponse, \"5\", \"6\");\n+ }\n+\n+ @Test\n+ public void testSimpleQueryStringLowercasing() {\n+ createIndex(\"test\");\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"Professional\").get();\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"Professio*\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"Professio*\").lowercaseExpandedTerms(false)).get();\n+ assertHitCount(searchResponse, 0l);\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"Professionan~1\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"Professionan~1\").lowercaseExpandedTerms(false)).get();\n+ assertHitCount(searchResponse, 0l);\n+ }\n+\n+ @Test\n+ public void testQueryStringLocale() {\n+ createIndex(\"test\");\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"bılly\").get();\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"BILL*\")).get();\n+ assertHitCount(searchResponse, 0l);\n+ searchResponse = client().prepareSearch().setQuery(queryString(\"body:BILL*\")).get();\n+ assertHitCount(searchResponse, 0l);\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"BILL*\").locale(new Locale(\"tr\", \"TR\"))).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+ searchResponse = client().prepareSearch().setQuery(\n+ queryString(\"body:BILL*\").locale(new Locale(\"tr\", \"TR\"))).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+ }\n+\n+ @Test\n+ public void testNestedFieldSimpleQueryString() throws IOException {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type1\", jsonBuilder()\n+ .startObject()\n+ .startObject(\"type1\")\n+ .startObject(\"properties\")\n+ .startObject(\"body\").field(\"type\", \"string\")\n+ .startObject(\"fields\")\n+ .startObject(\"sub\").field(\"type\", \"string\")\n+ .endObject() // sub\n+ .endObject() // fields\n+ .endObject() // body\n+ .endObject() // properties\n+ .endObject() // type1\n+ 
.endObject()));\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"foo bar baz\").get();\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"foo bar baz\").field(\"body\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"foo bar baz\").field(\"type1.body\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"foo bar baz\").field(\"body.sub\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"foo bar baz\").field(\"type1.body.sub\")).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+ }\n+\n+ @Test\n+ public void testSimpleQueryStringFlags() throws ExecutionException, InterruptedException {\n+ createIndex(\"test\");\n+ indexRandom(true,\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"foo\"),\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"body\", \"bar\"),\n+ client().prepareIndex(\"test\", \"type1\", \"3\").setSource(\"body\", \"foo bar\"),\n+ client().prepareIndex(\"test\", \"type1\", \"4\").setSource(\"body\", \"quux baz eggplant\"),\n+ client().prepareIndex(\"test\", \"type1\", \"5\").setSource(\"body\", \"quux baz spaghetti\"),\n+ client().prepareIndex(\"test\", \"type1\", \"6\").setSource(\"otherbody\", \"spaghetti\"));\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"foo bar\").flags(SimpleQueryStringFlag.ALL)).get();\n+ assertHitCount(searchResponse, 3l);\n+ assertSearchHits(searchResponse, \"1\", \"2\", \"3\");\n+\n+ // Sending a negative 'flags' value is the same as SimpleQueryStringFlag.ALL\n+ searchResponse = client().prepareSearch().setQuery(\"{\\\"simple_query_string\\\": {\\\"query\\\": \\\"foo bar\\\", \\\"flags\\\": -1}}\").get();\n+ assertHitCount(searchResponse, 3l);\n+ assertSearchHits(searchResponse, \"1\", \"2\", \"3\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"foo | bar\")\n+ .defaultOperator(SimpleQueryStringBuilder.Operator.AND)\n+ .flags(SimpleQueryStringFlag.OR)).get();\n+ assertHitCount(searchResponse, 3l);\n+ assertSearchHits(searchResponse, \"1\", \"2\", \"3\");\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"foo | bar\")\n+ .defaultOperator(SimpleQueryStringBuilder.Operator.AND)\n+ .flags(SimpleQueryStringFlag.NONE)).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"3\"));\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"baz | egg*\")\n+ .defaultOperator(SimpleQueryStringBuilder.Operator.AND)\n+ .flags(SimpleQueryStringFlag.NONE)).get();\n+ assertHitCount(searchResponse, 0l);\n+\n+ searchResponse = client().prepareSearch().setSource(\"{\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"simple_query_string\\\": {\\n\" +\n+ \" \\\"query\\\": \\\"foo|bar\\\",\\n\" +\n+ \" \\\"default_operator\\\": \\\"AND\\\",\" +\n+ \" \\\"flags\\\": \\\"NONE\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\").get();\n+ assertHitCount(searchResponse, 1l);\n+\n+ searchResponse = client().prepareSearch().setQuery(\n+ simpleQueryString(\"baz | egg*\")\n+ 
.defaultOperator(SimpleQueryStringBuilder.Operator.AND)\n+ .flags(SimpleQueryStringFlag.WHITESPACE, SimpleQueryStringFlag.PREFIX)).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertFirstHit(searchResponse, hasId(\"4\"));\n+ }\n+\n+ @Test\n+ public void testSimpleQueryStringLenient() throws ExecutionException, InterruptedException {\n+ createIndex(\"test1\", \"test2\");\n+ indexRandom(true, client().prepareIndex(\"test1\", \"type1\", \"1\").setSource(\"field\", \"foo\"),\n+ client().prepareIndex(\"test2\", \"type1\", \"10\").setSource(\"field\", 5));\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"foo\").field(\"field\")).get();\n+ assertFailures(searchResponse);\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+\n+ searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"foo\").field(\"field\").lenient(true)).get();\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+ }\n+\n+ @Test // see: https://github.com/elasticsearch/elasticsearch/issues/7967\n+ public void testLenientFlagBeingTooLenient() throws Exception {\n+ indexRandom(true,\n+ client().prepareIndex(\"test\", \"doc\", \"1\").setSource(\"num\", 1, \"body\", \"foo bar baz\"),\n+ client().prepareIndex(\"test\", \"doc\", \"2\").setSource(\"num\", 2, \"body\", \"eggplant spaghetti lasagna\"));\n+\n+ BoolQueryBuilder q = boolQuery().should(simpleQueryString(\"bar\").field(\"num\").field(\"body\").lenient(true));\n+ SearchResponse resp = client().prepareSearch(\"test\").setQuery(q).get();\n+ assertNoFailures(resp);\n+ // the bug is that this would be parsed into basically a match_all\n+ // query and this would match both documents\n+ assertHitCount(resp, 1);\n+ assertSearchHits(resp, \"1\");\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryStringTests.java", "status": "added" }, { "diff": "@@ -2071,215 +2071,6 @@ private static FilterBuilder rangeFilter(String field, Object from, Object to) {\n }\n }\n \n- @Test\n- public void testSimpleQueryString() throws ExecutionException, InterruptedException {\n- createIndex(\"test\");\n- indexRandom(true, false,\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"foo\"),\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"body\", \"bar\"),\n- client().prepareIndex(\"test\", \"type1\", \"3\").setSource(\"body\", \"foo bar\"),\n- client().prepareIndex(\"test\", \"type1\", \"4\").setSource(\"body\", \"quux baz eggplant\"),\n- client().prepareIndex(\"test\", \"type1\", \"5\").setSource(\"body\", \"quux baz spaghetti\"),\n- client().prepareIndex(\"test\", \"type1\", \"6\").setSource(\"otherbody\", \"spaghetti\"));\n-\n- SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"foo bar\")).get();\n- assertHitCount(searchResponse, 3l);\n- assertSearchHits(searchResponse, \"1\", \"2\", \"3\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"foo bar\").defaultOperator(SimpleQueryStringBuilder.Operator.AND)).get();\n- assertHitCount(searchResponse, 1l);\n- assertFirstHit(searchResponse, hasId(\"3\"));\n-\n- searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"\\\"quux baz\\\" +(eggplant | spaghetti)\")).get();\n- assertHitCount(searchResponse, 2l);\n- assertSearchHits(searchResponse, \"4\", \"5\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- 
simpleQueryString(\"eggplants\").analyzer(\"snowball\")).get();\n- assertHitCount(searchResponse, 1l);\n- assertFirstHit(searchResponse, hasId(\"4\"));\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"spaghetti\").field(\"body\", 10.0f).field(\"otherbody\", 2.0f).queryName(\"myquery\")).get();\n- assertHitCount(searchResponse, 2l);\n- assertFirstHit(searchResponse, hasId(\"5\"));\n- assertSearchHits(searchResponse, \"5\", \"6\");\n- assertThat(searchResponse.getHits().getAt(0).getMatchedQueries()[0], equalTo(\"myquery\"));\n-\n- searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"spaghetti\").field(\"*body\")).get();\n- assertHitCount(searchResponse, 2l);\n- assertSearchHits(searchResponse, \"5\", \"6\");\n-\n- // Have to bypass the builder here because the builder always uses \"fields\" instead of \"field\"\n- searchResponse = client().prepareSearch().setQuery(\"{\\\"simple_query_string\\\": {\\\"query\\\": \\\"spaghetti\\\", \\\"field\\\": \\\"_all\\\"}}\").get();\n- assertHitCount(searchResponse, 2l);\n- assertSearchHits(searchResponse, \"5\", \"6\");\n- }\n-\n- @Test\n- public void testSimpleQueryStringLowercasing() {\n- createIndex(\"test\");\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"Professional\").get();\n- refresh();\n-\n- SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"Professio*\")).get();\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"Professio*\").lowercaseExpandedTerms(false)).get();\n- assertHitCount(searchResponse, 0l);\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"Professionan~1\")).get();\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"Professionan~1\").lowercaseExpandedTerms(false)).get();\n- assertHitCount(searchResponse, 0l);\n- }\n-\n- @Test\n- public void testQueryStringLocale() {\n- createIndex(\"test\");\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"bılly\").get();\n- refresh();\n-\n- SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"BILL*\")).get();\n- assertHitCount(searchResponse, 0l);\n- searchResponse = client().prepareSearch().setQuery(queryString(\"body:BILL*\")).get();\n- assertHitCount(searchResponse, 0l);\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"BILL*\").locale(new Locale(\"tr\", \"TR\"))).get();\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n- searchResponse = client().prepareSearch().setQuery(\n- queryString(\"body:BILL*\").locale(new Locale(\"tr\", \"TR\"))).get();\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n- }\n-\n- @Test\n- public void testNestedFieldSimpleQueryString() throws IOException {\n- assertAcked(prepareCreate(\"test\")\n- .addMapping(\"type1\", jsonBuilder()\n- .startObject()\n- .startObject(\"type1\")\n- .startObject(\"properties\")\n- .startObject(\"body\").field(\"type\", \"string\")\n- .startObject(\"fields\")\n- .startObject(\"sub\").field(\"type\", \"string\")\n- .endObject() // sub\n- .endObject() // fields\n- .endObject() // body\n- .endObject() // properties\n- .endObject() // type1\n- .endObject()));\n- client().prepareIndex(\"test\", \"type1\", 
\"1\").setSource(\"body\", \"foo bar baz\").get();\n- refresh();\n-\n- SearchResponse searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"foo bar baz\").field(\"body\")).get();\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"foo bar baz\").field(\"type1.body\")).get();\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"foo bar baz\").field(\"body.sub\")).get();\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"foo bar baz\").field(\"type1.body.sub\")).get();\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n- }\n-\n- @Test\n- public void testSimpleQueryStringFlags() throws ExecutionException, InterruptedException {\n- createIndex(\"test\");\n- indexRandom(true,\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"body\", \"foo\"),\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"body\", \"bar\"),\n- client().prepareIndex(\"test\", \"type1\", \"3\").setSource(\"body\", \"foo bar\"),\n- client().prepareIndex(\"test\", \"type1\", \"4\").setSource(\"body\", \"quux baz eggplant\"),\n- client().prepareIndex(\"test\", \"type1\", \"5\").setSource(\"body\", \"quux baz spaghetti\"),\n- client().prepareIndex(\"test\", \"type1\", \"6\").setSource(\"otherbody\", \"spaghetti\"));\n-\n- SearchResponse searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"foo bar\").flags(SimpleQueryStringFlag.ALL)).get();\n- assertHitCount(searchResponse, 3l);\n- assertSearchHits(searchResponse, \"1\", \"2\", \"3\");\n-\n- // Sending a negative 'flags' value is the same as SimpleQueryStringFlag.ALL\n- searchResponse = client().prepareSearch().setQuery(\"{\\\"simple_query_string\\\": {\\\"query\\\": \\\"foo bar\\\", \\\"flags\\\": -1}}\").get();\n- assertHitCount(searchResponse, 3l);\n- assertSearchHits(searchResponse, \"1\", \"2\", \"3\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"foo | bar\")\n- .defaultOperator(SimpleQueryStringBuilder.Operator.AND)\n- .flags(SimpleQueryStringFlag.OR)).get();\n- assertHitCount(searchResponse, 3l);\n- assertSearchHits(searchResponse, \"1\", \"2\", \"3\");\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"foo | bar\")\n- .defaultOperator(SimpleQueryStringBuilder.Operator.AND)\n- .flags(SimpleQueryStringFlag.NONE)).get();\n- assertHitCount(searchResponse, 1l);\n- assertFirstHit(searchResponse, hasId(\"3\"));\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"baz | egg*\")\n- .defaultOperator(SimpleQueryStringBuilder.Operator.AND)\n- .flags(SimpleQueryStringFlag.NONE)).get();\n- assertHitCount(searchResponse, 0l);\n-\n- searchResponse = client().prepareSearch().setSource(\"{\\n\" +\n- \" \\\"query\\\": {\\n\" +\n- \" \\\"simple_query_string\\\": {\\n\" +\n- \" \\\"query\\\": \\\"foo|bar\\\",\\n\" +\n- \" \\\"default_operator\\\": \\\"AND\\\",\" +\n- \" \\\"flags\\\": \\\"NONE\\\"\\n\" +\n- \" }\\n\" +\n- \" }\\n\" +\n- \"}\").get();\n- assertHitCount(searchResponse, 1l);\n-\n- searchResponse = client().prepareSearch().setQuery(\n- simpleQueryString(\"baz | egg*\")\n- .defaultOperator(SimpleQueryStringBuilder.Operator.AND)\n- 
.flags(SimpleQueryStringFlag.WHITESPACE, SimpleQueryStringFlag.PREFIX)).get();\n- assertHitCount(searchResponse, 1l);\n- assertFirstHit(searchResponse, hasId(\"4\"));\n- }\n-\n- @Test\n- public void testSimpleQueryStringLenient() throws ExecutionException, InterruptedException {\n- createIndex(\"test1\", \"test2\");\n- indexRandom(true, client().prepareIndex(\"test1\", \"type1\", \"1\").setSource(\"field\", \"foo\"),\n- client().prepareIndex(\"test2\", \"type1\", \"10\").setSource(\"field\", 5));\n- refresh();\n-\n- SearchResponse searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"foo\").field(\"field\")).get();\n- assertFailures(searchResponse);\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n-\n- searchResponse = client().prepareSearch().setQuery(simpleQueryString(\"foo\").field(\"field\").lenient(true)).get();\n- assertNoFailures(searchResponse);\n- assertHitCount(searchResponse, 1l);\n- assertSearchHits(searchResponse, \"1\");\n- }\n-\n @Test\n public void testDateProvidedAsNumber() throws ExecutionException, InterruptedException {\n createIndex(\"test\");", "filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java", "status": "modified" } ] }
{ "body": "With #6066 we added index throttling when merges cannot keep up, which is important since this ensures index remains healthy (does not develop ridiculous number of segments).\n\nIt works by watching the number of merges that need to run, and if this exceeds max_merge_count, it starts throttling.\n\nHowever, max_merge_count is dynamically updatable, but when you update it dynamically, the index throttling doesn't notice and keeps throttling at the original max_merge_count (on ES startup).\n", "comments": [], "number": 8132, "title": "Core: dynamic updates to max_merge_count is ignored by index throttling" }
{ "body": "Today, index throttling won't notice any dynamic/live changes to max_merge_count.\n\nSo, I just fixed the throttle code to ask the MergeSchedulerProvider for its maxMergeCount every time a merge starts/finishes.\n\nThis means after a dynamic change, it will be the next merge that starts/finishes until the throttling notices the change. We could also install an UpdateSettingsListener to force throttling to notice the change immediately, but that's more complex and I think this simple solution is sufficient.\n\nCloses #8132\n", "number": 8136, "review_comments": [], "title": "Dynamic changes to `max_merge_count` are now picked up by index throttling" }
{ "commits": [ { "message": "pull maxNumMerges each time a merge starts/finishes" } ], "files": [ { "diff": "@@ -272,7 +272,7 @@ public void start() throws EngineException {\n try {\n this.indexWriter = createWriter();\n mergeScheduler.removeListener(this.throttle);\n- this.throttle = new IndexThrottle(mergeScheduler.getMaxMerges(), logger);\n+ this.throttle = new IndexThrottle(mergeScheduler, logger);\n mergeScheduler.addListener(throttle);\n } catch (IOException e) {\n maybeFailEngine(e, \"start\");\n@@ -844,7 +844,7 @@ public void flush(Flush flush) throws EngineException {\n currentIndexWriter().close(false);\n indexWriter = createWriter();\n mergeScheduler.removeListener(this.throttle);\n- this.throttle = new IndexThrottle(mergeScheduler.getMaxMerges(), this.logger);\n+ this.throttle = new IndexThrottle(mergeScheduler, this.logger);\n mergeScheduler.addListener(throttle);\n // commit on a just opened writer will commit even if there are no changes done to it\n // we rely on that for the commit data translog id key\n@@ -1722,13 +1722,13 @@ private static final class IndexThrottle implements MergeSchedulerProvider.Liste\n private final InternalLock lockReference = new InternalLock(new ReentrantLock());\n private final AtomicInteger numMergesInFlight = new AtomicInteger(0);\n private final AtomicBoolean isThrottling = new AtomicBoolean();\n- private final int maxNumMerges;\n+ private final MergeSchedulerProvider mergeScheduler;\n private final ESLogger logger;\n \n private volatile InternalLock lock = NOOP_LOCK;\n \n- public IndexThrottle(int maxNumMerges, ESLogger logger) {\n- this.maxNumMerges = maxNumMerges;\n+ public IndexThrottle(MergeSchedulerProvider mergeScheduler, ESLogger logger) {\n+ this.mergeScheduler = mergeScheduler;\n this.logger = logger;\n }\n \n@@ -1738,6 +1738,7 @@ public Releasable acquireThrottle() {\n \n @Override\n public synchronized void beforeMerge(OnGoingMerge merge) {\n+ int maxNumMerges = mergeScheduler.getMaxMerges();\n if (numMergesInFlight.incrementAndGet() > maxNumMerges) {\n if (isThrottling.getAndSet(true) == false) {\n logger.info(\"now throttling indexing: numMergesInFlight={}, maxNumMerges={}\", numMergesInFlight, maxNumMerges);\n@@ -1748,6 +1749,7 @@ public synchronized void beforeMerge(OnGoingMerge merge) {\n \n @Override\n public synchronized void afterMerge(OnGoingMerge merge) {\n+ int maxNumMerges = mergeScheduler.getMaxMerges();\n if (numMergesInFlight.decrementAndGet() < maxNumMerges) {\n if (isThrottling.getAndSet(false)) {\n logger.info(\"stop throttling indexing: numMergesInFlight={}, maxNumMerges={}\", numMergesInFlight, maxNumMerges);", "filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java", "status": "modified" } ] }
{ "body": "This issue does not exist on master because facets were removed, but it persists on 1.x where they're only deprecated. Chances are that this method is rarely used in this form.\n\n``` java\nsourceBuilder().facets(aggregations, aggregationsOffset, aggregationsLength);\n```\n\n`facets` should be `aggregations`.\n", "comments": [ { "body": "Fixed by #8121\n", "created_at": "2015-06-19T22:14:11Z" } ], "number": 8120, "title": "SearchRequestBuilder passes aggregation bytes as facet bytes" }
{ "body": "This is already fixed in master thanks to the removal of facets.\n\nCloses #8120\n", "number": 8121, "review_comments": [], "title": "Fixing SearchRequestBuilder aggregations call to facets" }
{ "commits": [ { "message": "Fixing aggregation call to facets\n\nThis is already fixed in master thanks to the removal of facets." } ], "files": [ { "diff": "@@ -620,7 +620,7 @@ public SearchRequestBuilder setAggregations(byte[] aggregations) {\n * Sets a raw (xcontent) binary representation of addAggregation to use.\n */\n public SearchRequestBuilder setAggregations(byte[] aggregations, int aggregationsOffset, int aggregationsLength) {\n- sourceBuilder().facets(aggregations, aggregationsOffset, aggregationsLength);\n+ sourceBuilder().aggregations(aggregations, aggregationsOffset, aggregationsLength);\n return this;\n }\n ", "filename": "src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java", "status": "modified" } ] }
{ "body": "This is incredibly minor, and it really only helps with client debug that is unlikely to be needed. The method with the signature:\n\n``` java\npublic SearchRequest extraSource(Map extraSource)\n```\n\nIncludes the re-thrown exception:\n\n``` java\nthrow new ElasticsearchGenerationException(\"Failed to generate [\" + source + \"]\", e);\n```\n\nwhere `source` is the internal field due to copy/paste.\n", "comments": [], "number": 8117, "title": "SearchRequest - Map extraSource exception uses source field accidently" }
{ "body": "Uses the `extraSource` parameter for debug instead of the `source` field.\n\nCloses #8117\n", "number": 8118, "review_comments": [], "title": "Fixing copy/paste mistake in SearchRequest.extraSource's exception message" }
{ "commits": [ { "message": "Fixing copy/paste mistake in SearchRequest.extraSource's exception message." } ], "files": [ { "diff": "@@ -371,7 +371,7 @@ public SearchRequest extraSource(Map extraSource) {\n builder.map(extraSource);\n return extraSource(builder);\n } catch (IOException e) {\n- throw new ElasticsearchGenerationException(\"Failed to generate [\" + source + \"]\", e);\n+ throw new ElasticsearchGenerationException(\"Failed to generate [\" + extraSource + \"]\", e);\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/action/search/SearchRequest.java", "status": "modified" } ] }
{ "body": "We currently cancel recoveries when the shard is no longer assigned to the target node, or the primary shard (source of copying) is moved to another node (and there are more scenarios). That cancel logic doesn't clean up any temporary files created during the recovery.\n\nNormally that's not a problem as the files will be cleaned up once the shard is safely recovered somewhere else (or locally). However, if one runs into continuous failure cycles we can fill up disk space, causing bigger problems like corrupting other shards on the node.\n", "comments": [ { "body": "fixed with #8092 \n", "created_at": "2014-11-03T12:01:03Z" } ], "number": 7893, "title": "Resiliency: Cancelling a recovery may leave temporary files behind" }
{ "body": "The PR rewrites the state controls in the RecoveryTarget family classes to make it easier to guarantee that:\n- recovery resources are only cleared once there are no ongoing requests\n- recovery is automatically canceled when the target shard is closed/removed\n- canceled recoveries do not leave temp files behind when canceled. \n\nHighlights of the change:\n1) All temporary files are cleared upon failure/cancel (see #7315 )\n2) All newly created files are always temporary \n3) Doesn't list local files on the cluster state update thread (which throw unwanted exception)\n4) Recoveries are canceled by a listener to IndicesLifecycle.beforeIndexShardClosed, so we don't need to explicitly call it.\n5) Simplifies RecoveryListener to only notify when a recovery is done or failed. Removed subtleties like ignore and retry (they are dealt with internally)\n\nRelates to #7893\n", "number": 8092, "review_comments": [ { "body": "can we get a javadoc string what this class does?\n", "created_at": "2014-10-16T08:24:08Z" }, { "body": "I assume the `onIgnoreRecovery` is unused?\n", "created_at": "2014-10-16T08:24:26Z" }, { "body": "can be make this a `putIfAbsent` and assert it's not there?\n", "created_at": "2014-10-16T08:25:20Z" }, { "body": "extra newline here?\n", "created_at": "2014-10-16T08:28:50Z" }, { "body": "this file header looks wrong\n", "created_at": "2014-10-16T08:29:07Z" }, { "body": "should this go into the ctor?\n", "created_at": "2014-10-16T08:29:28Z" }, { "body": "unrelated but I think `openIndexOutputs` can be final and no need to be volatile?\n", "created_at": "2014-10-16T08:43:34Z" }, { "body": "make finished `private final`?\n", "created_at": "2014-10-16T08:43:49Z" }, { "body": "can we return and assert that the CAS op was successful?\n", "created_at": "2014-10-16T08:45:43Z" }, { "body": "you also need to catch `NoSuchFileException` it's OS dependent\n", "created_at": "2014-10-16T08:47:40Z" }, { "body": "just for kicks I think we should inc the refcount on the store here before we access it\n", "created_at": "2014-10-16T08:48:20Z" }, { "body": "what happens if this rename doesn't work here?\n", "created_at": "2014-10-16T08:49:44Z" }, { "body": "make this final?\n", "created_at": "2014-10-16T08:51:32Z" }, { "body": "decRef should happen in a finally \n", "created_at": "2014-10-16T08:51:58Z" }, { "body": "make this a hard exception?\n", "created_at": "2014-10-16T08:52:29Z" }, { "body": "prevent double closing here? maybe you can reuse the `finished.compareAndSet(false, true)` pattern?\n", "created_at": "2014-10-16T08:53:35Z" }, { "body": "I wonder if we can somehow factor this refcoutning logic out into a util class. 
something like\n\n``` Java\n\npublic class RefCounted {\n\npublic final void decRef() {\n//...\n}\n\npublic final boolean tryIncRef() {\n//...\n}\n\npublic final void incRef() {\n//...\n}\n\npublic interface CloseListener {\n public void close(); // called when we reach 0\n}\n}\n```\n\nI think we can then also just use this in `Store.java`?\n", "created_at": "2014-10-16T08:57:45Z" }, { "body": "I think you can just do:\n\n``` Java\nIOUtils.closeWhileHandlingException(openIndexOutputs.values());\nopenIndexOutputs.clear();\n```\n", "created_at": "2014-10-16T09:02:32Z" }, { "body": "any reason why we don't do this inside the try/finally?\n", "created_at": "2014-10-16T09:03:24Z" }, { "body": "nevermind it gets updated\n", "created_at": "2014-10-16T09:06:36Z" }, { "body": "I think since you change the finally part you should do this like:\n\n``` Java\ntry {\n Store.verify(indexOutput);\n} finally {\n indexOutput.close();\n}\n```\n\njust to make sure we are closing the stream asap\n", "created_at": "2014-10-16T09:09:57Z" }, { "body": "Yes. it is now removed from the listener interface.\n", "created_at": "2014-10-16T11:17:39Z" }, { "body": "will do\n", "created_at": "2014-10-16T11:17:49Z" }, { "body": "Argh IntelliJ. will fix.\n", "created_at": "2014-10-16T11:18:23Z" }, { "body": "Yes, it now can (given the new access patterns). Good point.\n", "created_at": "2014-10-16T11:18:57Z" }, { "body": "+1. will do.\n", "created_at": "2014-10-16T11:19:14Z" }, { "body": "I can definitely return the value. I'm a bit conflicted regarding the assert as strictly speaking we can't guarantee it will work due to the retry logic, which may set the thread before the clear command of the previous thread has run. In practice it shouldn't be a problem because it only kicks in after 500ms. But still, I'm not sure it adds value to assert here?\n", "created_at": "2014-10-16T11:26:52Z" }, { "body": "KK. I only copied the old code. Will change.\n", "created_at": "2014-10-16T11:27:23Z" }, { "body": "Maybe better is to throw an exception if the ref count of the local object is <0 (which guarantees the store is kept alive)? Semantically you should only call methods on this object when having a ref count. \n", "created_at": "2014-10-16T11:29:12Z" }, { "body": "Then we should fail the shard imho. I copied the old code. I'll double check that this is what happens.\n", "created_at": "2014-10-16T11:30:36Z" } ], "title": "Refactor RecoveryTarget state management" }
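The reviewer's RefCounted suggestion above could look roughly like the sketch below: a counter that starts at one, refuses new references once it reaches zero, and runs a close callback exactly once. This is only an illustration of the idea (the class name RefCountedSketch is invented), not necessarily how such a helper was eventually implemented.

``` java
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of a reusable ref counter: starts at 1, invokes a callback when it reaches 0. */
abstract class RefCountedSketch {
    private final AtomicInteger refCount = new AtomicInteger(1);

    /** called exactly once, when the count drops to zero */
    protected abstract void closeInternal();

    public final void incRef() {
        if (tryIncRef() == false) {
            throw new IllegalStateException("already closed, can't increment refCount");
        }
    }

    public final boolean tryIncRef() {
        do {
            int current = refCount.get();
            if (current <= 0) {
                return false;                         // already closed
            }
            if (refCount.compareAndSet(current, current + 1)) {
                return true;
            }
        } while (true);
    }

    public final void decRef() {
        int updated = refCount.decrementAndGet();
        assert updated >= 0 : "refCount went negative";
        if (updated == 0) {
            closeInternal();
        }
    }
}
```

RecoveryStatus (and Store) could then extend such a class and put their cleanup — closing open index outputs, deleting temp files, releasing the store — in closeInternal(), which matches the tryIncRef/decRef/closeInternal structure visible in the diff that follows.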
{ "commits": [ { "message": "Recovery: clean up temporary files when canceling recovery\n\nAt the moment, we leave around temporary files if a peer (replica) recovery is canceled. Those files will normally be cleaned up once the shard is started else but in case of errors this can lead to trouble. If recovery are started and canceled often, we may cause nodes to run out of disk space.\n\nCloses #7893" }, { "message": "temp file names registry - not there yet." }, { "message": "wip" }, { "message": "Some more cleanup and java docs" }, { "message": "Beter encapsulate temporary files" }, { "message": "Fix compilation after rebasing to 1.x" }, { "message": "testCancellationCleansTempFiles: use assertBusy to verify all files were cleaned\n\nThese are now background processes.." }, { "message": "Feedback round" }, { "message": "moved package line" }, { "message": "one more private final" }, { "message": "Fail recovery on every error while listing local files." } ], "files": [ { "diff": "@@ -40,7 +40,6 @@\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.recovery.RecoveryState;\n-import org.elasticsearch.indices.recovery.RecoveryStatus;\n import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n@@ -144,19 +143,15 @@ protected ShardRecoveryResponse shardOperation(ShardRecoveryRequest request) thr\n \n InternalIndexService indexService = (InternalIndexService) indicesService.indexServiceSafe(request.shardId().getIndex());\n InternalIndexShard indexShard = (InternalIndexShard) indexService.shardSafe(request.shardId().id());\n- ShardRouting shardRouting = indexShard.routingEntry();\n ShardRecoveryResponse shardRecoveryResponse = new ShardRecoveryResponse(request.shardId());\n \n- RecoveryState state;\n- RecoveryStatus recoveryStatus = indexShard.recoveryStatus();\n+ RecoveryState state = indexShard.recoveryState();\n \n- if (recoveryStatus == null) {\n- recoveryStatus = recoveryTarget.recoveryStatus(indexShard);\n+ if (state == null) {\n+ state = recoveryTarget.recoveryState(indexShard);\n }\n \n- if (recoveryStatus != null) {\n- state = recoveryStatus.recoveryState();\n- } else {\n+ if (state == null) {\n IndexShardGatewayService gatewayService =\n indexService.shardInjector(request.shardId().id()).getInstance(IndexShardGatewayService.class);\n state = gatewayService.recoveryState();\n@@ -183,7 +178,8 @@ protected ClusterBlockException checkRequestBlock(ClusterState state, RecoveryRe\n \n static class ShardRecoveryRequest extends BroadcastShardOperationRequest {\n \n- ShardRecoveryRequest() { }\n+ ShardRecoveryRequest() {\n+ }\n \n ShardRecoveryRequest(ShardId shardId, RecoveryRequest request) {\n super(shardId, request);", "filename": "src/main/java/org/elasticsearch/action/admin/indices/recovery/TransportRecoveryAction.java", "status": "modified" }, { "diff": "@@ -180,13 +180,13 @@ protected ShardStatus shardOperation(IndexShardStatusRequest request) throws Ela\n \n if (request.recovery) {\n // check on going recovery (from peer or gateway)\n- RecoveryStatus peerRecoveryStatus = indexShard.recoveryStatus();\n- if (peerRecoveryStatus == null) {\n- peerRecoveryStatus = peerRecoveryTarget.recoveryStatus(indexShard);\n+ RecoveryState peerRecoveryState = indexShard.recoveryState();\n+ if (peerRecoveryState == null) {\n+ peerRecoveryState = peerRecoveryTarget.recoveryState(indexShard);\n }\n- if 
(peerRecoveryStatus != null) {\n+ if (peerRecoveryState != null) {\n PeerRecoveryStatus.Stage stage;\n- switch (peerRecoveryStatus.stage()) {\n+ switch (peerRecoveryState.getStage()) {\n case INIT:\n stage = PeerRecoveryStatus.Stage.INIT;\n break;\n@@ -205,11 +205,11 @@ protected ShardStatus shardOperation(IndexShardStatusRequest request) throws Ela\n default:\n stage = PeerRecoveryStatus.Stage.INIT;\n }\n- shardStatus.peerRecoveryStatus = new PeerRecoveryStatus(stage, peerRecoveryStatus.recoveryState().getTimer().startTime(),\n- peerRecoveryStatus.recoveryState().getTimer().time(),\n- peerRecoveryStatus.recoveryState().getIndex().totalByteCount(),\n- peerRecoveryStatus.recoveryState().getIndex().reusedByteCount(),\n- peerRecoveryStatus.recoveryState().getIndex().recoveredByteCount(), peerRecoveryStatus.recoveryState().getTranslog().currentTranslogOperations());\n+ shardStatus.peerRecoveryStatus = new PeerRecoveryStatus(stage, peerRecoveryState.getTimer().startTime(),\n+ peerRecoveryState.getTimer().time(),\n+ peerRecoveryState.getIndex().totalByteCount(),\n+ peerRecoveryState.getIndex().reusedByteCount(),\n+ peerRecoveryState.getIndex().recoveredByteCount(), peerRecoveryState.getTranslog().currentTranslogOperations());\n }\n \n IndexShardGatewayService gatewayService = indexService.shardInjector(request.shardId().id()).getInstance(IndexShardGatewayService.class);", "filename": "src/main/java/org/elasticsearch/action/admin/indices/status/TransportIndicesStatusAction.java", "status": "modified" }, { "diff": "@@ -62,6 +62,7 @@ public IndexShardGatewayService(ShardId shardId, @IndexSettings Settings indexSe\n this.shardGateway = shardGateway;\n this.snapshotService = snapshotService;\n this.recoveryState = new RecoveryState(shardId);\n+ this.recoveryState.setType(RecoveryState.Type.GATEWAY);\n this.clusterService = clusterService;\n }\n ", "filename": "src/main/java/org/elasticsearch/index/gateway/IndexShardGatewayService.java", "status": "modified" }, { "diff": "@@ -89,7 +89,7 @@\n import org.elasticsearch.index.warmer.WarmerStats;\n import org.elasticsearch.indices.IndicesLifecycle;\n import org.elasticsearch.indices.InternalIndicesLifecycle;\n-import org.elasticsearch.indices.recovery.RecoveryStatus;\n+import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.search.suggest.completion.Completion090PostingsFormat;\n import org.elasticsearch.search.suggest.completion.CompletionStats;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -146,7 +146,8 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I\n private volatile ScheduledFuture mergeScheduleFuture;\n private volatile ShardRouting shardRouting;\n \n- private RecoveryStatus recoveryStatus;\n+ @Nullable\n+ private RecoveryState recoveryState;\n \n private ApplyRefreshSettings applyRefreshSettings = new ApplyRefreshSettings();\n \n@@ -733,15 +734,15 @@ public void performRecoveryPrepareForTranslog() throws ElasticsearchException {\n }\n \n /**\n- * The peer recovery status if this shard recovered from a peer shard.\n+ * The peer recovery state if this shard recovered from a peer shard, null o.w.\n */\n- public RecoveryStatus recoveryStatus() {\n- return this.recoveryStatus;\n+ public RecoveryState recoveryState() {\n+ return this.recoveryState;\n }\n \n- public void performRecoveryFinalization(boolean withFlush, RecoveryStatus recoveryStatus) throws ElasticsearchException {\n+ public void performRecoveryFinalization(boolean withFlush, RecoveryState recoveryState) throws 
ElasticsearchException {\n performRecoveryFinalization(withFlush);\n- this.recoveryStatus = recoveryStatus;\n+ this.recoveryState = recoveryState;\n }\n \n public void performRecoveryFinalization(boolean withFlush) throws ElasticsearchException {", "filename": "src/main/java/org/elasticsearch/index/shard/service/InternalIndexShard.java", "status": "modified" }, { "diff": "@@ -61,9 +61,10 @@\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n-import org.elasticsearch.index.store.Store;\n import org.elasticsearch.indices.IndicesService;\n-import org.elasticsearch.indices.recovery.*;\n+import org.elasticsearch.indices.recovery.RecoveryFailedException;\n+import org.elasticsearch.indices.recovery.RecoveryState;\n+import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.util.HashMap;\n@@ -559,19 +560,18 @@ private void applyNewOrUpdatedShards(final ClusterChangedEvent event) throws Ela\n boolean shardHasBeenRemoved = false;\n if (currentRoutingEntry.initializing() && shardRouting.initializing() && !currentRoutingEntry.equals(shardRouting)) {\n logger.debug(\"[{}][{}] removing shard (different instance of it allocated on this node, current [{}], global [{}])\", shardRouting.index(), shardRouting.id(), currentRoutingEntry, shardRouting);\n- // cancel recovery just in case we are in recovery (its fine if we are not in recovery, it will be a noop).\n- recoveryTarget.cancelRecovery(indexShard);\n+ // closing the shard will also cancel any ongoing recovery.\n indexService.removeShard(shardRouting.id(), \"removing shard (different instance of it allocated on this node)\");\n shardHasBeenRemoved = true;\n } else if (isPeerRecovery(shardRouting)) {\n // check if there is an existing recovery going, and if so, and the source node is not the same, cancel the recovery to restart it\n- RecoveryStatus recoveryStatus = recoveryTarget.recoveryStatus(indexShard);\n- if (recoveryStatus != null && recoveryStatus.stage() != RecoveryState.Stage.DONE) {\n+ RecoveryState recoveryState = recoveryTarget.recoveryState(indexShard);\n+ if (recoveryState != null && recoveryState.getStage() != RecoveryState.Stage.DONE) {\n // we have an ongoing recovery, find the source based on current routing and compare them\n DiscoveryNode sourceNode = findSourceNodeForPeerRecovery(routingTable, nodes, shardRouting);\n- if (!recoveryStatus.sourceNode().equals(sourceNode)) {\n+ if (!recoveryState.getSourceNode().equals(sourceNode)) {\n logger.debug(\"[{}][{}] removing shard (recovery source changed), current [{}], global [{}])\", shardRouting.index(), shardRouting.id(), currentRoutingEntry, shardRouting);\n- recoveryTarget.cancelRecovery(indexShard);\n+ // closing the shard will also cancel any ongoing recovery.\n indexService.removeShard(shardRouting.id(), \"removing shard (recovery source node changed)\");\n shardHasBeenRemoved = true;\n }\n@@ -728,17 +728,7 @@ private void applyInitializingShard(final RoutingTable routingTable, final Disco\n // the edge case where its mark as relocated, and we might need to roll it back...\n // For replicas: we are recovering a backup from a primary\n RecoveryState.Type type = shardRouting.primary() ? 
RecoveryState.Type.RELOCATION : RecoveryState.Type.REPLICA;\n- final Store store = indexShard.store();\n- final StartRecoveryRequest request;\n- store.incRef();\n- try {\n- store.failIfCorrupted();\n- request = new StartRecoveryRequest(indexShard.shardId(), sourceNode, nodes.localNode(),\n- false, store.getMetadata().asMap(), type, recoveryIdGenerator.incrementAndGet());\n- } finally {\n- store.decRef();\n- }\n- recoveryTarget.startRecovery(request, indexShard, new PeerRecoveryListener(request, shardRouting, indexService, indexMetaData));\n+ recoveryTarget.startRecovery(indexShard, type, sourceNode, new PeerRecoveryListener(shardRouting, indexService, indexMetaData));\n \n } catch (Throwable e) {\n indexShard.engine().failEngine(\"corrupted preexisting index\", e);\n@@ -808,68 +798,41 @@ private boolean isPeerRecovery(ShardRouting shardRouting) {\n \n private class PeerRecoveryListener implements RecoveryTarget.RecoveryListener {\n \n- private final StartRecoveryRequest request;\n private final ShardRouting shardRouting;\n private final IndexService indexService;\n private final IndexMetaData indexMetaData;\n \n- private PeerRecoveryListener(StartRecoveryRequest request, ShardRouting shardRouting, IndexService indexService, IndexMetaData indexMetaData) {\n- this.request = request;\n+ private PeerRecoveryListener(ShardRouting shardRouting, IndexService indexService, IndexMetaData indexMetaData) {\n this.shardRouting = shardRouting;\n this.indexService = indexService;\n this.indexMetaData = indexMetaData;\n }\n \n @Override\n- public void onRecoveryDone() {\n- shardStateAction.shardStarted(shardRouting, indexMetaData.getUUID(), \"after recovery (replica) from node [\" + request.sourceNode() + \"]\");\n- }\n-\n- @Override\n- public void onRetryRecovery(TimeValue retryAfter, RecoveryStatus recoveryStatus) {\n- recoveryTarget.retryRecovery(request, retryAfter, recoveryStatus, PeerRecoveryListener.this);\n- }\n-\n- @Override\n- public void onIgnoreRecovery(boolean removeShard, String reason) {\n- if (!removeShard) {\n- return;\n- }\n- synchronized (mutex) {\n- if (indexService.hasShard(shardRouting.shardId().id())) {\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"[{}][{}] removing shard on ignored recovery, reason [{}]\", shardRouting.index(), shardRouting.shardId().id(), reason);\n- }\n- try {\n- indexService.removeShard(shardRouting.shardId().id(), \"ignore recovery: \" + reason);\n- } catch (IndexShardMissingException e) {\n- // the node got closed on us, ignore it\n- } catch (Throwable e1) {\n- logger.warn(\"[{}][{}] failed to delete shard after ignore recovery\", e1, indexService.index().name(), shardRouting.shardId().id());\n- }\n- }\n- }\n+ public void onRecoveryDone(RecoveryState state) {\n+ shardStateAction.shardStarted(shardRouting, indexMetaData.getUUID(), \"after recovery (replica) from node [\" + state.getSourceNode() + \"]\");\n }\n \n @Override\n- public void onRecoveryFailure(RecoveryFailedException e, boolean sendShardFailure) {\n+ public void onRecoveryFailure(RecoveryState state, RecoveryFailedException e, boolean sendShardFailure) {\n handleRecoveryFailure(indexService, indexMetaData, shardRouting, sendShardFailure, e);\n }\n }\n \n private void handleRecoveryFailure(IndexService indexService, IndexMetaData indexMetaData, ShardRouting shardRouting, boolean sendShardFailure, Throwable failure) {\n- logger.warn(\"[{}][{}] failed to start shard\", failure, indexService.index().name(), shardRouting.shardId().id());\n synchronized (mutex) {\n if 
(indexService.hasShard(shardRouting.shardId().id())) {\n try {\n+ logger.debug(\"[{}][{}] removing shard on failed recovery [{}]\", shardRouting.index(), shardRouting.shardId().id(), failure.getMessage());\n indexService.removeShard(shardRouting.shardId().id(), \"recovery failure [\" + ExceptionsHelper.detailedMessage(failure) + \"]\");\n } catch (IndexShardMissingException e) {\n // the node got closed on us, ignore it\n } catch (Throwable e1) {\n- logger.warn(\"[{}][{}] failed to delete shard after failed startup\", e1, indexService.index().name(), shardRouting.shardId().id());\n+ logger.warn(\"[{}][{}] failed to delete shard after recovery failure\", e1, indexService.index().name(), shardRouting.shardId().id());\n }\n }\n if (sendShardFailure) {\n+ logger.warn(\"[{}][{}] sending failed shard after recovery failure\", failure, indexService.index().name(), shardRouting.shardId().id());\n try {\n failedShards.put(shardRouting.shardId(), new FailedShard(shardRouting.version()));\n shardStateAction.shardFailed(shardRouting, indexMetaData.getUUID(), \"Failed to start shard, message [\" + detailedMessage(failure) + \"]\");", "filename": "src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,184 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.indices.recovery;\n+\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n+import org.elasticsearch.index.shard.IndexShardClosedException;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.shard.service.IndexShard;\n+import org.elasticsearch.index.shard.service.InternalIndexShard;\n+import org.elasticsearch.index.store.Store;\n+\n+import java.io.IOException;\n+import java.sql.Timestamp;\n+import java.util.Map;\n+import java.util.concurrent.ConcurrentMap;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n+/**\n+ * This class holds a collection of all on going recoveries on the current node (i.e., the node is the target node\n+ * of those recoveries). The class is used to guarantee concurrent semantics such that once a recoveries was done/cancelled/failed\n+ * no other thread will be able to find it. Last, the {@link StatusRef} inner class verifies that recovery temporary files\n+ * and store will only be cleared once on going usage is finished.\n+ */\n+public class RecoveriesCollection {\n+\n+ /** This is the single source of truth for ongoing recoveries. 
If it's not here, it was canceled or done */\n+ private final ConcurrentMap<Long, RecoveryStatus> onGoingRecoveries = ConcurrentCollections.newConcurrentMap();\n+\n+ final private ESLogger logger;\n+\n+ public RecoveriesCollection(ESLogger logger) {\n+ this.logger = logger;\n+ }\n+\n+ /**\n+ * Starts are new recovery for the given shard, source node and state\n+ *\n+ * @return the id of the new recovery.\n+ */\n+ public long startRecovery(InternalIndexShard indexShard, DiscoveryNode sourceNode, RecoveryState state, RecoveryTarget.RecoveryListener listener) {\n+ RecoveryStatus status = new RecoveryStatus(indexShard, sourceNode, state, listener);\n+ RecoveryStatus existingStatus = onGoingRecoveries.putIfAbsent(status.recoveryId(), status);\n+ assert existingStatus == null : \"found two RecoveryStatus instances with the same id\";\n+ logger.trace(\"{} started recovery from {}, id [{}]\", indexShard.shardId(), sourceNode, status.recoveryId());\n+ return status.recoveryId();\n+ }\n+\n+ /**\n+ * gets the {@link RecoveryStatus } for a given id. The RecoveryStatus returned has it's ref count already incremented\n+ * to make sure it's safe to use. However, you must call {@link RecoveryStatus#decRef()} when you are done with it, typically\n+ * by using this method in a try-with-resources clause.\n+ * <p/>\n+ * Returns null if recovery is not found\n+ */\n+ public StatusRef getStatus(long id) {\n+ RecoveryStatus status = onGoingRecoveries.get(id);\n+ if (status != null && status.tryIncRef()) {\n+ return new StatusRef(status);\n+ }\n+ return null;\n+ }\n+\n+ /** Similar to {@link #getStatus(long)} but throws an exception if no recovery is found */\n+ public StatusRef getStatusSafe(long id, ShardId shardId) {\n+ StatusRef statusRef = getStatus(id);\n+ if (statusRef == null) {\n+ throw new IndexShardClosedException(shardId);\n+ }\n+ assert statusRef.status().shardId().equals(shardId);\n+ return statusRef;\n+ }\n+\n+ /** cancel the recovery with the given id (if found) and remove it from the recovery collection */\n+ public void cancelRecovery(long id, String reason) {\n+ RecoveryStatus removed = onGoingRecoveries.remove(id);\n+ if (removed != null) {\n+ logger.trace(\"{} canceled recovery from {}, id [{}] (reason [{}])\",\n+ removed.shardId(), removed.sourceNode(), removed.recoveryId(), reason);\n+ removed.cancel(reason);\n+ }\n+ }\n+\n+ /**\n+ * fail the recovery with the given id (if found) and remove it from the recovery collection\n+ *\n+ * @param id id of the recovery to fail\n+ * @param e exception with reason for the failure\n+ * @param sendShardFailure true a shard failed message should be sent to the master\n+ */\n+ public void failRecovery(long id, RecoveryFailedException e, boolean sendShardFailure) {\n+ RecoveryStatus removed = onGoingRecoveries.remove(id);\n+ if (removed != null) {\n+ logger.trace(\"{} failing recovery from {}, id [{}]. Send shard failure: [{}]\", removed.shardId(), removed.sourceNode(), removed.recoveryId(), sendShardFailure);\n+ removed.fail(e, sendShardFailure);\n+ }\n+ }\n+\n+ /** mark the recovery with the given id as done (if found) */\n+ public void markRecoveryAsDone(long id) {\n+ RecoveryStatus removed = onGoingRecoveries.remove(id);\n+ if (removed != null) {\n+ logger.trace(\"{} marking recovery from {} as done, id [{}]\", removed.shardId(), removed.sourceNode(), removed.recoveryId());\n+ removed.markAsDone();\n+ }\n+ }\n+\n+ /**\n+ * Try to find an ongoing recovery for a given shard. 
returns null if not found.\n+ */\n+ @Nullable\n+ public StatusRef findRecoveryByShard(IndexShard indexShard) {\n+ for (RecoveryStatus recoveryStatus : onGoingRecoveries.values()) {\n+ if (recoveryStatus.indexShard() == indexShard) {\n+ if (recoveryStatus.tryIncRef()) {\n+ return new StatusRef(recoveryStatus);\n+ } else {\n+ return null;\n+ }\n+ }\n+ }\n+ return null;\n+ }\n+\n+\n+ /** cancel all ongoing recoveries for the given shard. typically because the shards is closed */\n+ public void cancelRecoveriesForShard(ShardId shardId, String reason) {\n+ for (RecoveryStatus status : onGoingRecoveries.values()) {\n+ if (status.shardId().equals(shardId)) {\n+ cancelRecovery(status.recoveryId(), reason);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * a reference to {@link RecoveryStatus}, which implements {@link AutoCloseable}. closing the reference\n+ * causes {@link RecoveryStatus#decRef()} to be called. This makes sure that the underlying resources\n+ * will not be freed until {@link RecoveriesCollection.StatusRef#close()} is called.\n+ */\n+ public static class StatusRef implements AutoCloseable {\n+\n+ private final RecoveryStatus status;\n+ private final AtomicBoolean closed = new AtomicBoolean(false);\n+\n+ /**\n+ * Important: {@link org.elasticsearch.indices.recovery.RecoveryStatus#tryIncRef()} should\n+ * be *successfully* called on status before\n+ */\n+ public StatusRef(RecoveryStatus status) {\n+ this.status = status;\n+ }\n+\n+ @Override\n+ public void close() {\n+ if (closed.compareAndSet(false, true)) {\n+ status.decRef();\n+ }\n+ }\n+\n+ public RecoveryStatus status() {\n+ return status;\n+ }\n+ }\n+}\n+", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveriesCollection.java", "status": "added" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.index.shard.ShardId;\n \n /**\n@@ -29,10 +30,22 @@\n public class RecoveryFailedException extends ElasticsearchException {\n \n public RecoveryFailedException(StartRecoveryRequest request, Throwable cause) {\n- this(request.shardId(), request.sourceNode(), request.targetNode(), cause);\n+ this(request, null, cause);\n+ }\n+\n+ public RecoveryFailedException(StartRecoveryRequest request, @Nullable String extraInfo, Throwable cause) {\n+ this(request.shardId(), request.sourceNode(), request.targetNode(), extraInfo, cause);\n+ }\n+\n+ public RecoveryFailedException(RecoveryState state, @Nullable String extraInfo, Throwable cause) {\n+ this(state.getShardId(), state.getSourceNode(), state.getTargetNode(), extraInfo, cause);\n }\n \n public RecoveryFailedException(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, Throwable cause) {\n- super(shardId + \": Recovery failed from \" + sourceNode + \" into \" + targetNode, cause);\n+ this(shardId, sourceNode, targetNode, null, cause);\n+ }\n+\n+ public RecoveryFailedException(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, @Nullable String extraInfo, Throwable cause) {\n+ super(shardId + \": Recovery failed from \" + sourceNode + \" into \" + targetNode + (extraInfo == null ? 
\"\" : \" (\" + extraInfo + \")\"), cause);\n }\n }", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryFailedException.java", "status": "modified" }, { "diff": "@@ -19,106 +19,310 @@\n \n package org.elasticsearch.indices.recovery;\n \n+import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.IOContext;\n import org.apache.lucene.store.IndexOutput;\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n import org.elasticsearch.index.store.Store;\n import org.elasticsearch.index.store.StoreFileMetaData;\n \n+import java.io.FileNotFoundException;\n import java.io.IOException;\n+import java.nio.file.NoSuchFileException;\n+import java.util.Iterator;\n+import java.util.Map;\n import java.util.Map.Entry;\n import java.util.Set;\n import java.util.concurrent.ConcurrentMap;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicLong;\n+import java.util.concurrent.atomic.AtomicReference;\n \n /**\n *\n */\n+\n+\n public class RecoveryStatus {\n \n- final ShardId shardId;\n- final long recoveryId;\n- final InternalIndexShard indexShard;\n- final RecoveryState recoveryState;\n- final DiscoveryNode sourceNode;\n+ private final ESLogger logger;\n+\n+ private final static AtomicLong idGenerator = new AtomicLong();\n+\n+ private final String RECOVERY_PREFIX = \"recovery.\";\n+\n+ private final ShardId shardId;\n+ private final long recoveryId;\n+ private final InternalIndexShard indexShard;\n+ private final RecoveryState state;\n+ private final DiscoveryNode sourceNode;\n+ private final String tempFilePrefix;\n+ private final Store store;\n+ private final RecoveryTarget.RecoveryListener listener;\n+\n+ private AtomicReference<Thread> waitingRecoveryThread = new AtomicReference<>();\n+\n+ private final AtomicBoolean finished = new AtomicBoolean();\n \n- public RecoveryStatus(long recoveryId, InternalIndexShard indexShard, DiscoveryNode sourceNode) {\n- this.recoveryId = recoveryId;\n+ // we start with 1 which will be decremented on cancel/close/failure\n+ private final AtomicInteger refCount = new AtomicInteger(1);\n+\n+ private final ConcurrentMap<String, IndexOutput> openIndexOutputs = ConcurrentCollections.newConcurrentMap();\n+ private final Store.LegacyChecksums legacyChecksums = new Store.LegacyChecksums();\n+\n+ public RecoveryStatus(InternalIndexShard indexShard, DiscoveryNode sourceNode, RecoveryState state, RecoveryTarget.RecoveryListener listener) {\n+ this.recoveryId = idGenerator.incrementAndGet();\n+ this.listener = listener;\n+ this.logger = Loggers.getLogger(getClass(), indexShard.indexSettings(), indexShard.shardId());\n this.indexShard = indexShard;\n this.sourceNode = sourceNode;\n this.shardId = indexShard.shardId();\n- this.recoveryState = new RecoveryState(shardId);\n- recoveryState.getTimer().startTime(System.currentTimeMillis());\n+ this.state = state;\n+ this.state.getTimer().startTime(System.currentTimeMillis());\n+ this.tempFilePrefix = RECOVERY_PREFIX + this.state.getTimer().startTime() + \".\";\n+ this.store = indexShard.store();\n+ // make sure the store is not released 
until we are done.\n+ store.incRef();\n }\n \n- volatile Thread recoveryThread;\n- private volatile boolean canceled;\n- volatile boolean sentCanceledToSource;\n+ private final Set<String> tempFileNames = ConcurrentCollections.newConcurrentSet();\n+\n+ public long recoveryId() {\n+ return recoveryId;\n+ }\n \n- private volatile ConcurrentMap<String, IndexOutput> openIndexOutputs = ConcurrentCollections.newConcurrentMap();\n- public final Store.LegacyChecksums legacyChecksums = new Store.LegacyChecksums();\n+ public ShardId shardId() {\n+ return shardId;\n+ }\n+\n+ public InternalIndexShard indexShard() {\n+ ensureNotFinished();\n+ return indexShard;\n+ }\n \n public DiscoveryNode sourceNode() {\n return this.sourceNode;\n }\n \n- public RecoveryState recoveryState() {\n- return recoveryState;\n+ public RecoveryState state() {\n+ return state;\n+ }\n+\n+ public Store store() {\n+ ensureNotFinished();\n+ return store;\n+ }\n+\n+ /** set a thread that should be interrupted if the recovery is canceled */\n+ public void setWaitingRecoveryThread(Thread thread) {\n+ waitingRecoveryThread.set(thread);\n+ }\n+\n+ /**\n+ * clear the thread set by {@link #setWaitingRecoveryThread(Thread)}, making sure we\n+ * do not override another thread.\n+ */\n+ public void clearWaitingRecoveryThread(Thread threadToClear) {\n+ waitingRecoveryThread.compareAndSet(threadToClear, null);\n }\n \n public void stage(RecoveryState.Stage stage) {\n- recoveryState.setStage(stage);\n+ state.setStage(stage);\n }\n \n public RecoveryState.Stage stage() {\n- return recoveryState.getStage();\n+ return state.getStage();\n }\n \n- public boolean isCanceled() {\n- return canceled;\n+ public Store.LegacyChecksums legacyChecksums() {\n+ return legacyChecksums;\n }\n- \n- public synchronized void cancel() {\n- canceled = true;\n+\n+ /** renames all temporary files to their true name, potentially overriding existing files */\n+ public void renameAllTempFiles() throws IOException {\n+ ensureNotFinished();\n+ Iterator<String> tempFileIterator = tempFileNames.iterator();\n+ final Directory directory = store.directory();\n+ while (tempFileIterator.hasNext()) {\n+ String tempFile = tempFileIterator.next();\n+ String origFile = originalNameForTempFile(tempFile);\n+ // first, go and delete the existing ones\n+ try {\n+ directory.deleteFile(origFile);\n+ } catch (NoSuchFileException e) {\n+\n+ } catch (Throwable ex) {\n+ logger.debug(\"failed to delete file [{}]\", ex, origFile);\n+ }\n+ // now, rename the files... and fail it it won't work\n+ store.renameFile(tempFile, origFile);\n+ // upon success, remove the temp file\n+ tempFileIterator.remove();\n+ }\n }\n- \n- public IndexOutput getOpenIndexOutput(String key) {\n- final ConcurrentMap<String, IndexOutput> outputs = openIndexOutputs;\n- if (canceled || outputs == null) {\n- return null;\n+\n+ /** cancel the recovery. calling this method will clean temporary files and release the store\n+ * unless this object is in use (in which case it will be cleaned once all ongoing users call\n+ * {@link #decRef()}\n+ *\n+ * if {@link #setWaitingRecoveryThread(Thread)} was used, the thread will be interrupted.\n+ */\n+ public void cancel(String reason) {\n+ if (finished.compareAndSet(false, true)) {\n+ logger.debug(\"recovery canceled (reason: [{}])\", reason);\n+ // release the initial reference. 
recovery files will be cleaned as soon as ref count goes to zero, potentially now\n+ decRef();\n+\n+ final Thread thread = waitingRecoveryThread.get();\n+ if (thread != null) {\n+ thread.interrupt();\n+ }\n }\n- return outputs.get(key);\n }\n \n- public synchronized Set<Entry<String, IndexOutput>> cancelAndClearOpenIndexInputs() {\n- cancel();\n- final ConcurrentMap<String, IndexOutput> outputs = openIndexOutputs;\n- openIndexOutputs = null;\n- if (outputs == null) {\n- return null;\n+ /**\n+ * fail the recovery and call listener\n+ *\n+ * @param e exception that encapsulating the failure\n+ * @param sendShardFailure indicates whether to notify the master of the shard failure\n+ **/\n+ public void fail(RecoveryFailedException e, boolean sendShardFailure) {\n+ if (finished.compareAndSet(false, true)) {\n+ try {\n+ listener.onRecoveryFailure(state, e, sendShardFailure);\n+ } finally {\n+ // release the initial reference. recovery files will be cleaned as soon as ref count goes to zero, potentially now\n+ decRef();\n+ }\n }\n- Set<Entry<String, IndexOutput>> entrySet = outputs.entrySet();\n- return entrySet;\n }\n- \n \n- public IndexOutput removeOpenIndexOutputs(String name) {\n- final ConcurrentMap<String, IndexOutput> outputs = openIndexOutputs;\n- if (outputs == null) {\n- return null;\n+ /** mark the current recovery as done */\n+ public void markAsDone() {\n+ if (finished.compareAndSet(false, true)) {\n+ assert tempFileNames.isEmpty() : \"not all temporary files are renamed\";\n+ // release the initial reference. recovery files will be cleaned as soon as ref count goes to zero, potentially now\n+ decRef();\n+ listener.onRecoveryDone(state);\n }\n- return outputs.remove(name);\n }\n \n- public synchronized IndexOutput openAndPutIndexOutput(String key, String fileName, StoreFileMetaData metaData, Store store) throws IOException {\n- if (isCanceled()) {\n- return null;\n+ private String getTempNameForFile(String origFile) {\n+ return tempFilePrefix + origFile;\n+ }\n+\n+ /** return true if the give file is a temporary file name issued by this recovery */\n+ private boolean isTempFile(String filename) {\n+ return tempFileNames.contains(filename);\n+ }\n+\n+ public IndexOutput getOpenIndexOutput(String key) {\n+ ensureNotFinished();\n+ return openIndexOutputs.get(key);\n+ }\n+\n+ /** returns the original file name for a temporary file name issued by this recovery */\n+ private String originalNameForTempFile(String tempFile) {\n+ if (!isTempFile(tempFile)) {\n+ throw new ElasticsearchException(\"[\" + tempFile + \"] is not a temporary file made by this recovery\");\n }\n- final ConcurrentMap<String, IndexOutput> outputs = openIndexOutputs;\n- IndexOutput indexOutput = store.createVerifyingOutput(fileName, IOContext.DEFAULT, metaData);\n- outputs.put(key, indexOutput);\n+ return tempFile.substring(tempFilePrefix.length());\n+ }\n+\n+ /** remove and {@link org.apache.lucene.store.IndexOutput} for a given file. It is the caller's responsibility to close it */\n+ public IndexOutput removeOpenIndexOutputs(String name) {\n+ ensureNotFinished();\n+ return openIndexOutputs.remove(name);\n+ }\n+\n+ /**\n+ * Creates an {@link org.apache.lucene.store.IndexOutput} for the given file name. 
Note that the\n+ * IndexOutput actually point at a temporary file.\n+ * <p/>\n+ * Note: You can use {@link #getOpenIndexOutput(String)} with the same filename to retrieve the same IndexOutput\n+ * at a later stage\n+ */\n+ public IndexOutput openAndPutIndexOutput(String fileName, StoreFileMetaData metaData, Store store) throws IOException {\n+ ensureNotFinished();\n+ String tempFileName = getTempNameForFile(fileName);\n+ // add first, before it's created\n+ tempFileNames.add(tempFileName);\n+ IndexOutput indexOutput = store.createVerifyingOutput(tempFileName, IOContext.DEFAULT, metaData);\n+ openIndexOutputs.put(fileName, indexOutput);\n return indexOutput;\n }\n+\n+ /**\n+ * Tries to increment the refCount of this RecoveryStatus instance. This method will return <tt>true</tt> iff the refCount was\n+ * incremented successfully otherwise <tt>false</tt>. Be sure to always call a corresponding {@link #decRef}, in a finally clause;\n+ *\n+ * @see #decRef()\n+ */\n+ public final boolean tryIncRef() {\n+ do {\n+ int i = refCount.get();\n+ if (i > 0) {\n+ if (refCount.compareAndSet(i, i + 1)) {\n+ return true;\n+ }\n+ } else {\n+ return false;\n+ }\n+ } while (true);\n+ }\n+\n+ /**\n+ * Decreases the refCount of this Store instance.If the refCount drops to 0, the recovery process this status represents\n+ * is seen as done and resources and temporary files are deleted.\n+ *\n+ * @see #tryIncRef\n+ */\n+ public final void decRef() {\n+ int i = refCount.decrementAndGet();\n+ assert i >= 0;\n+ if (i == 0) {\n+ closeInternal();\n+ }\n+ }\n+\n+ private void closeInternal() {\n+ try {\n+ // clean open index outputs\n+ Iterator<Entry<String, IndexOutput>> iterator = openIndexOutputs.entrySet().iterator();\n+ while (iterator.hasNext()) {\n+ Map.Entry<String, IndexOutput> entry = iterator.next();\n+ IOUtils.closeWhileHandlingException(entry.getValue());\n+ iterator.remove();\n+ }\n+ // trash temporary files\n+ for (String file : tempFileNames) {\n+ logger.trace(\"cleaning temporary file [{}]\", file);\n+ store.deleteQuiet(file);\n+ }\n+ legacyChecksums.clear();\n+ } finally {\n+ // free store. increment happens in constructor\n+ store.decRef();\n+ }\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return shardId + \" [\" + recoveryId + \"]\";\n+ }\n+\n+ private void ensureNotFinished() {\n+ if (finished.get()) {\n+ throw new ElasticsearchException(\"RecoveryStatus is used after it was finished. 
Probably a mismatch between incRef/decRef calls\");\n+ }\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java", "status": "modified" }, { "diff": "@@ -19,12 +19,13 @@\n \n package org.elasticsearch.indices.recovery;\n \n-import com.google.common.collect.Sets;\n+import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.store.AlreadyClosedException;\n-import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.IndexOutput;\n-import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.StopWatch;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -33,26 +34,25 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n-import org.elasticsearch.common.util.concurrent.ConcurrentMapLong;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.index.IndexShardMissingException;\n import org.elasticsearch.index.engine.RecoveryEngineException;\n-import org.elasticsearch.index.shard.*;\n+import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n+import org.elasticsearch.index.shard.IndexShardClosedException;\n+import org.elasticsearch.index.shard.IndexShardNotStartedException;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.index.store.StoreFileMetaData;\n import org.elasticsearch.index.translog.Translog;\n import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.indices.IndicesLifecycle;\n-import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.*;\n \n import java.util.Collections;\n-import java.util.Iterator;\n import java.util.Map;\n-import java.util.Map.Entry;\n-import java.util.Set;\n \n import static org.elasticsearch.common.unit.TimeValue.timeValueMillis;\n \n@@ -77,20 +77,20 @@ public static class Actions {\n \n private final TransportService transportService;\n \n- private final IndicesService indicesService;\n-\n private final RecoverySettings recoverySettings;\n+ private final ClusterService clusterService;\n \n- private final ConcurrentMapLong<RecoveryStatus> onGoingRecoveries = ConcurrentCollections.newConcurrentMapLong();\n+ private final RecoveriesCollection onGoingRecoveries;\n \n @Inject\n- public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService transportService, IndicesService indicesService,\n- IndicesLifecycle indicesLifecycle, RecoverySettings recoverySettings) {\n+ public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService transportService,\n+ IndicesLifecycle indicesLifecycle, RecoverySettings recoverySettings, ClusterService clusterService) {\n super(settings);\n this.threadPool = threadPool;\n this.transportService = transportService;\n- this.indicesService = indicesService;\n this.recoverySettings = recoverySettings;\n+ this.clusterService = clusterService;\n+ 
this.onGoingRecoveries = new RecoveriesCollection(logger);\n \n transportService.registerHandler(Actions.FILES_INFO, new FilesInfoRequestHandler());\n transportService.registerHandler(Actions.FILE_CHUNK, new FileChunkTransportRequestHandler());\n@@ -103,261 +103,154 @@ public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService\n @Override\n public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard) {\n if (indexShard != null) {\n- removeAndCleanOnGoingRecovery(findRecoveryByShard(indexShard));\n+ onGoingRecoveries.cancelRecoveriesForShard(shardId, \"shard closed\");\n }\n }\n });\n }\n \n- public RecoveryStatus recoveryStatus(IndexShard indexShard) {\n- RecoveryStatus recoveryStatus = findRecoveryByShard(indexShard);\n- if (recoveryStatus == null) {\n- return null;\n- }\n- if (recoveryStatus.recoveryState().getTimer().startTime() > 0 && recoveryStatus.stage() != RecoveryState.Stage.DONE) {\n- recoveryStatus.recoveryState().getTimer().time(System.currentTimeMillis() - recoveryStatus.recoveryState().getTimer().startTime());\n- }\n- return recoveryStatus;\n- }\n-\n- public void cancelRecovery(IndexShard indexShard) {\n- RecoveryStatus recoveryStatus = findRecoveryByShard(indexShard);\n- // it might be if the recovery source got canceled first\n- if (recoveryStatus == null) {\n- return;\n- }\n- if (recoveryStatus.sentCanceledToSource) {\n- return;\n- }\n- recoveryStatus.cancel();\n- try {\n- if (recoveryStatus.recoveryThread != null) {\n- recoveryStatus.recoveryThread.interrupt();\n+ public RecoveryState recoveryState(IndexShard indexShard) {\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.findRecoveryByShard(indexShard)) {\n+ if (statusRef == null) {\n+ return null;\n }\n- // give it a grace period of actually getting the sent ack part\n- final long sleepTime = 100;\n- final long maxSleepTime = 10000;\n- long rounds = Math.round(maxSleepTime / sleepTime);\n- while (!recoveryStatus.sentCanceledToSource &&\n- transportService.nodeConnected(recoveryStatus.sourceNode) &&\n- rounds > 0) {\n- rounds--;\n- try {\n- Thread.sleep(sleepTime);\n- } catch (InterruptedException e) {\n- Thread.currentThread().interrupt();\n- break; // interrupted - step out!\n- }\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ if (recoveryStatus.state().getTimer().startTime() > 0 && recoveryStatus.stage() != RecoveryState.Stage.DONE) {\n+ recoveryStatus.state().getTimer().time(System.currentTimeMillis() - recoveryStatus.state().getTimer().startTime());\n }\n- } finally {\n- removeAndCleanOnGoingRecovery(recoveryStatus);\n+ return recoveryStatus.state();\n+ } catch (Exception e) {\n+ // shouldn't really happen, but have to be here due to auto close\n+ throw new ElasticsearchException(\"error while getting recovery state\", e);\n }\n-\n }\n \n- public void startRecovery(final StartRecoveryRequest request, final InternalIndexShard indexShard, final RecoveryListener listener) {\n+ public void startRecovery(final InternalIndexShard indexShard, final RecoveryState.Type recoveryType, final DiscoveryNode sourceNode, final RecoveryListener listener) {\n try {\n- indexShard.recovering(\"from \" + request.sourceNode());\n+ indexShard.recovering(\"from \" + sourceNode);\n } catch (IllegalIndexShardStateException e) {\n // that's fine, since we might be called concurrently, just ignore this, we are already recovering\n- listener.onIgnoreRecovery(false, \"already in recovering process, \" + e.getMessage());\n+ logger.debug(\"{} ignore recovery. 
already in recovering process, {}\", indexShard.shardId(), e.getMessage());\n return;\n }\n // create a new recovery status, and process...\n- final RecoveryStatus recoveryStatus = new RecoveryStatus(request.recoveryId(), indexShard, request.sourceNode());\n- recoveryStatus.recoveryState.setType(request.recoveryType());\n- recoveryStatus.recoveryState.setSourceNode(request.sourceNode());\n- recoveryStatus.recoveryState.setTargetNode(request.targetNode());\n- recoveryStatus.recoveryState.setPrimary(indexShard.routingEntry().primary());\n- onGoingRecoveries.put(recoveryStatus.recoveryId, recoveryStatus);\n-\n- threadPool.generic().execute(new Runnable() {\n- @Override\n- public void run() {\n- doRecovery(request, recoveryStatus, listener);\n- }\n- });\n+ RecoveryState recoveryState = new RecoveryState(indexShard.shardId());\n+ recoveryState.setType(recoveryType);\n+ recoveryState.setSourceNode(sourceNode);\n+ recoveryState.setTargetNode(clusterService.localNode());\n+ recoveryState.setPrimary(indexShard.routingEntry().primary());\n+ final long recoveryId = onGoingRecoveries.startRecovery(indexShard, sourceNode, recoveryState, listener);\n+ threadPool.generic().execute(new RecoveryRunner(recoveryId));\n+\n }\n \n- public void retryRecovery(final StartRecoveryRequest request, TimeValue retryAfter, final RecoveryStatus status, final RecoveryListener listener) {\n- threadPool.schedule(retryAfter, ThreadPool.Names.GENERIC, new Runnable() {\n- @Override\n- public void run() {\n- doRecovery(request, status, listener);\n- }\n- });\n+ protected void retryRecovery(final long recoveryId, TimeValue retryAfter) {\n+ logger.trace(\"will retrying recovery with id [{}] in [{}]\", recoveryId, retryAfter);\n+ threadPool.schedule(retryAfter, ThreadPool.Names.GENERIC, new RecoveryRunner(recoveryId));\n }\n \n- private void doRecovery(final StartRecoveryRequest request, final RecoveryStatus recoveryStatus, final RecoveryListener listener) {\n- assert request.sourceNode() != null : \"can't do a recovery without a source node\";\n- final InternalIndexShard shard = recoveryStatus.indexShard;\n- if (shard == null) {\n- listener.onIgnoreRecovery(false, \"shard missing locally, stop recovery\");\n- return;\n- }\n- if (shard.state() == IndexShardState.CLOSED) {\n- listener.onIgnoreRecovery(false, \"local shard closed, stop recovery\");\n- return;\n- }\n- if (recoveryStatus.isCanceled()) {\n- // don't remove it, the cancellation code will remove it...\n- listener.onIgnoreRecovery(false, \"canceled recovery\");\n+ private void doRecovery(final RecoveryStatus recoveryStatus) {\n+ assert recoveryStatus.sourceNode() != null : \"can't do a recovery without a source node\";\n+\n+ logger.trace(\"collecting local files for {}\", recoveryStatus);\n+ final Map<String, StoreFileMetaData> existingFiles;\n+ try {\n+ existingFiles = recoveryStatus.store().getMetadata().asMap();\n+ } catch (Exception e) {\n+ logger.debug(\"error while listing local files, recovery as if there are none\", e);\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(),\n+ new RecoveryFailedException(recoveryStatus.state(), \"failed to list local files\", e), true);\n return;\n }\n+ StartRecoveryRequest request = new StartRecoveryRequest(recoveryStatus.shardId(), recoveryStatus.sourceNode(), clusterService.localNode(),\n+ false, existingFiles, recoveryStatus.state().getType(), recoveryStatus.recoveryId());\n \n- recoveryStatus.recoveryThread = Thread.currentThread();\n- if (shard.store().tryIncRef()) {\n- try {\n- logger.trace(\"[{}][{}] starting 
recovery from {}\", request.shardId().index().name(), request.shardId().id(), request.sourceNode());\n-\n- StopWatch stopWatch = new StopWatch().start();\n- RecoveryResponse recoveryResponse = transportService.submitRequest(request.sourceNode(), RecoverySource.Actions.START_RECOVERY, request, new FutureTransportResponseHandler<RecoveryResponse>() {\n- @Override\n- public RecoveryResponse newInstance() {\n- return new RecoveryResponse();\n- }\n- }).txGet();\n- if (shard.state() == IndexShardState.CLOSED) {\n- removeAndCleanOnGoingRecovery(recoveryStatus);\n- listener.onIgnoreRecovery(false, \"local shard closed, stop recovery\");\n- return;\n- }\n- stopWatch.stop();\n- if (logger.isTraceEnabled()) {\n- StringBuilder sb = new StringBuilder();\n- sb.append('[').append(request.shardId().index().name()).append(']').append('[').append(request.shardId().id()).append(\"] \");\n- sb.append(\"recovery completed from \").append(request.sourceNode()).append(\", took[\").append(stopWatch.totalTime()).append(\"]\\n\");\n- sb.append(\" phase1: recovered_files [\").append(recoveryResponse.phase1FileNames.size()).append(\"]\").append(\" with total_size of [\").append(new ByteSizeValue(recoveryResponse.phase1TotalSize)).append(\"]\")\n- .append(\", took [\").append(timeValueMillis(recoveryResponse.phase1Time)).append(\"], throttling_wait [\").append(timeValueMillis(recoveryResponse.phase1ThrottlingWaitTime)).append(']')\n- .append(\"\\n\");\n- sb.append(\" : reusing_files [\").append(recoveryResponse.phase1ExistingFileNames.size()).append(\"] with total_size of [\").append(new ByteSizeValue(recoveryResponse.phase1ExistingTotalSize)).append(\"]\\n\");\n- sb.append(\" phase2: start took [\").append(timeValueMillis(recoveryResponse.startTime)).append(\"]\\n\");\n- sb.append(\" : recovered [\").append(recoveryResponse.phase2Operations).append(\"]\").append(\" transaction log operations\")\n- .append(\", took [\").append(timeValueMillis(recoveryResponse.phase2Time)).append(\"]\")\n- .append(\"\\n\");\n- sb.append(\" phase3: recovered [\").append(recoveryResponse.phase3Operations).append(\"]\").append(\" transaction log operations\")\n- .append(\", took [\").append(timeValueMillis(recoveryResponse.phase3Time)).append(\"]\");\n- logger.trace(sb.toString());\n- } else if (logger.isDebugEnabled()) {\n- logger.debug(\"{} recovery completed from [{}], took [{}]\", request.shardId(), request.sourceNode(), stopWatch.totalTime());\n- }\n- removeAndCleanOnGoingRecovery(recoveryStatus);\n- listener.onRecoveryDone();\n- } catch (Throwable e) {\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"[{}][{}] Got exception on recovery\", e, request.shardId().index().name(), request.shardId().id());\n- }\n- if (recoveryStatus.isCanceled()) {\n- // don't remove it, the cancellation code will remove it...\n- listener.onIgnoreRecovery(false, \"canceled recovery\");\n- return;\n- }\n- if (shard.state() == IndexShardState.CLOSED) {\n- removeAndCleanOnGoingRecovery(recoveryStatus);\n- listener.onIgnoreRecovery(false, \"local shard closed, stop recovery\");\n- return;\n- }\n- Throwable cause = ExceptionsHelper.unwrapCause(e);\n- if (cause instanceof RecoveryEngineException) {\n- // unwrap an exception that was thrown as part of the recovery\n- cause = cause.getCause();\n- }\n- // do it twice, in case we have double transport exception\n- cause = ExceptionsHelper.unwrapCause(cause);\n- if (cause instanceof RecoveryEngineException) {\n- // unwrap an exception that was thrown as part of the recovery\n- cause = cause.getCause();\n- }\n-\n- 
// here, we would add checks against exception that need to be retried (and not removeAndClean in this case)\n-\n- if (cause instanceof IndexShardNotStartedException || cause instanceof IndexMissingException || cause instanceof IndexShardMissingException) {\n- // if the target is not ready yet, retry\n- listener.onRetryRecovery(TimeValue.timeValueMillis(500), recoveryStatus);\n- return;\n- }\n-\n- if (cause instanceof DelayRecoveryException) {\n- listener.onRetryRecovery(TimeValue.timeValueMillis(500), recoveryStatus);\n- return;\n- }\n-\n- // here, we check against ignore recovery options\n-\n- // in general, no need to clean the shard on ignored recovery, since we want to try and reuse it later\n- // it will get deleted in the IndicesStore if all are allocated and no shard exists on this node...\n+ try {\n+ logger.trace(\"[{}][{}] starting recovery from {}\", request.shardId().index().name(), request.shardId().id(), request.sourceNode());\n \n- removeAndCleanOnGoingRecovery(recoveryStatus);\n+ StopWatch stopWatch = new StopWatch().start();\n+ recoveryStatus.setWaitingRecoveryThread(Thread.currentThread());\n \n- if (cause instanceof ConnectTransportException) {\n- listener.onIgnoreRecovery(true, \"source node disconnected (\" + request.sourceNode() + \")\");\n- return;\n- }\n-\n- if (cause instanceof IndexShardClosedException) {\n- listener.onIgnoreRecovery(true, \"source shard is closed (\" + request.sourceNode() + \")\");\n- return;\n+ RecoveryResponse recoveryResponse = transportService.submitRequest(request.sourceNode(), RecoverySource.Actions.START_RECOVERY, request, new FutureTransportResponseHandler<RecoveryResponse>() {\n+ @Override\n+ public RecoveryResponse newInstance() {\n+ return new RecoveryResponse();\n }\n+ }).txGet();\n+ recoveryStatus.clearWaitingRecoveryThread(Thread.currentThread());\n+ stopWatch.stop();\n+ if (logger.isTraceEnabled()) {\n+ StringBuilder sb = new StringBuilder();\n+ sb.append('[').append(request.shardId().index().name()).append(']').append('[').append(request.shardId().id()).append(\"] \");\n+ sb.append(\"recovery completed from \").append(request.sourceNode()).append(\", took[\").append(stopWatch.totalTime()).append(\"]\\n\");\n+ sb.append(\" phase1: recovered_files [\").append(recoveryResponse.phase1FileNames.size()).append(\"]\").append(\" with total_size of [\").append(new ByteSizeValue(recoveryResponse.phase1TotalSize)).append(\"]\")\n+ .append(\", took [\").append(timeValueMillis(recoveryResponse.phase1Time)).append(\"], throttling_wait [\").append(timeValueMillis(recoveryResponse.phase1ThrottlingWaitTime)).append(']')\n+ .append(\"\\n\");\n+ sb.append(\" : reusing_files [\").append(recoveryResponse.phase1ExistingFileNames.size()).append(\"] with total_size of [\").append(new ByteSizeValue(recoveryResponse.phase1ExistingTotalSize)).append(\"]\\n\");\n+ sb.append(\" phase2: start took [\").append(timeValueMillis(recoveryResponse.startTime)).append(\"]\\n\");\n+ sb.append(\" : recovered [\").append(recoveryResponse.phase2Operations).append(\"]\").append(\" transaction log operations\")\n+ .append(\", took [\").append(timeValueMillis(recoveryResponse.phase2Time)).append(\"]\")\n+ .append(\"\\n\");\n+ sb.append(\" phase3: recovered [\").append(recoveryResponse.phase3Operations).append(\"]\").append(\" transaction log operations\")\n+ .append(\", took [\").append(timeValueMillis(recoveryResponse.phase3Time)).append(\"]\");\n+ logger.trace(sb.toString());\n+ } else if (logger.isDebugEnabled()) {\n+ logger.debug(\"{} recovery completed from [{}], took 
[{}]\", request.shardId(), request.sourceNode(), stopWatch.totalTime());\n+ }\n+ // do this through ongoing recoveries to remove it from the collection\n+ onGoingRecoveries.markRecoveryAsDone(recoveryStatus.recoveryId());\n+ } catch (Throwable e) {\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"[{}][{}] Got exception on recovery\", e, request.shardId().index().name(), request.shardId().id());\n+ }\n+ Throwable cause = ExceptionsHelper.unwrapCause(e);\n+ if (cause instanceof RecoveryEngineException) {\n+ // unwrap an exception that was thrown as part of the recovery\n+ cause = cause.getCause();\n+ }\n+ // do it twice, in case we have double transport exception\n+ cause = ExceptionsHelper.unwrapCause(cause);\n+ if (cause instanceof RecoveryEngineException) {\n+ // unwrap an exception that was thrown as part of the recovery\n+ cause = cause.getCause();\n+ }\n \n- if (cause instanceof AlreadyClosedException) {\n- listener.onIgnoreRecovery(true, \"source shard is closed (\" + request.sourceNode() + \")\");\n- return;\n- }\n+ // here, we would add checks against exception that need to be retried (and not removeAndClean in this case)\n \n- logger.warn(\"[{}][{}] recovery from [{}] failed\", e, request.shardId().index().name(), request.shardId().id(), request.sourceNode());\n- listener.onRecoveryFailure(new RecoveryFailedException(request, e), true);\n- } finally {\n- shard.store().decRef();\n+ if (cause instanceof IndexShardNotStartedException || cause instanceof IndexMissingException || cause instanceof IndexShardMissingException) {\n+ // if the target is not ready yet, retry\n+ retryRecovery(recoveryStatus.recoveryId(), TimeValue.timeValueMillis(500));\n+ return;\n }\n- } else {\n- listener.onIgnoreRecovery(false, \"local store closed, stop recovery\");\n- }\n- }\n-\n- public static interface RecoveryListener {\n- void onRecoveryDone();\n \n- void onRetryRecovery(TimeValue retryAfter, RecoveryStatus status);\n+ if (cause instanceof DelayRecoveryException) {\n+ retryRecovery(recoveryStatus.recoveryId(), TimeValue.timeValueMillis(500));\n+ return;\n+ }\n \n- void onIgnoreRecovery(boolean removeShard, String reason);\n+ if (cause instanceof ConnectTransportException) {\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, \"source node disconnected\", cause), false);\n+ return;\n+ }\n \n- void onRecoveryFailure(RecoveryFailedException e, boolean sendShardFailure);\n- }\n+ if (cause instanceof IndexShardClosedException) {\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, \"source shard is closed\", cause), false);\n+ return;\n+ }\n \n- @Nullable\n- private RecoveryStatus findRecoveryByShard(IndexShard indexShard) {\n- for (RecoveryStatus recoveryStatus : onGoingRecoveries.values()) {\n- if (recoveryStatus.indexShard == indexShard) {\n- return recoveryStatus;\n+ if (cause instanceof AlreadyClosedException) {\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, \"source shard is closed\", cause), false);\n+ return;\n }\n+\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, e), true);\n }\n- return null;\n }\n \n- private void removeAndCleanOnGoingRecovery(@Nullable RecoveryStatus status) {\n- if (status == null) {\n- return;\n- }\n- // clean it from the on going recoveries since it is being closed\n- status = onGoingRecoveries.remove(status.recoveryId);\n- if (status == null) {\n- return;\n- }\n- // just 
mark it as canceled as well, just in case there are in flight requests\n- // coming from the recovery target\n- status.cancel();\n- // clean open index outputs\n- Set<Entry<String, IndexOutput>> entrySet = status.cancelAndClearOpenIndexInputs();\n- Iterator<Entry<String, IndexOutput>> iterator = entrySet.iterator();\n- while (iterator.hasNext()) {\n- Map.Entry<String, IndexOutput> entry = iterator.next();\n- synchronized (entry.getValue()) {\n- IOUtils.closeWhileHandlingException(entry.getValue());\n- }\n- iterator.remove();\n+ public static interface RecoveryListener {\n+ void onRecoveryDone(RecoveryState state);\n \n- }\n- status.legacyChecksums.clear();\n+ void onRecoveryFailure(RecoveryState state, RecoveryFailedException e, boolean sendShardFailure);\n }\n \n class PrepareForTranslogOperationsRequestHandler extends BaseTransportRequestHandler<RecoveryPrepareForTranslogOperationsRequest> {\n@@ -374,12 +267,12 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryPrepareForTranslogOperationsRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- onGoingRecovery.indexShard.performRecoveryPrepareForTranslog();\n- onGoingRecovery.stage(RecoveryState.Stage.TRANSLOG);\n- onGoingRecovery.recoveryState.getStart().checkIndexTime(onGoingRecovery.indexShard.checkIndexTook());\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ recoveryStatus.indexShard().performRecoveryPrepareForTranslog();\n+ recoveryStatus.stage(RecoveryState.Stage.TRANSLOG);\n+ recoveryStatus.state().getStart().checkIndexTime(recoveryStatus.indexShard().checkIndexTook());\n+ }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n }\n }\n@@ -398,13 +291,12 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryFinalizeRecoveryRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- onGoingRecovery.stage(RecoveryState.Stage.FINALIZE);\n- onGoingRecovery.indexShard.performRecoveryFinalization(false, onGoingRecovery);\n- onGoingRecovery.recoveryState().getTimer().time(System.currentTimeMillis() - onGoingRecovery.recoveryState().getTimer().startTime());\n- onGoingRecovery.stage(RecoveryState.Stage.DONE);\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ recoveryStatus.indexShard().performRecoveryFinalization(false, recoveryStatus.state());\n+ recoveryStatus.state().getTimer().time(System.currentTimeMillis() - recoveryStatus.state().getTimer().startTime());\n+ recoveryStatus.stage(RecoveryState.Stage.DONE);\n+ }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n }\n }\n@@ -424,16 +316,15 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryTranslogOperationsRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- InternalIndexShard shard = (InternalIndexShard) 
indicesService.indexServiceSafe(request.shardId().index().name()).shardSafe(request.shardId().id());\n- for (Translog.Operation operation : request.operations()) {\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n- shard.performRecoveryOperation(operation);\n- onGoingRecovery.recoveryState.getTranslog().incrementTranslogOperations();\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ for (Translog.Operation operation : request.operations()) {\n+ recoveryStatus.indexShard().performRecoveryOperation(operation);\n+ recoveryStatus.state().getTranslog().incrementTranslogOperations();\n+ }\n }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n+\n }\n }\n \n@@ -451,18 +342,19 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryFilesInfoRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n- final RecoveryState.Index index = onGoingRecovery.recoveryState().getIndex();\n- index.addFileDetails(request.phase1FileNames, request.phase1FileSizes);\n- index.addReusedFileDetails(request.phase1ExistingFileNames, request.phase1ExistingFileSizes);\n- index.totalByteCount(request.phase1TotalSize);\n- index.totalFileCount(request.phase1FileNames.size() + request.phase1ExistingFileNames.size());\n- index.reusedByteCount(request.phase1ExistingTotalSize);\n- index.reusedFileCount(request.phase1ExistingFileNames.size());\n- // recoveryBytesCount / recoveryFileCount will be set as we go...\n- onGoingRecovery.stage(RecoveryState.Stage.INDEX);\n- channel.sendResponse(TransportResponse.Empty.INSTANCE);\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ final RecoveryState.Index index = recoveryStatus.state().getIndex();\n+ index.addFileDetails(request.phase1FileNames, request.phase1FileSizes);\n+ index.addReusedFileDetails(request.phase1ExistingFileNames, request.phase1ExistingFileSizes);\n+ index.totalByteCount(request.phase1TotalSize);\n+ index.totalFileCount(request.phase1FileNames.size() + request.phase1ExistingFileNames.size());\n+ index.reusedByteCount(request.phase1ExistingTotalSize);\n+ index.reusedFileCount(request.phase1ExistingFileNames.size());\n+ // recoveryBytesCount / recoveryFileCount will be set as we go...\n+ recoveryStatus.stage(RecoveryState.Stage.INDEX);\n+ channel.sendResponse(TransportResponse.Empty.INSTANCE);\n+ }\n }\n }\n \n@@ -480,40 +372,15 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryCleanFilesRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- final Store store = onGoingRecovery.indexShard.store();\n- store.incRef();\n- try {\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n // first, we go and move files that were created with the recovery id suffix to\n // the actual names, its ok if we have a corrupted index here, since we have replicas\n // to recover from in case of a full 
cluster shutdown just when this code executes...\n- String prefix = \"recovery.\" + onGoingRecovery.recoveryState().getTimer().startTime() + \".\";\n- Set<String> filesToRename = Sets.newHashSet();\n- for (String existingFile : store.directory().listAll()) {\n- if (existingFile.startsWith(prefix)) {\n- filesToRename.add(existingFile.substring(prefix.length(), existingFile.length()));\n- }\n- }\n- Exception failureToRename = null;\n- if (!filesToRename.isEmpty()) {\n- // first, go and delete the existing ones\n- final Directory directory = store.directory();\n- for (String file : filesToRename) {\n- try {\n- directory.deleteFile(file);\n- } catch (Throwable ex) {\n- logger.debug(\"failed to delete file [{}]\", ex, file);\n- }\n- }\n- for (String fileToRename : filesToRename) {\n- // now, rename the files... and fail it it won't work\n- store.renameFile(prefix + fileToRename, fileToRename);\n- }\n- }\n+ recoveryStatus.renameAllTempFiles();\n+ final Store store = recoveryStatus.store();\n // now write checksums\n- onGoingRecovery.legacyChecksums.write(store);\n+ recoveryStatus.legacyChecksums().write(store);\n \n for (String existingFile : store.directory().listAll()) {\n // don't delete snapshot file, or the checksums file (note, this is extra protection since the Store won't delete checksum)\n@@ -526,8 +393,6 @@ public void messageReceived(RecoveryCleanFilesRequest request, TransportChannel\n }\n }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n- } finally {\n- store.decRef();\n }\n }\n }\n@@ -546,103 +411,85 @@ public String executor() {\n \n @Override\n public void messageReceived(final RecoveryFileChunkRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- final Store store = onGoingRecovery.indexShard.store();\n- store.incRef();\n- try {\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ final Store store = recoveryStatus.store();\n IndexOutput indexOutput;\n if (request.position() == 0) {\n- // first request\n- onGoingRecovery.legacyChecksums.remove(request.name());\n- indexOutput = onGoingRecovery.removeOpenIndexOutputs(request.name());\n- IOUtils.closeWhileHandlingException(indexOutput);\n- // we create an output with no checksum, this is because the pure binary data of the file is not\n- // the checksum (because of seek). 
We will create the checksum file once copying is done\n-\n- // also, we check if the file already exists, if it does, we create a file name based\n- // on the current recovery \"id\" and later we make the switch, the reason for that is that\n- // we only want to overwrite the index files once we copied all over, and not create a\n- // case where the index is half moved\n-\n- String fileName = request.name();\n- if (store.directory().fileExists(fileName)) {\n- fileName = \"recovery.\" + onGoingRecovery.recoveryState().getTimer().startTime() + \".\" + fileName;\n- }\n- indexOutput = onGoingRecovery.openAndPutIndexOutput(request.name(), fileName, request.metadata(), store);\n+ indexOutput = recoveryStatus.openAndPutIndexOutput(request.name(), request.metadata(), store);\n } else {\n- indexOutput = onGoingRecovery.getOpenIndexOutput(request.name());\n+ indexOutput = recoveryStatus.getOpenIndexOutput(request.name());\n+ }\n+ if (recoverySettings.rateLimiter() != null) {\n+ recoverySettings.rateLimiter().pause(request.content().length());\n }\n- if (indexOutput == null) {\n- // shard is getting closed on us\n- throw new IndexShardClosedException(request.shardId());\n+ BytesReference content = request.content();\n+ if (!content.hasArray()) {\n+ content = content.toBytesArray();\n }\n- boolean success = false;\n- synchronized (indexOutput) {\n+ indexOutput.writeBytes(content.array(), content.arrayOffset(), content.length());\n+ recoveryStatus.state().getIndex().addRecoveredByteCount(content.length());\n+ RecoveryState.File file = recoveryStatus.state().getIndex().file(request.name());\n+ if (file != null) {\n+ file.updateRecovered(request.length());\n+ }\n+ if (indexOutput.getFilePointer() >= request.length() || request.lastChunk()) {\n try {\n- if (recoverySettings.rateLimiter() != null) {\n- recoverySettings.rateLimiter().pause(request.content().length());\n- }\n- BytesReference content = request.content();\n- if (!content.hasArray()) {\n- content = content.toBytesArray();\n- }\n- indexOutput.writeBytes(content.array(), content.arrayOffset(), content.length());\n- onGoingRecovery.recoveryState.getIndex().addRecoveredByteCount(content.length());\n- RecoveryState.File file = onGoingRecovery.recoveryState.getIndex().file(request.name());\n- if (file != null) {\n- file.updateRecovered(request.length());\n- }\n- if (indexOutput.getFilePointer() >= request.length() || request.lastChunk()) {\n- Store.verify(indexOutput);\n- // we are done\n- indexOutput.close();\n- // write the checksum\n- onGoingRecovery.legacyChecksums.add(request.metadata());\n- store.directory().sync(Collections.singleton(request.name()));\n- IndexOutput remove = onGoingRecovery.removeOpenIndexOutputs(request.name());\n- onGoingRecovery.recoveryState.getIndex().addRecoveredFileCount(1);\n- assert remove == null || remove == indexOutput; // remove maybe null if we got canceled\n- }\n- success = true;\n+ Store.verify(indexOutput);\n } finally {\n- if (!success || onGoingRecovery.isCanceled()) {\n- try {\n- IndexOutput remove = onGoingRecovery.removeOpenIndexOutputs(request.name());\n- assert remove == null || remove == indexOutput;\n- IOUtils.closeWhileHandlingException(indexOutput);\n- } finally {\n- // trash the file - unsuccessful\n- store.deleteQuiet(request.name(), \"recovery.\" + onGoingRecovery.recoveryState().getTimer().startTime() + \".\" + request.name());\n- }\n- }\n+ // we are done\n+ indexOutput.close();\n }\n+ // write the checksum\n+ recoveryStatus.legacyChecksums().add(request.metadata());\n+ 
store.directory().sync(Collections.singleton(request.name()));\n+ IndexOutput remove = recoveryStatus.removeOpenIndexOutputs(request.name());\n+ recoveryStatus.state().getIndex().addRecoveredFileCount(1);\n+ assert remove == null || remove == indexOutput; // remove maybe null if we got finished\n }\n- if (onGoingRecovery.isCanceled()) {\n- onGoingRecovery.sentCanceledToSource = true;\n- throw new IndexShardClosedException(request.shardId());\n- }\n- channel.sendResponse(TransportResponse.Empty.INSTANCE);\n- } finally {\n- store.decRef();\n }\n+ channel.sendResponse(TransportResponse.Empty.INSTANCE);\n }\n }\n \n- private void validateRecoveryStatus(RecoveryStatus onGoingRecovery, ShardId shardId) {\n- if (onGoingRecovery == null) {\n- // shard is getting closed on us\n- throw new IndexShardClosedException(shardId);\n+ class RecoveryRunner extends AbstractRunnable {\n+\n+ final long recoveryId;\n+\n+ RecoveryRunner(long recoveryId) {\n+ this.recoveryId = recoveryId;\n }\n- if (onGoingRecovery.indexShard.state() == IndexShardState.CLOSED) {\n- removeAndCleanOnGoingRecovery(onGoingRecovery);\n- onGoingRecovery.sentCanceledToSource = true;\n- throw new IndexShardClosedException(shardId);\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatus(recoveryId)) {\n+ if (statusRef == null) {\n+ logger.error(\"unexpected error during recovery [{}], failing shard\", t, recoveryId);\n+ onGoingRecoveries.failRecovery(recoveryId,\n+ new RecoveryFailedException(statusRef.status().state(), \"unexpected error\", t),\n+ true // be safe\n+ );\n+ } else {\n+ logger.debug(\"unexpected error during recovery, but recovery id [{}] is finished\", t, recoveryId);\n+ }\n+ }\n }\n- if (onGoingRecovery.isCanceled()) {\n- onGoingRecovery.sentCanceledToSource = true;\n- throw new IndexShardClosedException(shardId);\n+\n+ @Override\n+ public void doRun() {\n+ RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatus(recoveryId);\n+ if (statusRef == null) {\n+ logger.trace(\"not running recovery with id [{}] - can't find it (probably finished)\", recoveryId);\n+ return;\n+ }\n+ try {\n+ doRecovery(statusRef.status());\n+ } finally {\n+ // make sure we never interrupt the thread after we have released it back to the pool\n+ statusRef.status().clearWaitingRecoveryThread(Thread.currentThread());\n+ statusRef.close();\n+ }\n }\n }\n+\n }", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java", "status": "modified" }, { "diff": "@@ -23,7 +23,9 @@\n import com.carrotsearch.hppc.procedures.IntProcedure;\n import com.google.common.base.Predicate;\n import com.google.common.util.concurrent.ListenableFuture;\n+import org.apache.lucene.index.IndexFileNames;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;\n@@ -33,45 +35,70 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.client.Client;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import 
org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n+import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.discovery.DiscoveryService;\n+import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.indices.IndicesLifecycle;\n+import org.elasticsearch.indices.recovery.RecoveryFileChunkRequest;\n import org.elasticsearch.indices.recovery.RecoverySettings;\n+import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.SearchHits;\n import org.elasticsearch.test.BackgroundIndexer;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n+import org.elasticsearch.test.transport.MockTransportService;\n+import org.elasticsearch.transport.*;\n import org.junit.Test;\n \n+import java.io.File;\n+import java.io.IOException;\n+import java.nio.file.FileVisitResult;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.nio.file.SimpleFileVisitor;\n+import java.nio.file.attribute.BasicFileAttributes;\n import java.util.ArrayList;\n import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.Semaphore;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.is;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.*;\n \n /**\n */\n @ClusterScope(scope = Scope.TEST, numDataNodes = 0)\n+@TestLogging(\"indices.recovery:TRACE\")\n public class RelocationTests extends ElasticsearchIntegrationTest {\n private final TimeValue ACCEPTABLE_RELOCATION_TIME = new TimeValue(5, TimeUnit.MINUTES);\n \n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return ImmutableSettings.builder()\n+ .put(TransportModule.TRANSPORT_SERVICE_TYPE_KEY, MockTransportService.class.getName()).build();\n+ }\n+\n \n @Test\n public void testSimpleRelocationNoIndexing() {\n@@ -417,4 +444,114 @@ public boolean apply(Object input) {\n assertTrue(stateResponse.getState().readOnlyRoutingNodes().node(blueNodeId).isEmpty());\n }\n \n+ @Test\n+ @Slow\n+ @TestLogging(\"indices.recovery:TRACE\")\n+ public void testCancellationCleansTempFiles() throws Exception {\n+ final String indexName = \"test\";\n+\n+ final String p_node = internalCluster().startNode();\n+\n+ 
client().admin().indices().prepareCreate(indexName)\n+ .setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)).get();\n+\n+ internalCluster().startNodesAsync(2).get();\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ int numDocs = scaledRandomIntBetween(25, 250);\n+ for (int i = 0; i < numDocs; i++) {\n+ requests.add(client().prepareIndex(indexName, \"type\").setCreate(true).setSource(\"{}\"));\n+ }\n+ indexRandom(true, requests);\n+ assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes(\"3\").setWaitForGreenStatus().get().isTimedOut());\n+ flush();\n+\n+ int allowedFailures = randomIntBetween(3, 10);\n+ logger.info(\"--> blocking recoveries from primary (allowed failures: [{}])\", allowedFailures);\n+ CountDownLatch corruptionCount = new CountDownLatch(allowedFailures);\n+ ClusterService clusterService = internalCluster().getInstance(ClusterService.class, p_node);\n+ MockTransportService mockTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, p_node);\n+ for (DiscoveryNode node : clusterService.state().nodes()) {\n+ if (!node.equals(clusterService.localNode())) {\n+ mockTransportService.addDelegate(node, new RecoveryCorruption(mockTransportService.original(), corruptionCount));\n+ }\n+ }\n+\n+ client().admin().indices().prepareUpdateSettings(indexName).setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)).get();\n+\n+ corruptionCount.await();\n+\n+ logger.info(\"--> stopping replica assignment\");\n+ assertAcked(client().admin().cluster().prepareUpdateSettings()\n+ .setTransientSettings(ImmutableSettings.builder().put(EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE, \"none\")));\n+\n+ logger.info(\"--> wait for all replica shards to be removed, on all nodes\");\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ for (String node : internalCluster().getNodeNames()) {\n+ if (node.equals(p_node)) {\n+ continue;\n+ }\n+ ClusterState state = client(node).admin().cluster().prepareState().setLocal(true).get().getState();\n+ assertThat(node + \" indicates assigned replicas\",\n+ state.getRoutingTable().index(indexName).shardsWithState(ShardRoutingState.UNASSIGNED).size(), equalTo(1));\n+ }\n+ }\n+ });\n+\n+ logger.info(\"--> verifying no temporary recoveries are left\");\n+ for (String node : internalCluster().getNodeNames()) {\n+ NodeEnvironment nodeEnvironment = internalCluster().getInstance(NodeEnvironment.class, node);\n+ for (final File shardLoc : nodeEnvironment.shardLocations(new ShardId(indexName, 0))) {\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ try {\n+ Files.walkFileTree(shardLoc.toPath(), new SimpleFileVisitor<Path>() {\n+ @Override\n+ public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {\n+ assertThat(\"found a temporary recovery file: \" + file, file.getFileName().toString(), not(startsWith(\"recovery.\")));\n+ return FileVisitResult.CONTINUE;\n+ }\n+ });\n+ } catch (IOException e) {\n+ throw new ElasticsearchException(\"failed to walk tree\", e);\n+ }\n+ }\n+ });\n+ }\n+ }\n+ }\n+\n+ class RecoveryCorruption extends MockTransportService.DelegateTransport {\n+\n+ private final CountDownLatch corruptionCount;\n+\n+ public RecoveryCorruption(Transport transport, CountDownLatch corruptionCount) {\n+ super(transport);\n+ this.corruptionCount = corruptionCount;\n+ }\n+\n+ @Override\n+ public void 
sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {\n+// if (action.equals(RecoveryTarget.Actions.PREPARE_TRANSLOG)) {\n+// logger.debug(\"dropped [{}] to {}\", action, node);\n+ //} else\n+ if (action.equals(RecoveryTarget.Actions.FILE_CHUNK)) {\n+ RecoveryFileChunkRequest chunkRequest = (RecoveryFileChunkRequest) request;\n+ if (chunkRequest.name().startsWith(IndexFileNames.SEGMENTS)) {\n+ // corrupting the segments_N files in order to make sure future recovery re-send files\n+ logger.debug(\"corrupting [{}] to {}. file name: [{}]\", action, node, chunkRequest.name());\n+ byte[] array = chunkRequest.content().array();\n+ array[0] = (byte) ~array[0]; // flip one byte in the content\n+ corruptionCount.countDown();\n+ }\n+ transport.sendRequest(node, requestId, action, request, options);\n+ } else {\n+ transport.sendRequest(node, requestId, action, request, options);\n+ }\n+ }\n+ }\n+\n }", "filename": "src/test/java/org/elasticsearch/recovery/RelocationTests.java", "status": "modified" } ] }
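The central pattern in the RecoveryTarget diff above is that request handlers no longer reach into shared recovery state directly: they look it up through a try-with-resources handle (`RecoveriesCollection.StatusRef`), so temporary files and the store can only be released once no in-flight request still holds a reference. The sketch below is a stripped-down illustration of that shape under invented names (`RecoveryRegistry`, `Status`, `markFinished`); it is not the actual Elasticsearch code, just the reference-counting idea the diff relies on.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

/** Hypothetical, simplified registry of ongoing recoveries, keyed by recovery id. */
final class RecoveryRegistry {

    /** Per-recovery state; cleanup runs only after every in-flight handler has released its ref. */
    static final class Status {
        final long id;
        private final AtomicInteger refCount = new AtomicInteger(1); // 1 = "registered"
        private final AtomicBoolean finished = new AtomicBoolean(false);

        Status(long id) { this.id = id; }

        boolean tryIncRef() {
            while (true) {
                int current = refCount.get();
                if (current <= 0) return false;          // already decommissioned
                if (refCount.compareAndSet(current, current + 1)) return true;
            }
        }

        void decRef() {
            if (refCount.decrementAndGet() == 0) {
                cleanup();                               // e.g. delete temp files, release the store
            }
        }

        /** Drops the "registered" reference exactly once; cleanup may still be deferred. */
        void markFinished() {
            if (finished.compareAndSet(false, true)) decRef();
        }

        private void cleanup() { System.out.println("recovery [" + id + "] resources released"); }
    }

    /** AutoCloseable handle so request handlers can use try-with-resources. */
    static final class StatusRef implements AutoCloseable {
        private final Status status;
        private StatusRef(Status status) { this.status = status; }
        Status status() { return status; }
        @Override public void close() { status.decRef(); }
    }

    private final ConcurrentMap<Long, Status> onGoing = new ConcurrentHashMap<>();

    long start(long id) {
        onGoing.put(id, new Status(id));
        return id;
    }

    /** Returns null if the recovery is unknown or already shutting down. */
    StatusRef getStatus(long id) {
        Status status = onGoing.get(id);
        return (status != null && status.tryIncRef()) ? new StatusRef(status) : null;
    }

    /** Removes the recovery; actual cleanup happens when the last handler finishes. */
    void finish(long id) {
        Status status = onGoing.remove(id);
        if (status != null) status.markFinished();
    }

    public static void main(String[] args) {
        RecoveryRegistry registry = new RecoveryRegistry();
        long id = registry.start(1L);
        try (StatusRef ref = registry.getStatus(id)) {
            if (ref != null) {
                System.out.println("handler working on recovery [" + ref.status().id + "]");
            }
        }
        registry.finish(id); // last reference dropped here -> cleanup runs
    }
}
```

A handler treating a `null` handle as "recovery was cancelled or finished" and bailing out mirrors how `getStatus` is checked in `RecoveryRunner.doRun()` above; the `getStatusSafe` variant in the diff presumably throws instead of returning null.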
{ "body": "We have a situation where several indices need replicas either relocated or rebuilt (we're not sure exactly which of the two caused this situation, but I think it was the initial replica build, rather than relocation).\n\nIn one situation, the nodes which we were trying to send the shards to went over their high disk threshold, and the recovery was aborted.\nIn another, we tickled the recently found bug on recovery and compression.\n\nIn both cases (afaict), the shard directory on disk was littered with files named `recovery.*`. Sometimes terabytes of files.\nEven when the replica build cancelled, moved on to another host, etc, those files aren't being cleaned up.\n", "comments": [ { "body": "I also run into the same problem.\nDoes any one have the solution?\nCan I just delete the recovery.\\* files?\n", "created_at": "2014-09-13T08:40:21Z" }, { "body": "@suitingtseng which version of ES are you using?\n\n@avleen sorry for the late response. Is this issue still a problem? Which version are you currently on?\n\nI'm wondering if this is another manifestation of https://github.com/elasticsearch/elasticsearch/issues/7386#issuecomment-53110529\n", "created_at": "2014-09-14T20:55:56Z" }, { "body": "I am using 1.3.1.\nThanks for your reply.\n", "created_at": "2014-09-15T02:13:15Z" }, { "body": "I found this happening on 1.3.1 also, with the compressed recovery bug. I\nhaven't had recovery failures since then so I have no more data to gauge\nthis with :(\n\nOn Sun, Sep 14, 2014 at 10:13 PM, suitingtseng notifications@github.com\nwrote:\n\n> I am using 1.3.1.\n> Thanks for your reply.\n> \n> ## \n> \n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/7315#issuecomment-55547654\n> .\n", "created_at": "2014-09-15T04:18:08Z" }, { "body": "@suitingtseng thx. Did you do a full cluster restart since being on 1.1.0 as #7386 (comment) ? Are there any other error in your logs? \n\nIf you cluster is all green now, you can safely delete the recovery.\\* files, though we should figure what they don't go on their own...\n", "created_at": "2014-09-15T11:15:13Z" }, { "body": "Just ran into what I believe is this situation as well. After 24 hours, some indices are moved to different nodes. On 2 of the \"slow\" nodes, 2 shards got into a state where they are growing uncontrollably (should bea round 30GB but are 1.3 TB). Inside the shard directory it's littered with recovery.\\* files. Running ES 1.3.0 and new recovery files keep being created. The log on the \"slow\" node is complaining about \"File corruption occured on recovery but checksums are ok\"\n\nTo expand a bit further, it seems they are caught in an endless recovery cycle and just creating new recovery files over and over unable to repair\n", "created_at": "2014-09-22T15:53:14Z" }, { "body": "Any one solved this issue or is there any walk around?\nSimple question: Can I just manual rm those recovery file? They are eating up most of my disk spaces.\nI'm having tons of recovery files to and I'm upgrading from 1.1 to 1.4.2.\n", "created_at": "2015-01-23T23:36:15Z" }, { "body": "I manually rm'd them here, and it was OK.\n\nOn Fri Jan 23 2015 at 6:36:59 PM yangou notifications@github.com wrote:\n\n> Any one solved this issue or is there any walk around?\n> Simple question: Can I just manual rm those recovery file? 
They are eating\n> up most of my disk spaces.\n> I'm having tons of recovery files to and I'm upgrading from 1.1 to 1.4.2.\n> \n> ## \n> \n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/7315#issuecomment-71285291\n> .\n", "created_at": "2015-01-24T06:45:30Z" }, { "body": "@yangou the issue is solved but the fix will be released with 1.5.0 (no ETA yet). You can safely remove the recovery.\\* files of all old recoveries. To check what are the currently active recoveries you can call `GET _cat/recovery?active_only=true`\n", "created_at": "2015-01-24T08:50:42Z" } ], "number": 7315, "title": "Recovery files left behind when replica building fails" }
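The last two comments boil down to a manual workaround: once the cluster is green and `GET _cat/recovery?active_only=true` shows no recovery touching the shard, the leftover `recovery.*` files can be removed by hand. A throwaway utility along the lines of the sketch below could do the same scan; it is only an illustration (the directory argument and `--delete` flag are invented for the example), not an official tool, and it should never be pointed at a shard that is still recovering.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Lists (and optionally deletes) leftover "recovery.*" temp files in a shard's index directory.
 * Only use this after confirming the shard has no active recovery, e.g. via
 * GET _cat/recovery?active_only=true as mentioned in the comments above.
 */
public class StaleRecoveryFileCleaner {

    public static void main(String[] args) throws IOException {
        if (args.length == 0) {
            System.err.println("usage: StaleRecoveryFileCleaner <shard-index-dir> [--delete]");
            return;
        }
        Path shardIndexDir = Paths.get(args[0]); // the shard's on-disk "index" directory
        boolean delete = args.length > 1 && "--delete".equals(args[1]);

        long reclaimed = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(shardIndexDir, "recovery.*")) {
            for (Path file : stream) {
                long size = Files.size(file);
                System.out.printf("%s (%d bytes)%n", file.getFileName(), size);
                if (delete) {
                    Files.deleteIfExists(file);
                    reclaimed += size;
                }
            }
        }
        System.out.printf("reclaimed %d bytes%n", reclaimed);
    }
}
```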
{ "body": "The PR rewrites the state controls in the RecoveryTarget family classes to make it easier to guarantee that:\n- recovery resources are only cleared once there are no ongoing requests\n- recovery is automatically canceled when the target shard is closed/removed\n- canceled recoveries do not leave temp files behind when canceled. \n\nHighlights of the change:\n1) All temporary files are cleared upon failure/cancel (see #7315 )\n2) All newly created files are always temporary \n3) Doesn't list local files on the cluster state update thread (which throw unwanted exception)\n4) Recoveries are canceled by a listener to IndicesLifecycle.beforeIndexShardClosed, so we don't need to explicitly call it.\n5) Simplifies RecoveryListener to only notify when a recovery is done or failed. Removed subtleties like ignore and retry (they are dealt with internally)\n\nRelates to #7893\n", "number": 8092, "review_comments": [ { "body": "can we get a javadoc string what this class does?\n", "created_at": "2014-10-16T08:24:08Z" }, { "body": "I assume the `onIgnoreRecovery` is unused?\n", "created_at": "2014-10-16T08:24:26Z" }, { "body": "can be make this a `putIfAbsent` and assert it's not there?\n", "created_at": "2014-10-16T08:25:20Z" }, { "body": "extra newline here?\n", "created_at": "2014-10-16T08:28:50Z" }, { "body": "this file header looks wrong\n", "created_at": "2014-10-16T08:29:07Z" }, { "body": "should this go into the ctor?\n", "created_at": "2014-10-16T08:29:28Z" }, { "body": "unrelated but I think `openIndexOutputs` can be final and no need to be volatile?\n", "created_at": "2014-10-16T08:43:34Z" }, { "body": "make finished `private final`?\n", "created_at": "2014-10-16T08:43:49Z" }, { "body": "can we return and assert that the CAS op was successful?\n", "created_at": "2014-10-16T08:45:43Z" }, { "body": "you also need to catch `NoSuchFileException` it's OS dependent\n", "created_at": "2014-10-16T08:47:40Z" }, { "body": "just for kicks I think we should inc the refcount on the store here before we access it\n", "created_at": "2014-10-16T08:48:20Z" }, { "body": "what happens if this rename doesn't work here?\n", "created_at": "2014-10-16T08:49:44Z" }, { "body": "make this final?\n", "created_at": "2014-10-16T08:51:32Z" }, { "body": "decRef should happen in a finally \n", "created_at": "2014-10-16T08:51:58Z" }, { "body": "make this a hard exception?\n", "created_at": "2014-10-16T08:52:29Z" }, { "body": "prevent double closing here? maybe you can reuse the `finished.compareAndSet(false, true)` pattern?\n", "created_at": "2014-10-16T08:53:35Z" }, { "body": "I wonder if we can somehow factor this refcoutning logic out into a util class. 
something like\n\n``` Java\n\npublic class RefCounted {\n\npublic final void decRef() {\n//...\n}\n\npublic final boolean tryIncRef() {\n//...\n}\n\npublic final void incRef() {\n//...\n}\n\npublic inteface CloseListener {\n public void close(); // called when we reach 0\n}\n}\n```\n\nI think we can then also just use this in `Store.java`?\n", "created_at": "2014-10-16T08:57:45Z" }, { "body": "I think you can just do:\n\n``` Java\nIOUtils.closeWhileHandlingException(openIndexOutputs.values());\nopenIndexOutputs.clear();\n```\n", "created_at": "2014-10-16T09:02:32Z" }, { "body": "any reason why we don't do this inside the try/finally?\n", "created_at": "2014-10-16T09:03:24Z" }, { "body": "nevermind it gets updated\n", "created_at": "2014-10-16T09:06:36Z" }, { "body": "I think since you change the finally part you should do this like:\n\n``` Java\ntry {\n Store.verify(indexOutput);\n} finally {\n indexOutput.close();\n}\n```\n\njust to make sure we are closing the stream asap\n", "created_at": "2014-10-16T09:09:57Z" }, { "body": "Yes. it is now removed from the listener interface.\n", "created_at": "2014-10-16T11:17:39Z" }, { "body": "will do\n", "created_at": "2014-10-16T11:17:49Z" }, { "body": "Argh IntelliJ. will fix.\n", "created_at": "2014-10-16T11:18:23Z" }, { "body": "Yes, it now can (given the new access patterns). Good point.\n", "created_at": "2014-10-16T11:18:57Z" }, { "body": "+1. will do.\n", "created_at": "2014-10-16T11:19:14Z" }, { "body": "I can definitely return the value. I'm a bit conflicted regarding the assert as strictly speaking we can't guarantee it will work due to the retry logic when may set the thread before the clear command of the previous thread run. In practice it shouldn't be a problem because it only kicks in after 500ms. But still, I'm not sure it adds value to assert here?\n", "created_at": "2014-10-16T11:26:52Z" }, { "body": "KK. I only copied the old code. Will change.\n", "created_at": "2014-10-16T11:27:23Z" }, { "body": "Maybe better is to except if the ref count of the local object is <0 (which guarantees the store is kept alive)? Semantically you should only call methods on this object when having a ref count. \n", "created_at": "2014-10-16T11:29:12Z" }, { "body": "Then we should fail the shard imho. I copied the old code. I'll double check that this is what happens.\n", "created_at": "2014-10-16T11:30:36Z" } ], "title": "Refactor RecoveryTarget state management" }
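One review thread above suggests pulling the incRef/decRef/tryIncRef logic into a reusable base class that `Store` and the recovery status could share. The partial snippet in that comment could be fleshed out roughly as follows; this is only a sketch of the suggestion (it swaps the proposed `CloseListener` interface for an abstract `closeInternal()` hook), not the class that eventually shipped.

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of a reusable ref-counting base class: subclasses release their resources in
 * closeInternal() once the count reaches zero. Illustrative only.
 */
public abstract class RefCounted {

    private final AtomicInteger refCount = new AtomicInteger(1); // the owner holds the initial ref

    public final void incRef() {
        if (!tryIncRef()) {
            throw new IllegalStateException("tried to incRef an already closed object");
        }
    }

    public final boolean tryIncRef() {
        while (true) {
            int current = refCount.get();
            if (current <= 0) return false;                       // already closed
            if (refCount.compareAndSet(current, current + 1)) return true;
        }
    }

    public final void decRef() {
        int updated = refCount.decrementAndGet();
        assert updated >= 0 : "ref count below zero - mismatched incRef/decRef calls";
        if (updated == 0) {
            closeInternal();                                      // called exactly once
        }
    }

    /** Invoked when the last reference is released. */
    protected abstract void closeInternal();

    public static void main(String[] args) {
        RefCounted resource = new RefCounted() {
            @Override protected void closeInternal() { System.out.println("resources released"); }
        };
        resource.incRef(); // a handler takes a reference
        resource.decRef(); // the handler is done
        resource.decRef(); // the owner drops the initial reference -> closeInternal() runs
    }
}
```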
{ "commits": [ { "message": "Recovery: clean up temporary files when canceling recovery\n\nAt the moment, we leave around temporary files if a peer (replica) recovery is canceled. Those files will normally be cleaned up once the shard is started else but in case of errors this can lead to trouble. If recovery are started and canceled often, we may cause nodes to run out of disk space.\n\nCloses #7893" }, { "message": "temp file names registry - not there yet." }, { "message": "wip" }, { "message": "Some more cleanup and java docs" }, { "message": "Beter encapsulate temporary files" }, { "message": "Fix compilation after rebasing to 1.x" }, { "message": "testCancellationCleansTempFiles: use assertBusy to verify all files were cleaned\n\nThese are now background processes.." }, { "message": "Feedback round" }, { "message": "moved package line" }, { "message": "one more private final" }, { "message": "Fail recovery on every error while listing local files." } ], "files": [ { "diff": "@@ -40,7 +40,6 @@\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.recovery.RecoveryState;\n-import org.elasticsearch.indices.recovery.RecoveryStatus;\n import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n@@ -144,19 +143,15 @@ protected ShardRecoveryResponse shardOperation(ShardRecoveryRequest request) thr\n \n InternalIndexService indexService = (InternalIndexService) indicesService.indexServiceSafe(request.shardId().getIndex());\n InternalIndexShard indexShard = (InternalIndexShard) indexService.shardSafe(request.shardId().id());\n- ShardRouting shardRouting = indexShard.routingEntry();\n ShardRecoveryResponse shardRecoveryResponse = new ShardRecoveryResponse(request.shardId());\n \n- RecoveryState state;\n- RecoveryStatus recoveryStatus = indexShard.recoveryStatus();\n+ RecoveryState state = indexShard.recoveryState();\n \n- if (recoveryStatus == null) {\n- recoveryStatus = recoveryTarget.recoveryStatus(indexShard);\n+ if (state == null) {\n+ state = recoveryTarget.recoveryState(indexShard);\n }\n \n- if (recoveryStatus != null) {\n- state = recoveryStatus.recoveryState();\n- } else {\n+ if (state == null) {\n IndexShardGatewayService gatewayService =\n indexService.shardInjector(request.shardId().id()).getInstance(IndexShardGatewayService.class);\n state = gatewayService.recoveryState();\n@@ -183,7 +178,8 @@ protected ClusterBlockException checkRequestBlock(ClusterState state, RecoveryRe\n \n static class ShardRecoveryRequest extends BroadcastShardOperationRequest {\n \n- ShardRecoveryRequest() { }\n+ ShardRecoveryRequest() {\n+ }\n \n ShardRecoveryRequest(ShardId shardId, RecoveryRequest request) {\n super(shardId, request);", "filename": "src/main/java/org/elasticsearch/action/admin/indices/recovery/TransportRecoveryAction.java", "status": "modified" }, { "diff": "@@ -180,13 +180,13 @@ protected ShardStatus shardOperation(IndexShardStatusRequest request) throws Ela\n \n if (request.recovery) {\n // check on going recovery (from peer or gateway)\n- RecoveryStatus peerRecoveryStatus = indexShard.recoveryStatus();\n- if (peerRecoveryStatus == null) {\n- peerRecoveryStatus = peerRecoveryTarget.recoveryStatus(indexShard);\n+ RecoveryState peerRecoveryState = indexShard.recoveryState();\n+ if (peerRecoveryState == null) {\n+ peerRecoveryState = peerRecoveryTarget.recoveryState(indexShard);\n }\n- if 
(peerRecoveryStatus != null) {\n+ if (peerRecoveryState != null) {\n PeerRecoveryStatus.Stage stage;\n- switch (peerRecoveryStatus.stage()) {\n+ switch (peerRecoveryState.getStage()) {\n case INIT:\n stage = PeerRecoveryStatus.Stage.INIT;\n break;\n@@ -205,11 +205,11 @@ protected ShardStatus shardOperation(IndexShardStatusRequest request) throws Ela\n default:\n stage = PeerRecoveryStatus.Stage.INIT;\n }\n- shardStatus.peerRecoveryStatus = new PeerRecoveryStatus(stage, peerRecoveryStatus.recoveryState().getTimer().startTime(),\n- peerRecoveryStatus.recoveryState().getTimer().time(),\n- peerRecoveryStatus.recoveryState().getIndex().totalByteCount(),\n- peerRecoveryStatus.recoveryState().getIndex().reusedByteCount(),\n- peerRecoveryStatus.recoveryState().getIndex().recoveredByteCount(), peerRecoveryStatus.recoveryState().getTranslog().currentTranslogOperations());\n+ shardStatus.peerRecoveryStatus = new PeerRecoveryStatus(stage, peerRecoveryState.getTimer().startTime(),\n+ peerRecoveryState.getTimer().time(),\n+ peerRecoveryState.getIndex().totalByteCount(),\n+ peerRecoveryState.getIndex().reusedByteCount(),\n+ peerRecoveryState.getIndex().recoveredByteCount(), peerRecoveryState.getTranslog().currentTranslogOperations());\n }\n \n IndexShardGatewayService gatewayService = indexService.shardInjector(request.shardId().id()).getInstance(IndexShardGatewayService.class);", "filename": "src/main/java/org/elasticsearch/action/admin/indices/status/TransportIndicesStatusAction.java", "status": "modified" }, { "diff": "@@ -62,6 +62,7 @@ public IndexShardGatewayService(ShardId shardId, @IndexSettings Settings indexSe\n this.shardGateway = shardGateway;\n this.snapshotService = snapshotService;\n this.recoveryState = new RecoveryState(shardId);\n+ this.recoveryState.setType(RecoveryState.Type.GATEWAY);\n this.clusterService = clusterService;\n }\n ", "filename": "src/main/java/org/elasticsearch/index/gateway/IndexShardGatewayService.java", "status": "modified" }, { "diff": "@@ -89,7 +89,7 @@\n import org.elasticsearch.index.warmer.WarmerStats;\n import org.elasticsearch.indices.IndicesLifecycle;\n import org.elasticsearch.indices.InternalIndicesLifecycle;\n-import org.elasticsearch.indices.recovery.RecoveryStatus;\n+import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.search.suggest.completion.Completion090PostingsFormat;\n import org.elasticsearch.search.suggest.completion.CompletionStats;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -146,7 +146,8 @@ public class InternalIndexShard extends AbstractIndexShardComponent implements I\n private volatile ScheduledFuture mergeScheduleFuture;\n private volatile ShardRouting shardRouting;\n \n- private RecoveryStatus recoveryStatus;\n+ @Nullable\n+ private RecoveryState recoveryState;\n \n private ApplyRefreshSettings applyRefreshSettings = new ApplyRefreshSettings();\n \n@@ -733,15 +734,15 @@ public void performRecoveryPrepareForTranslog() throws ElasticsearchException {\n }\n \n /**\n- * The peer recovery status if this shard recovered from a peer shard.\n+ * The peer recovery state if this shard recovered from a peer shard, null o.w.\n */\n- public RecoveryStatus recoveryStatus() {\n- return this.recoveryStatus;\n+ public RecoveryState recoveryState() {\n+ return this.recoveryState;\n }\n \n- public void performRecoveryFinalization(boolean withFlush, RecoveryStatus recoveryStatus) throws ElasticsearchException {\n+ public void performRecoveryFinalization(boolean withFlush, RecoveryState recoveryState) throws 
ElasticsearchException {\n performRecoveryFinalization(withFlush);\n- this.recoveryStatus = recoveryStatus;\n+ this.recoveryState = recoveryState;\n }\n \n public void performRecoveryFinalization(boolean withFlush) throws ElasticsearchException {", "filename": "src/main/java/org/elasticsearch/index/shard/service/InternalIndexShard.java", "status": "modified" }, { "diff": "@@ -61,9 +61,10 @@\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n-import org.elasticsearch.index.store.Store;\n import org.elasticsearch.indices.IndicesService;\n-import org.elasticsearch.indices.recovery.*;\n+import org.elasticsearch.indices.recovery.RecoveryFailedException;\n+import org.elasticsearch.indices.recovery.RecoveryState;\n+import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.util.HashMap;\n@@ -559,19 +560,18 @@ private void applyNewOrUpdatedShards(final ClusterChangedEvent event) throws Ela\n boolean shardHasBeenRemoved = false;\n if (currentRoutingEntry.initializing() && shardRouting.initializing() && !currentRoutingEntry.equals(shardRouting)) {\n logger.debug(\"[{}][{}] removing shard (different instance of it allocated on this node, current [{}], global [{}])\", shardRouting.index(), shardRouting.id(), currentRoutingEntry, shardRouting);\n- // cancel recovery just in case we are in recovery (its fine if we are not in recovery, it will be a noop).\n- recoveryTarget.cancelRecovery(indexShard);\n+ // closing the shard will also cancel any ongoing recovery.\n indexService.removeShard(shardRouting.id(), \"removing shard (different instance of it allocated on this node)\");\n shardHasBeenRemoved = true;\n } else if (isPeerRecovery(shardRouting)) {\n // check if there is an existing recovery going, and if so, and the source node is not the same, cancel the recovery to restart it\n- RecoveryStatus recoveryStatus = recoveryTarget.recoveryStatus(indexShard);\n- if (recoveryStatus != null && recoveryStatus.stage() != RecoveryState.Stage.DONE) {\n+ RecoveryState recoveryState = recoveryTarget.recoveryState(indexShard);\n+ if (recoveryState != null && recoveryState.getStage() != RecoveryState.Stage.DONE) {\n // we have an ongoing recovery, find the source based on current routing and compare them\n DiscoveryNode sourceNode = findSourceNodeForPeerRecovery(routingTable, nodes, shardRouting);\n- if (!recoveryStatus.sourceNode().equals(sourceNode)) {\n+ if (!recoveryState.getSourceNode().equals(sourceNode)) {\n logger.debug(\"[{}][{}] removing shard (recovery source changed), current [{}], global [{}])\", shardRouting.index(), shardRouting.id(), currentRoutingEntry, shardRouting);\n- recoveryTarget.cancelRecovery(indexShard);\n+ // closing the shard will also cancel any ongoing recovery.\n indexService.removeShard(shardRouting.id(), \"removing shard (recovery source node changed)\");\n shardHasBeenRemoved = true;\n }\n@@ -728,17 +728,7 @@ private void applyInitializingShard(final RoutingTable routingTable, final Disco\n // the edge case where its mark as relocated, and we might need to roll it back...\n // For replicas: we are recovering a backup from a primary\n RecoveryState.Type type = shardRouting.primary() ? 
RecoveryState.Type.RELOCATION : RecoveryState.Type.REPLICA;\n- final Store store = indexShard.store();\n- final StartRecoveryRequest request;\n- store.incRef();\n- try {\n- store.failIfCorrupted();\n- request = new StartRecoveryRequest(indexShard.shardId(), sourceNode, nodes.localNode(),\n- false, store.getMetadata().asMap(), type, recoveryIdGenerator.incrementAndGet());\n- } finally {\n- store.decRef();\n- }\n- recoveryTarget.startRecovery(request, indexShard, new PeerRecoveryListener(request, shardRouting, indexService, indexMetaData));\n+ recoveryTarget.startRecovery(indexShard, type, sourceNode, new PeerRecoveryListener(shardRouting, indexService, indexMetaData));\n \n } catch (Throwable e) {\n indexShard.engine().failEngine(\"corrupted preexisting index\", e);\n@@ -808,68 +798,41 @@ private boolean isPeerRecovery(ShardRouting shardRouting) {\n \n private class PeerRecoveryListener implements RecoveryTarget.RecoveryListener {\n \n- private final StartRecoveryRequest request;\n private final ShardRouting shardRouting;\n private final IndexService indexService;\n private final IndexMetaData indexMetaData;\n \n- private PeerRecoveryListener(StartRecoveryRequest request, ShardRouting shardRouting, IndexService indexService, IndexMetaData indexMetaData) {\n- this.request = request;\n+ private PeerRecoveryListener(ShardRouting shardRouting, IndexService indexService, IndexMetaData indexMetaData) {\n this.shardRouting = shardRouting;\n this.indexService = indexService;\n this.indexMetaData = indexMetaData;\n }\n \n @Override\n- public void onRecoveryDone() {\n- shardStateAction.shardStarted(shardRouting, indexMetaData.getUUID(), \"after recovery (replica) from node [\" + request.sourceNode() + \"]\");\n- }\n-\n- @Override\n- public void onRetryRecovery(TimeValue retryAfter, RecoveryStatus recoveryStatus) {\n- recoveryTarget.retryRecovery(request, retryAfter, recoveryStatus, PeerRecoveryListener.this);\n- }\n-\n- @Override\n- public void onIgnoreRecovery(boolean removeShard, String reason) {\n- if (!removeShard) {\n- return;\n- }\n- synchronized (mutex) {\n- if (indexService.hasShard(shardRouting.shardId().id())) {\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"[{}][{}] removing shard on ignored recovery, reason [{}]\", shardRouting.index(), shardRouting.shardId().id(), reason);\n- }\n- try {\n- indexService.removeShard(shardRouting.shardId().id(), \"ignore recovery: \" + reason);\n- } catch (IndexShardMissingException e) {\n- // the node got closed on us, ignore it\n- } catch (Throwable e1) {\n- logger.warn(\"[{}][{}] failed to delete shard after ignore recovery\", e1, indexService.index().name(), shardRouting.shardId().id());\n- }\n- }\n- }\n+ public void onRecoveryDone(RecoveryState state) {\n+ shardStateAction.shardStarted(shardRouting, indexMetaData.getUUID(), \"after recovery (replica) from node [\" + state.getSourceNode() + \"]\");\n }\n \n @Override\n- public void onRecoveryFailure(RecoveryFailedException e, boolean sendShardFailure) {\n+ public void onRecoveryFailure(RecoveryState state, RecoveryFailedException e, boolean sendShardFailure) {\n handleRecoveryFailure(indexService, indexMetaData, shardRouting, sendShardFailure, e);\n }\n }\n \n private void handleRecoveryFailure(IndexService indexService, IndexMetaData indexMetaData, ShardRouting shardRouting, boolean sendShardFailure, Throwable failure) {\n- logger.warn(\"[{}][{}] failed to start shard\", failure, indexService.index().name(), shardRouting.shardId().id());\n synchronized (mutex) {\n if 
(indexService.hasShard(shardRouting.shardId().id())) {\n try {\n+ logger.debug(\"[{}][{}] removing shard on failed recovery [{}]\", shardRouting.index(), shardRouting.shardId().id(), failure.getMessage());\n indexService.removeShard(shardRouting.shardId().id(), \"recovery failure [\" + ExceptionsHelper.detailedMessage(failure) + \"]\");\n } catch (IndexShardMissingException e) {\n // the node got closed on us, ignore it\n } catch (Throwable e1) {\n- logger.warn(\"[{}][{}] failed to delete shard after failed startup\", e1, indexService.index().name(), shardRouting.shardId().id());\n+ logger.warn(\"[{}][{}] failed to delete shard after recovery failure\", e1, indexService.index().name(), shardRouting.shardId().id());\n }\n }\n if (sendShardFailure) {\n+ logger.warn(\"[{}][{}] sending failed shard after recovery failure\", failure, indexService.index().name(), shardRouting.shardId().id());\n try {\n failedShards.put(shardRouting.shardId(), new FailedShard(shardRouting.version()));\n shardStateAction.shardFailed(shardRouting, indexMetaData.getUUID(), \"Failed to start shard, message [\" + detailedMessage(failure) + \"]\");", "filename": "src/main/java/org/elasticsearch/indices/cluster/IndicesClusterStateService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,184 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.indices.recovery;\n+\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n+import org.elasticsearch.index.shard.IndexShardClosedException;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.index.shard.service.IndexShard;\n+import org.elasticsearch.index.shard.service.InternalIndexShard;\n+import org.elasticsearch.index.store.Store;\n+\n+import java.io.IOException;\n+import java.sql.Timestamp;\n+import java.util.Map;\n+import java.util.concurrent.ConcurrentMap;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n+/**\n+ * This class holds a collection of all on going recoveries on the current node (i.e., the node is the target node\n+ * of those recoveries). The class is used to guarantee concurrent semantics such that once a recoveries was done/cancelled/failed\n+ * no other thread will be able to find it. Last, the {@link StatusRef} inner class verifies that recovery temporary files\n+ * and store will only be cleared once on going usage is finished.\n+ */\n+public class RecoveriesCollection {\n+\n+ /** This is the single source of truth for ongoing recoveries. 
If it's not here, it was canceled or done */\n+ private final ConcurrentMap<Long, RecoveryStatus> onGoingRecoveries = ConcurrentCollections.newConcurrentMap();\n+\n+ final private ESLogger logger;\n+\n+ public RecoveriesCollection(ESLogger logger) {\n+ this.logger = logger;\n+ }\n+\n+ /**\n+ * Starts are new recovery for the given shard, source node and state\n+ *\n+ * @return the id of the new recovery.\n+ */\n+ public long startRecovery(InternalIndexShard indexShard, DiscoveryNode sourceNode, RecoveryState state, RecoveryTarget.RecoveryListener listener) {\n+ RecoveryStatus status = new RecoveryStatus(indexShard, sourceNode, state, listener);\n+ RecoveryStatus existingStatus = onGoingRecoveries.putIfAbsent(status.recoveryId(), status);\n+ assert existingStatus == null : \"found two RecoveryStatus instances with the same id\";\n+ logger.trace(\"{} started recovery from {}, id [{}]\", indexShard.shardId(), sourceNode, status.recoveryId());\n+ return status.recoveryId();\n+ }\n+\n+ /**\n+ * gets the {@link RecoveryStatus } for a given id. The RecoveryStatus returned has it's ref count already incremented\n+ * to make sure it's safe to use. However, you must call {@link RecoveryStatus#decRef()} when you are done with it, typically\n+ * by using this method in a try-with-resources clause.\n+ * <p/>\n+ * Returns null if recovery is not found\n+ */\n+ public StatusRef getStatus(long id) {\n+ RecoveryStatus status = onGoingRecoveries.get(id);\n+ if (status != null && status.tryIncRef()) {\n+ return new StatusRef(status);\n+ }\n+ return null;\n+ }\n+\n+ /** Similar to {@link #getStatus(long)} but throws an exception if no recovery is found */\n+ public StatusRef getStatusSafe(long id, ShardId shardId) {\n+ StatusRef statusRef = getStatus(id);\n+ if (statusRef == null) {\n+ throw new IndexShardClosedException(shardId);\n+ }\n+ assert statusRef.status().shardId().equals(shardId);\n+ return statusRef;\n+ }\n+\n+ /** cancel the recovery with the given id (if found) and remove it from the recovery collection */\n+ public void cancelRecovery(long id, String reason) {\n+ RecoveryStatus removed = onGoingRecoveries.remove(id);\n+ if (removed != null) {\n+ logger.trace(\"{} canceled recovery from {}, id [{}] (reason [{}])\",\n+ removed.shardId(), removed.sourceNode(), removed.recoveryId(), reason);\n+ removed.cancel(reason);\n+ }\n+ }\n+\n+ /**\n+ * fail the recovery with the given id (if found) and remove it from the recovery collection\n+ *\n+ * @param id id of the recovery to fail\n+ * @param e exception with reason for the failure\n+ * @param sendShardFailure true a shard failed message should be sent to the master\n+ */\n+ public void failRecovery(long id, RecoveryFailedException e, boolean sendShardFailure) {\n+ RecoveryStatus removed = onGoingRecoveries.remove(id);\n+ if (removed != null) {\n+ logger.trace(\"{} failing recovery from {}, id [{}]. Send shard failure: [{}]\", removed.shardId(), removed.sourceNode(), removed.recoveryId(), sendShardFailure);\n+ removed.fail(e, sendShardFailure);\n+ }\n+ }\n+\n+ /** mark the recovery with the given id as done (if found) */\n+ public void markRecoveryAsDone(long id) {\n+ RecoveryStatus removed = onGoingRecoveries.remove(id);\n+ if (removed != null) {\n+ logger.trace(\"{} marking recovery from {} as done, id [{}]\", removed.shardId(), removed.sourceNode(), removed.recoveryId());\n+ removed.markAsDone();\n+ }\n+ }\n+\n+ /**\n+ * Try to find an ongoing recovery for a given shard. 
returns null if not found.\n+ */\n+ @Nullable\n+ public StatusRef findRecoveryByShard(IndexShard indexShard) {\n+ for (RecoveryStatus recoveryStatus : onGoingRecoveries.values()) {\n+ if (recoveryStatus.indexShard() == indexShard) {\n+ if (recoveryStatus.tryIncRef()) {\n+ return new StatusRef(recoveryStatus);\n+ } else {\n+ return null;\n+ }\n+ }\n+ }\n+ return null;\n+ }\n+\n+\n+ /** cancel all ongoing recoveries for the given shard. typically because the shards is closed */\n+ public void cancelRecoveriesForShard(ShardId shardId, String reason) {\n+ for (RecoveryStatus status : onGoingRecoveries.values()) {\n+ if (status.shardId().equals(shardId)) {\n+ cancelRecovery(status.recoveryId(), reason);\n+ }\n+ }\n+ }\n+\n+ /**\n+ * a reference to {@link RecoveryStatus}, which implements {@link AutoCloseable}. closing the reference\n+ * causes {@link RecoveryStatus#decRef()} to be called. This makes sure that the underlying resources\n+ * will not be freed until {@link RecoveriesCollection.StatusRef#close()} is called.\n+ */\n+ public static class StatusRef implements AutoCloseable {\n+\n+ private final RecoveryStatus status;\n+ private final AtomicBoolean closed = new AtomicBoolean(false);\n+\n+ /**\n+ * Important: {@link org.elasticsearch.indices.recovery.RecoveryStatus#tryIncRef()} should\n+ * be *successfully* called on status before\n+ */\n+ public StatusRef(RecoveryStatus status) {\n+ this.status = status;\n+ }\n+\n+ @Override\n+ public void close() {\n+ if (closed.compareAndSet(false, true)) {\n+ status.decRef();\n+ }\n+ }\n+\n+ public RecoveryStatus status() {\n+ return status;\n+ }\n+ }\n+}\n+", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveriesCollection.java", "status": "added" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.index.shard.ShardId;\n \n /**\n@@ -29,10 +30,22 @@\n public class RecoveryFailedException extends ElasticsearchException {\n \n public RecoveryFailedException(StartRecoveryRequest request, Throwable cause) {\n- this(request.shardId(), request.sourceNode(), request.targetNode(), cause);\n+ this(request, null, cause);\n+ }\n+\n+ public RecoveryFailedException(StartRecoveryRequest request, @Nullable String extraInfo, Throwable cause) {\n+ this(request.shardId(), request.sourceNode(), request.targetNode(), extraInfo, cause);\n+ }\n+\n+ public RecoveryFailedException(RecoveryState state, @Nullable String extraInfo, Throwable cause) {\n+ this(state.getShardId(), state.getSourceNode(), state.getTargetNode(), extraInfo, cause);\n }\n \n public RecoveryFailedException(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, Throwable cause) {\n- super(shardId + \": Recovery failed from \" + sourceNode + \" into \" + targetNode, cause);\n+ this(shardId, sourceNode, targetNode, null, cause);\n+ }\n+\n+ public RecoveryFailedException(ShardId shardId, DiscoveryNode sourceNode, DiscoveryNode targetNode, @Nullable String extraInfo, Throwable cause) {\n+ super(shardId + \": Recovery failed from \" + sourceNode + \" into \" + targetNode + (extraInfo == null ? 
\"\" : \" (\" + extraInfo + \")\"), cause);\n }\n }", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryFailedException.java", "status": "modified" }, { "diff": "@@ -19,106 +19,310 @@\n \n package org.elasticsearch.indices.recovery;\n \n+import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.IOContext;\n import org.apache.lucene.store.IndexOutput;\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n import org.elasticsearch.index.store.Store;\n import org.elasticsearch.index.store.StoreFileMetaData;\n \n+import java.io.FileNotFoundException;\n import java.io.IOException;\n+import java.nio.file.NoSuchFileException;\n+import java.util.Iterator;\n+import java.util.Map;\n import java.util.Map.Entry;\n import java.util.Set;\n import java.util.concurrent.ConcurrentMap;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicLong;\n+import java.util.concurrent.atomic.AtomicReference;\n \n /**\n *\n */\n+\n+\n public class RecoveryStatus {\n \n- final ShardId shardId;\n- final long recoveryId;\n- final InternalIndexShard indexShard;\n- final RecoveryState recoveryState;\n- final DiscoveryNode sourceNode;\n+ private final ESLogger logger;\n+\n+ private final static AtomicLong idGenerator = new AtomicLong();\n+\n+ private final String RECOVERY_PREFIX = \"recovery.\";\n+\n+ private final ShardId shardId;\n+ private final long recoveryId;\n+ private final InternalIndexShard indexShard;\n+ private final RecoveryState state;\n+ private final DiscoveryNode sourceNode;\n+ private final String tempFilePrefix;\n+ private final Store store;\n+ private final RecoveryTarget.RecoveryListener listener;\n+\n+ private AtomicReference<Thread> waitingRecoveryThread = new AtomicReference<>();\n+\n+ private final AtomicBoolean finished = new AtomicBoolean();\n \n- public RecoveryStatus(long recoveryId, InternalIndexShard indexShard, DiscoveryNode sourceNode) {\n- this.recoveryId = recoveryId;\n+ // we start with 1 which will be decremented on cancel/close/failure\n+ private final AtomicInteger refCount = new AtomicInteger(1);\n+\n+ private final ConcurrentMap<String, IndexOutput> openIndexOutputs = ConcurrentCollections.newConcurrentMap();\n+ private final Store.LegacyChecksums legacyChecksums = new Store.LegacyChecksums();\n+\n+ public RecoveryStatus(InternalIndexShard indexShard, DiscoveryNode sourceNode, RecoveryState state, RecoveryTarget.RecoveryListener listener) {\n+ this.recoveryId = idGenerator.incrementAndGet();\n+ this.listener = listener;\n+ this.logger = Loggers.getLogger(getClass(), indexShard.indexSettings(), indexShard.shardId());\n this.indexShard = indexShard;\n this.sourceNode = sourceNode;\n this.shardId = indexShard.shardId();\n- this.recoveryState = new RecoveryState(shardId);\n- recoveryState.getTimer().startTime(System.currentTimeMillis());\n+ this.state = state;\n+ this.state.getTimer().startTime(System.currentTimeMillis());\n+ this.tempFilePrefix = RECOVERY_PREFIX + this.state.getTimer().startTime() + \".\";\n+ this.store = indexShard.store();\n+ // make sure the store is not released 
until we are done.\n+ store.incRef();\n }\n \n- volatile Thread recoveryThread;\n- private volatile boolean canceled;\n- volatile boolean sentCanceledToSource;\n+ private final Set<String> tempFileNames = ConcurrentCollections.newConcurrentSet();\n+\n+ public long recoveryId() {\n+ return recoveryId;\n+ }\n \n- private volatile ConcurrentMap<String, IndexOutput> openIndexOutputs = ConcurrentCollections.newConcurrentMap();\n- public final Store.LegacyChecksums legacyChecksums = new Store.LegacyChecksums();\n+ public ShardId shardId() {\n+ return shardId;\n+ }\n+\n+ public InternalIndexShard indexShard() {\n+ ensureNotFinished();\n+ return indexShard;\n+ }\n \n public DiscoveryNode sourceNode() {\n return this.sourceNode;\n }\n \n- public RecoveryState recoveryState() {\n- return recoveryState;\n+ public RecoveryState state() {\n+ return state;\n+ }\n+\n+ public Store store() {\n+ ensureNotFinished();\n+ return store;\n+ }\n+\n+ /** set a thread that should be interrupted if the recovery is canceled */\n+ public void setWaitingRecoveryThread(Thread thread) {\n+ waitingRecoveryThread.set(thread);\n+ }\n+\n+ /**\n+ * clear the thread set by {@link #setWaitingRecoveryThread(Thread)}, making sure we\n+ * do not override another thread.\n+ */\n+ public void clearWaitingRecoveryThread(Thread threadToClear) {\n+ waitingRecoveryThread.compareAndSet(threadToClear, null);\n }\n \n public void stage(RecoveryState.Stage stage) {\n- recoveryState.setStage(stage);\n+ state.setStage(stage);\n }\n \n public RecoveryState.Stage stage() {\n- return recoveryState.getStage();\n+ return state.getStage();\n }\n \n- public boolean isCanceled() {\n- return canceled;\n+ public Store.LegacyChecksums legacyChecksums() {\n+ return legacyChecksums;\n }\n- \n- public synchronized void cancel() {\n- canceled = true;\n+\n+ /** renames all temporary files to their true name, potentially overriding existing files */\n+ public void renameAllTempFiles() throws IOException {\n+ ensureNotFinished();\n+ Iterator<String> tempFileIterator = tempFileNames.iterator();\n+ final Directory directory = store.directory();\n+ while (tempFileIterator.hasNext()) {\n+ String tempFile = tempFileIterator.next();\n+ String origFile = originalNameForTempFile(tempFile);\n+ // first, go and delete the existing ones\n+ try {\n+ directory.deleteFile(origFile);\n+ } catch (NoSuchFileException e) {\n+\n+ } catch (Throwable ex) {\n+ logger.debug(\"failed to delete file [{}]\", ex, origFile);\n+ }\n+ // now, rename the files... and fail it it won't work\n+ store.renameFile(tempFile, origFile);\n+ // upon success, remove the temp file\n+ tempFileIterator.remove();\n+ }\n }\n- \n- public IndexOutput getOpenIndexOutput(String key) {\n- final ConcurrentMap<String, IndexOutput> outputs = openIndexOutputs;\n- if (canceled || outputs == null) {\n- return null;\n+\n+ /** cancel the recovery. calling this method will clean temporary files and release the store\n+ * unless this object is in use (in which case it will be cleaned once all ongoing users call\n+ * {@link #decRef()}\n+ *\n+ * if {@link #setWaitingRecoveryThread(Thread)} was used, the thread will be interrupted.\n+ */\n+ public void cancel(String reason) {\n+ if (finished.compareAndSet(false, true)) {\n+ logger.debug(\"recovery canceled (reason: [{}])\", reason);\n+ // release the initial reference. 
recovery files will be cleaned as soon as ref count goes to zero, potentially now\n+ decRef();\n+\n+ final Thread thread = waitingRecoveryThread.get();\n+ if (thread != null) {\n+ thread.interrupt();\n+ }\n }\n- return outputs.get(key);\n }\n \n- public synchronized Set<Entry<String, IndexOutput>> cancelAndClearOpenIndexInputs() {\n- cancel();\n- final ConcurrentMap<String, IndexOutput> outputs = openIndexOutputs;\n- openIndexOutputs = null;\n- if (outputs == null) {\n- return null;\n+ /**\n+ * fail the recovery and call listener\n+ *\n+ * @param e exception that encapsulating the failure\n+ * @param sendShardFailure indicates whether to notify the master of the shard failure\n+ **/\n+ public void fail(RecoveryFailedException e, boolean sendShardFailure) {\n+ if (finished.compareAndSet(false, true)) {\n+ try {\n+ listener.onRecoveryFailure(state, e, sendShardFailure);\n+ } finally {\n+ // release the initial reference. recovery files will be cleaned as soon as ref count goes to zero, potentially now\n+ decRef();\n+ }\n }\n- Set<Entry<String, IndexOutput>> entrySet = outputs.entrySet();\n- return entrySet;\n }\n- \n \n- public IndexOutput removeOpenIndexOutputs(String name) {\n- final ConcurrentMap<String, IndexOutput> outputs = openIndexOutputs;\n- if (outputs == null) {\n- return null;\n+ /** mark the current recovery as done */\n+ public void markAsDone() {\n+ if (finished.compareAndSet(false, true)) {\n+ assert tempFileNames.isEmpty() : \"not all temporary files are renamed\";\n+ // release the initial reference. recovery files will be cleaned as soon as ref count goes to zero, potentially now\n+ decRef();\n+ listener.onRecoveryDone(state);\n }\n- return outputs.remove(name);\n }\n \n- public synchronized IndexOutput openAndPutIndexOutput(String key, String fileName, StoreFileMetaData metaData, Store store) throws IOException {\n- if (isCanceled()) {\n- return null;\n+ private String getTempNameForFile(String origFile) {\n+ return tempFilePrefix + origFile;\n+ }\n+\n+ /** return true if the give file is a temporary file name issued by this recovery */\n+ private boolean isTempFile(String filename) {\n+ return tempFileNames.contains(filename);\n+ }\n+\n+ public IndexOutput getOpenIndexOutput(String key) {\n+ ensureNotFinished();\n+ return openIndexOutputs.get(key);\n+ }\n+\n+ /** returns the original file name for a temporary file name issued by this recovery */\n+ private String originalNameForTempFile(String tempFile) {\n+ if (!isTempFile(tempFile)) {\n+ throw new ElasticsearchException(\"[\" + tempFile + \"] is not a temporary file made by this recovery\");\n }\n- final ConcurrentMap<String, IndexOutput> outputs = openIndexOutputs;\n- IndexOutput indexOutput = store.createVerifyingOutput(fileName, IOContext.DEFAULT, metaData);\n- outputs.put(key, indexOutput);\n+ return tempFile.substring(tempFilePrefix.length());\n+ }\n+\n+ /** remove and {@link org.apache.lucene.store.IndexOutput} for a given file. It is the caller's responsibility to close it */\n+ public IndexOutput removeOpenIndexOutputs(String name) {\n+ ensureNotFinished();\n+ return openIndexOutputs.remove(name);\n+ }\n+\n+ /**\n+ * Creates an {@link org.apache.lucene.store.IndexOutput} for the given file name. 
Note that the\n+ * IndexOutput actually point at a temporary file.\n+ * <p/>\n+ * Note: You can use {@link #getOpenIndexOutput(String)} with the same filename to retrieve the same IndexOutput\n+ * at a later stage\n+ */\n+ public IndexOutput openAndPutIndexOutput(String fileName, StoreFileMetaData metaData, Store store) throws IOException {\n+ ensureNotFinished();\n+ String tempFileName = getTempNameForFile(fileName);\n+ // add first, before it's created\n+ tempFileNames.add(tempFileName);\n+ IndexOutput indexOutput = store.createVerifyingOutput(tempFileName, IOContext.DEFAULT, metaData);\n+ openIndexOutputs.put(fileName, indexOutput);\n return indexOutput;\n }\n+\n+ /**\n+ * Tries to increment the refCount of this RecoveryStatus instance. This method will return <tt>true</tt> iff the refCount was\n+ * incremented successfully otherwise <tt>false</tt>. Be sure to always call a corresponding {@link #decRef}, in a finally clause;\n+ *\n+ * @see #decRef()\n+ */\n+ public final boolean tryIncRef() {\n+ do {\n+ int i = refCount.get();\n+ if (i > 0) {\n+ if (refCount.compareAndSet(i, i + 1)) {\n+ return true;\n+ }\n+ } else {\n+ return false;\n+ }\n+ } while (true);\n+ }\n+\n+ /**\n+ * Decreases the refCount of this Store instance.If the refCount drops to 0, the recovery process this status represents\n+ * is seen as done and resources and temporary files are deleted.\n+ *\n+ * @see #tryIncRef\n+ */\n+ public final void decRef() {\n+ int i = refCount.decrementAndGet();\n+ assert i >= 0;\n+ if (i == 0) {\n+ closeInternal();\n+ }\n+ }\n+\n+ private void closeInternal() {\n+ try {\n+ // clean open index outputs\n+ Iterator<Entry<String, IndexOutput>> iterator = openIndexOutputs.entrySet().iterator();\n+ while (iterator.hasNext()) {\n+ Map.Entry<String, IndexOutput> entry = iterator.next();\n+ IOUtils.closeWhileHandlingException(entry.getValue());\n+ iterator.remove();\n+ }\n+ // trash temporary files\n+ for (String file : tempFileNames) {\n+ logger.trace(\"cleaning temporary file [{}]\", file);\n+ store.deleteQuiet(file);\n+ }\n+ legacyChecksums.clear();\n+ } finally {\n+ // free store. increment happens in constructor\n+ store.decRef();\n+ }\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return shardId + \" [\" + recoveryId + \"]\";\n+ }\n+\n+ private void ensureNotFinished() {\n+ if (finished.get()) {\n+ throw new ElasticsearchException(\"RecoveryStatus is used after it was finished. 
Probably a mismatch between incRef/decRef calls\");\n+ }\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryStatus.java", "status": "modified" }, { "diff": "@@ -19,12 +19,13 @@\n \n package org.elasticsearch.indices.recovery;\n \n-import com.google.common.collect.Sets;\n+import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.store.AlreadyClosedException;\n-import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.IndexOutput;\n-import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.StopWatch;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -33,26 +34,25 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n-import org.elasticsearch.common.util.concurrent.ConcurrentMapLong;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.index.IndexShardMissingException;\n import org.elasticsearch.index.engine.RecoveryEngineException;\n-import org.elasticsearch.index.shard.*;\n+import org.elasticsearch.index.shard.IllegalIndexShardStateException;\n+import org.elasticsearch.index.shard.IndexShardClosedException;\n+import org.elasticsearch.index.shard.IndexShardNotStartedException;\n+import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n import org.elasticsearch.index.store.Store;\n+import org.elasticsearch.index.store.StoreFileMetaData;\n import org.elasticsearch.index.translog.Translog;\n import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.indices.IndicesLifecycle;\n-import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.*;\n \n import java.util.Collections;\n-import java.util.Iterator;\n import java.util.Map;\n-import java.util.Map.Entry;\n-import java.util.Set;\n \n import static org.elasticsearch.common.unit.TimeValue.timeValueMillis;\n \n@@ -77,20 +77,20 @@ public static class Actions {\n \n private final TransportService transportService;\n \n- private final IndicesService indicesService;\n-\n private final RecoverySettings recoverySettings;\n+ private final ClusterService clusterService;\n \n- private final ConcurrentMapLong<RecoveryStatus> onGoingRecoveries = ConcurrentCollections.newConcurrentMapLong();\n+ private final RecoveriesCollection onGoingRecoveries;\n \n @Inject\n- public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService transportService, IndicesService indicesService,\n- IndicesLifecycle indicesLifecycle, RecoverySettings recoverySettings) {\n+ public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService transportService,\n+ IndicesLifecycle indicesLifecycle, RecoverySettings recoverySettings, ClusterService clusterService) {\n super(settings);\n this.threadPool = threadPool;\n this.transportService = transportService;\n- this.indicesService = indicesService;\n this.recoverySettings = recoverySettings;\n+ this.clusterService = clusterService;\n+ 
this.onGoingRecoveries = new RecoveriesCollection(logger);\n \n transportService.registerHandler(Actions.FILES_INFO, new FilesInfoRequestHandler());\n transportService.registerHandler(Actions.FILE_CHUNK, new FileChunkTransportRequestHandler());\n@@ -103,261 +103,154 @@ public RecoveryTarget(Settings settings, ThreadPool threadPool, TransportService\n @Override\n public void beforeIndexShardClosed(ShardId shardId, @Nullable IndexShard indexShard) {\n if (indexShard != null) {\n- removeAndCleanOnGoingRecovery(findRecoveryByShard(indexShard));\n+ onGoingRecoveries.cancelRecoveriesForShard(shardId, \"shard closed\");\n }\n }\n });\n }\n \n- public RecoveryStatus recoveryStatus(IndexShard indexShard) {\n- RecoveryStatus recoveryStatus = findRecoveryByShard(indexShard);\n- if (recoveryStatus == null) {\n- return null;\n- }\n- if (recoveryStatus.recoveryState().getTimer().startTime() > 0 && recoveryStatus.stage() != RecoveryState.Stage.DONE) {\n- recoveryStatus.recoveryState().getTimer().time(System.currentTimeMillis() - recoveryStatus.recoveryState().getTimer().startTime());\n- }\n- return recoveryStatus;\n- }\n-\n- public void cancelRecovery(IndexShard indexShard) {\n- RecoveryStatus recoveryStatus = findRecoveryByShard(indexShard);\n- // it might be if the recovery source got canceled first\n- if (recoveryStatus == null) {\n- return;\n- }\n- if (recoveryStatus.sentCanceledToSource) {\n- return;\n- }\n- recoveryStatus.cancel();\n- try {\n- if (recoveryStatus.recoveryThread != null) {\n- recoveryStatus.recoveryThread.interrupt();\n+ public RecoveryState recoveryState(IndexShard indexShard) {\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.findRecoveryByShard(indexShard)) {\n+ if (statusRef == null) {\n+ return null;\n }\n- // give it a grace period of actually getting the sent ack part\n- final long sleepTime = 100;\n- final long maxSleepTime = 10000;\n- long rounds = Math.round(maxSleepTime / sleepTime);\n- while (!recoveryStatus.sentCanceledToSource &&\n- transportService.nodeConnected(recoveryStatus.sourceNode) &&\n- rounds > 0) {\n- rounds--;\n- try {\n- Thread.sleep(sleepTime);\n- } catch (InterruptedException e) {\n- Thread.currentThread().interrupt();\n- break; // interrupted - step out!\n- }\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ if (recoveryStatus.state().getTimer().startTime() > 0 && recoveryStatus.stage() != RecoveryState.Stage.DONE) {\n+ recoveryStatus.state().getTimer().time(System.currentTimeMillis() - recoveryStatus.state().getTimer().startTime());\n }\n- } finally {\n- removeAndCleanOnGoingRecovery(recoveryStatus);\n+ return recoveryStatus.state();\n+ } catch (Exception e) {\n+ // shouldn't really happen, but have to be here due to auto close\n+ throw new ElasticsearchException(\"error while getting recovery state\", e);\n }\n-\n }\n \n- public void startRecovery(final StartRecoveryRequest request, final InternalIndexShard indexShard, final RecoveryListener listener) {\n+ public void startRecovery(final InternalIndexShard indexShard, final RecoveryState.Type recoveryType, final DiscoveryNode sourceNode, final RecoveryListener listener) {\n try {\n- indexShard.recovering(\"from \" + request.sourceNode());\n+ indexShard.recovering(\"from \" + sourceNode);\n } catch (IllegalIndexShardStateException e) {\n // that's fine, since we might be called concurrently, just ignore this, we are already recovering\n- listener.onIgnoreRecovery(false, \"already in recovering process, \" + e.getMessage());\n+ logger.debug(\"{} ignore recovery. 
already in recovering process, {}\", indexShard.shardId(), e.getMessage());\n return;\n }\n // create a new recovery status, and process...\n- final RecoveryStatus recoveryStatus = new RecoveryStatus(request.recoveryId(), indexShard, request.sourceNode());\n- recoveryStatus.recoveryState.setType(request.recoveryType());\n- recoveryStatus.recoveryState.setSourceNode(request.sourceNode());\n- recoveryStatus.recoveryState.setTargetNode(request.targetNode());\n- recoveryStatus.recoveryState.setPrimary(indexShard.routingEntry().primary());\n- onGoingRecoveries.put(recoveryStatus.recoveryId, recoveryStatus);\n-\n- threadPool.generic().execute(new Runnable() {\n- @Override\n- public void run() {\n- doRecovery(request, recoveryStatus, listener);\n- }\n- });\n+ RecoveryState recoveryState = new RecoveryState(indexShard.shardId());\n+ recoveryState.setType(recoveryType);\n+ recoveryState.setSourceNode(sourceNode);\n+ recoveryState.setTargetNode(clusterService.localNode());\n+ recoveryState.setPrimary(indexShard.routingEntry().primary());\n+ final long recoveryId = onGoingRecoveries.startRecovery(indexShard, sourceNode, recoveryState, listener);\n+ threadPool.generic().execute(new RecoveryRunner(recoveryId));\n+\n }\n \n- public void retryRecovery(final StartRecoveryRequest request, TimeValue retryAfter, final RecoveryStatus status, final RecoveryListener listener) {\n- threadPool.schedule(retryAfter, ThreadPool.Names.GENERIC, new Runnable() {\n- @Override\n- public void run() {\n- doRecovery(request, status, listener);\n- }\n- });\n+ protected void retryRecovery(final long recoveryId, TimeValue retryAfter) {\n+ logger.trace(\"will retrying recovery with id [{}] in [{}]\", recoveryId, retryAfter);\n+ threadPool.schedule(retryAfter, ThreadPool.Names.GENERIC, new RecoveryRunner(recoveryId));\n }\n \n- private void doRecovery(final StartRecoveryRequest request, final RecoveryStatus recoveryStatus, final RecoveryListener listener) {\n- assert request.sourceNode() != null : \"can't do a recovery without a source node\";\n- final InternalIndexShard shard = recoveryStatus.indexShard;\n- if (shard == null) {\n- listener.onIgnoreRecovery(false, \"shard missing locally, stop recovery\");\n- return;\n- }\n- if (shard.state() == IndexShardState.CLOSED) {\n- listener.onIgnoreRecovery(false, \"local shard closed, stop recovery\");\n- return;\n- }\n- if (recoveryStatus.isCanceled()) {\n- // don't remove it, the cancellation code will remove it...\n- listener.onIgnoreRecovery(false, \"canceled recovery\");\n+ private void doRecovery(final RecoveryStatus recoveryStatus) {\n+ assert recoveryStatus.sourceNode() != null : \"can't do a recovery without a source node\";\n+\n+ logger.trace(\"collecting local files for {}\", recoveryStatus);\n+ final Map<String, StoreFileMetaData> existingFiles;\n+ try {\n+ existingFiles = recoveryStatus.store().getMetadata().asMap();\n+ } catch (Exception e) {\n+ logger.debug(\"error while listing local files, recovery as if there are none\", e);\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(),\n+ new RecoveryFailedException(recoveryStatus.state(), \"failed to list local files\", e), true);\n return;\n }\n+ StartRecoveryRequest request = new StartRecoveryRequest(recoveryStatus.shardId(), recoveryStatus.sourceNode(), clusterService.localNode(),\n+ false, existingFiles, recoveryStatus.state().getType(), recoveryStatus.recoveryId());\n \n- recoveryStatus.recoveryThread = Thread.currentThread();\n- if (shard.store().tryIncRef()) {\n- try {\n- logger.trace(\"[{}][{}] starting 
recovery from {}\", request.shardId().index().name(), request.shardId().id(), request.sourceNode());\n-\n- StopWatch stopWatch = new StopWatch().start();\n- RecoveryResponse recoveryResponse = transportService.submitRequest(request.sourceNode(), RecoverySource.Actions.START_RECOVERY, request, new FutureTransportResponseHandler<RecoveryResponse>() {\n- @Override\n- public RecoveryResponse newInstance() {\n- return new RecoveryResponse();\n- }\n- }).txGet();\n- if (shard.state() == IndexShardState.CLOSED) {\n- removeAndCleanOnGoingRecovery(recoveryStatus);\n- listener.onIgnoreRecovery(false, \"local shard closed, stop recovery\");\n- return;\n- }\n- stopWatch.stop();\n- if (logger.isTraceEnabled()) {\n- StringBuilder sb = new StringBuilder();\n- sb.append('[').append(request.shardId().index().name()).append(']').append('[').append(request.shardId().id()).append(\"] \");\n- sb.append(\"recovery completed from \").append(request.sourceNode()).append(\", took[\").append(stopWatch.totalTime()).append(\"]\\n\");\n- sb.append(\" phase1: recovered_files [\").append(recoveryResponse.phase1FileNames.size()).append(\"]\").append(\" with total_size of [\").append(new ByteSizeValue(recoveryResponse.phase1TotalSize)).append(\"]\")\n- .append(\", took [\").append(timeValueMillis(recoveryResponse.phase1Time)).append(\"], throttling_wait [\").append(timeValueMillis(recoveryResponse.phase1ThrottlingWaitTime)).append(']')\n- .append(\"\\n\");\n- sb.append(\" : reusing_files [\").append(recoveryResponse.phase1ExistingFileNames.size()).append(\"] with total_size of [\").append(new ByteSizeValue(recoveryResponse.phase1ExistingTotalSize)).append(\"]\\n\");\n- sb.append(\" phase2: start took [\").append(timeValueMillis(recoveryResponse.startTime)).append(\"]\\n\");\n- sb.append(\" : recovered [\").append(recoveryResponse.phase2Operations).append(\"]\").append(\" transaction log operations\")\n- .append(\", took [\").append(timeValueMillis(recoveryResponse.phase2Time)).append(\"]\")\n- .append(\"\\n\");\n- sb.append(\" phase3: recovered [\").append(recoveryResponse.phase3Operations).append(\"]\").append(\" transaction log operations\")\n- .append(\", took [\").append(timeValueMillis(recoveryResponse.phase3Time)).append(\"]\");\n- logger.trace(sb.toString());\n- } else if (logger.isDebugEnabled()) {\n- logger.debug(\"{} recovery completed from [{}], took [{}]\", request.shardId(), request.sourceNode(), stopWatch.totalTime());\n- }\n- removeAndCleanOnGoingRecovery(recoveryStatus);\n- listener.onRecoveryDone();\n- } catch (Throwable e) {\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"[{}][{}] Got exception on recovery\", e, request.shardId().index().name(), request.shardId().id());\n- }\n- if (recoveryStatus.isCanceled()) {\n- // don't remove it, the cancellation code will remove it...\n- listener.onIgnoreRecovery(false, \"canceled recovery\");\n- return;\n- }\n- if (shard.state() == IndexShardState.CLOSED) {\n- removeAndCleanOnGoingRecovery(recoveryStatus);\n- listener.onIgnoreRecovery(false, \"local shard closed, stop recovery\");\n- return;\n- }\n- Throwable cause = ExceptionsHelper.unwrapCause(e);\n- if (cause instanceof RecoveryEngineException) {\n- // unwrap an exception that was thrown as part of the recovery\n- cause = cause.getCause();\n- }\n- // do it twice, in case we have double transport exception\n- cause = ExceptionsHelper.unwrapCause(cause);\n- if (cause instanceof RecoveryEngineException) {\n- // unwrap an exception that was thrown as part of the recovery\n- cause = cause.getCause();\n- }\n-\n- 
// here, we would add checks against exception that need to be retried (and not removeAndClean in this case)\n-\n- if (cause instanceof IndexShardNotStartedException || cause instanceof IndexMissingException || cause instanceof IndexShardMissingException) {\n- // if the target is not ready yet, retry\n- listener.onRetryRecovery(TimeValue.timeValueMillis(500), recoveryStatus);\n- return;\n- }\n-\n- if (cause instanceof DelayRecoveryException) {\n- listener.onRetryRecovery(TimeValue.timeValueMillis(500), recoveryStatus);\n- return;\n- }\n-\n- // here, we check against ignore recovery options\n-\n- // in general, no need to clean the shard on ignored recovery, since we want to try and reuse it later\n- // it will get deleted in the IndicesStore if all are allocated and no shard exists on this node...\n+ try {\n+ logger.trace(\"[{}][{}] starting recovery from {}\", request.shardId().index().name(), request.shardId().id(), request.sourceNode());\n \n- removeAndCleanOnGoingRecovery(recoveryStatus);\n+ StopWatch stopWatch = new StopWatch().start();\n+ recoveryStatus.setWaitingRecoveryThread(Thread.currentThread());\n \n- if (cause instanceof ConnectTransportException) {\n- listener.onIgnoreRecovery(true, \"source node disconnected (\" + request.sourceNode() + \")\");\n- return;\n- }\n-\n- if (cause instanceof IndexShardClosedException) {\n- listener.onIgnoreRecovery(true, \"source shard is closed (\" + request.sourceNode() + \")\");\n- return;\n+ RecoveryResponse recoveryResponse = transportService.submitRequest(request.sourceNode(), RecoverySource.Actions.START_RECOVERY, request, new FutureTransportResponseHandler<RecoveryResponse>() {\n+ @Override\n+ public RecoveryResponse newInstance() {\n+ return new RecoveryResponse();\n }\n+ }).txGet();\n+ recoveryStatus.clearWaitingRecoveryThread(Thread.currentThread());\n+ stopWatch.stop();\n+ if (logger.isTraceEnabled()) {\n+ StringBuilder sb = new StringBuilder();\n+ sb.append('[').append(request.shardId().index().name()).append(']').append('[').append(request.shardId().id()).append(\"] \");\n+ sb.append(\"recovery completed from \").append(request.sourceNode()).append(\", took[\").append(stopWatch.totalTime()).append(\"]\\n\");\n+ sb.append(\" phase1: recovered_files [\").append(recoveryResponse.phase1FileNames.size()).append(\"]\").append(\" with total_size of [\").append(new ByteSizeValue(recoveryResponse.phase1TotalSize)).append(\"]\")\n+ .append(\", took [\").append(timeValueMillis(recoveryResponse.phase1Time)).append(\"], throttling_wait [\").append(timeValueMillis(recoveryResponse.phase1ThrottlingWaitTime)).append(']')\n+ .append(\"\\n\");\n+ sb.append(\" : reusing_files [\").append(recoveryResponse.phase1ExistingFileNames.size()).append(\"] with total_size of [\").append(new ByteSizeValue(recoveryResponse.phase1ExistingTotalSize)).append(\"]\\n\");\n+ sb.append(\" phase2: start took [\").append(timeValueMillis(recoveryResponse.startTime)).append(\"]\\n\");\n+ sb.append(\" : recovered [\").append(recoveryResponse.phase2Operations).append(\"]\").append(\" transaction log operations\")\n+ .append(\", took [\").append(timeValueMillis(recoveryResponse.phase2Time)).append(\"]\")\n+ .append(\"\\n\");\n+ sb.append(\" phase3: recovered [\").append(recoveryResponse.phase3Operations).append(\"]\").append(\" transaction log operations\")\n+ .append(\", took [\").append(timeValueMillis(recoveryResponse.phase3Time)).append(\"]\");\n+ logger.trace(sb.toString());\n+ } else if (logger.isDebugEnabled()) {\n+ logger.debug(\"{} recovery completed from [{}], took 
[{}]\", request.shardId(), request.sourceNode(), stopWatch.totalTime());\n+ }\n+ // do this through ongoing recoveries to remove it from the collection\n+ onGoingRecoveries.markRecoveryAsDone(recoveryStatus.recoveryId());\n+ } catch (Throwable e) {\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"[{}][{}] Got exception on recovery\", e, request.shardId().index().name(), request.shardId().id());\n+ }\n+ Throwable cause = ExceptionsHelper.unwrapCause(e);\n+ if (cause instanceof RecoveryEngineException) {\n+ // unwrap an exception that was thrown as part of the recovery\n+ cause = cause.getCause();\n+ }\n+ // do it twice, in case we have double transport exception\n+ cause = ExceptionsHelper.unwrapCause(cause);\n+ if (cause instanceof RecoveryEngineException) {\n+ // unwrap an exception that was thrown as part of the recovery\n+ cause = cause.getCause();\n+ }\n \n- if (cause instanceof AlreadyClosedException) {\n- listener.onIgnoreRecovery(true, \"source shard is closed (\" + request.sourceNode() + \")\");\n- return;\n- }\n+ // here, we would add checks against exception that need to be retried (and not removeAndClean in this case)\n \n- logger.warn(\"[{}][{}] recovery from [{}] failed\", e, request.shardId().index().name(), request.shardId().id(), request.sourceNode());\n- listener.onRecoveryFailure(new RecoveryFailedException(request, e), true);\n- } finally {\n- shard.store().decRef();\n+ if (cause instanceof IndexShardNotStartedException || cause instanceof IndexMissingException || cause instanceof IndexShardMissingException) {\n+ // if the target is not ready yet, retry\n+ retryRecovery(recoveryStatus.recoveryId(), TimeValue.timeValueMillis(500));\n+ return;\n }\n- } else {\n- listener.onIgnoreRecovery(false, \"local store closed, stop recovery\");\n- }\n- }\n-\n- public static interface RecoveryListener {\n- void onRecoveryDone();\n \n- void onRetryRecovery(TimeValue retryAfter, RecoveryStatus status);\n+ if (cause instanceof DelayRecoveryException) {\n+ retryRecovery(recoveryStatus.recoveryId(), TimeValue.timeValueMillis(500));\n+ return;\n+ }\n \n- void onIgnoreRecovery(boolean removeShard, String reason);\n+ if (cause instanceof ConnectTransportException) {\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, \"source node disconnected\", cause), false);\n+ return;\n+ }\n \n- void onRecoveryFailure(RecoveryFailedException e, boolean sendShardFailure);\n- }\n+ if (cause instanceof IndexShardClosedException) {\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, \"source shard is closed\", cause), false);\n+ return;\n+ }\n \n- @Nullable\n- private RecoveryStatus findRecoveryByShard(IndexShard indexShard) {\n- for (RecoveryStatus recoveryStatus : onGoingRecoveries.values()) {\n- if (recoveryStatus.indexShard == indexShard) {\n- return recoveryStatus;\n+ if (cause instanceof AlreadyClosedException) {\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, \"source shard is closed\", cause), false);\n+ return;\n }\n+\n+ onGoingRecoveries.failRecovery(recoveryStatus.recoveryId(), new RecoveryFailedException(request, e), true);\n }\n- return null;\n }\n \n- private void removeAndCleanOnGoingRecovery(@Nullable RecoveryStatus status) {\n- if (status == null) {\n- return;\n- }\n- // clean it from the on going recoveries since it is being closed\n- status = onGoingRecoveries.remove(status.recoveryId);\n- if (status == null) {\n- return;\n- }\n- // just 
mark it as canceled as well, just in case there are in flight requests\n- // coming from the recovery target\n- status.cancel();\n- // clean open index outputs\n- Set<Entry<String, IndexOutput>> entrySet = status.cancelAndClearOpenIndexInputs();\n- Iterator<Entry<String, IndexOutput>> iterator = entrySet.iterator();\n- while (iterator.hasNext()) {\n- Map.Entry<String, IndexOutput> entry = iterator.next();\n- synchronized (entry.getValue()) {\n- IOUtils.closeWhileHandlingException(entry.getValue());\n- }\n- iterator.remove();\n+ public static interface RecoveryListener {\n+ void onRecoveryDone(RecoveryState state);\n \n- }\n- status.legacyChecksums.clear();\n+ void onRecoveryFailure(RecoveryState state, RecoveryFailedException e, boolean sendShardFailure);\n }\n \n class PrepareForTranslogOperationsRequestHandler extends BaseTransportRequestHandler<RecoveryPrepareForTranslogOperationsRequest> {\n@@ -374,12 +267,12 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryPrepareForTranslogOperationsRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- onGoingRecovery.indexShard.performRecoveryPrepareForTranslog();\n- onGoingRecovery.stage(RecoveryState.Stage.TRANSLOG);\n- onGoingRecovery.recoveryState.getStart().checkIndexTime(onGoingRecovery.indexShard.checkIndexTook());\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ recoveryStatus.indexShard().performRecoveryPrepareForTranslog();\n+ recoveryStatus.stage(RecoveryState.Stage.TRANSLOG);\n+ recoveryStatus.state().getStart().checkIndexTime(recoveryStatus.indexShard().checkIndexTook());\n+ }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n }\n }\n@@ -398,13 +291,12 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryFinalizeRecoveryRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- onGoingRecovery.stage(RecoveryState.Stage.FINALIZE);\n- onGoingRecovery.indexShard.performRecoveryFinalization(false, onGoingRecovery);\n- onGoingRecovery.recoveryState().getTimer().time(System.currentTimeMillis() - onGoingRecovery.recoveryState().getTimer().startTime());\n- onGoingRecovery.stage(RecoveryState.Stage.DONE);\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ recoveryStatus.indexShard().performRecoveryFinalization(false, recoveryStatus.state());\n+ recoveryStatus.state().getTimer().time(System.currentTimeMillis() - recoveryStatus.state().getTimer().startTime());\n+ recoveryStatus.stage(RecoveryState.Stage.DONE);\n+ }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n }\n }\n@@ -424,16 +316,15 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryTranslogOperationsRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- InternalIndexShard shard = (InternalIndexShard) 
indicesService.indexServiceSafe(request.shardId().index().name()).shardSafe(request.shardId().id());\n- for (Translog.Operation operation : request.operations()) {\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n- shard.performRecoveryOperation(operation);\n- onGoingRecovery.recoveryState.getTranslog().incrementTranslogOperations();\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ for (Translog.Operation operation : request.operations()) {\n+ recoveryStatus.indexShard().performRecoveryOperation(operation);\n+ recoveryStatus.state().getTranslog().incrementTranslogOperations();\n+ }\n }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n+\n }\n }\n \n@@ -451,18 +342,19 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryFilesInfoRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n- final RecoveryState.Index index = onGoingRecovery.recoveryState().getIndex();\n- index.addFileDetails(request.phase1FileNames, request.phase1FileSizes);\n- index.addReusedFileDetails(request.phase1ExistingFileNames, request.phase1ExistingFileSizes);\n- index.totalByteCount(request.phase1TotalSize);\n- index.totalFileCount(request.phase1FileNames.size() + request.phase1ExistingFileNames.size());\n- index.reusedByteCount(request.phase1ExistingTotalSize);\n- index.reusedFileCount(request.phase1ExistingFileNames.size());\n- // recoveryBytesCount / recoveryFileCount will be set as we go...\n- onGoingRecovery.stage(RecoveryState.Stage.INDEX);\n- channel.sendResponse(TransportResponse.Empty.INSTANCE);\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ final RecoveryState.Index index = recoveryStatus.state().getIndex();\n+ index.addFileDetails(request.phase1FileNames, request.phase1FileSizes);\n+ index.addReusedFileDetails(request.phase1ExistingFileNames, request.phase1ExistingFileSizes);\n+ index.totalByteCount(request.phase1TotalSize);\n+ index.totalFileCount(request.phase1FileNames.size() + request.phase1ExistingFileNames.size());\n+ index.reusedByteCount(request.phase1ExistingTotalSize);\n+ index.reusedFileCount(request.phase1ExistingFileNames.size());\n+ // recoveryBytesCount / recoveryFileCount will be set as we go...\n+ recoveryStatus.stage(RecoveryState.Stage.INDEX);\n+ channel.sendResponse(TransportResponse.Empty.INSTANCE);\n+ }\n }\n }\n \n@@ -480,40 +372,15 @@ public String executor() {\n \n @Override\n public void messageReceived(RecoveryCleanFilesRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- final Store store = onGoingRecovery.indexShard.store();\n- store.incRef();\n- try {\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n // first, we go and move files that were created with the recovery id suffix to\n // the actual names, its ok if we have a corrupted index here, since we have replicas\n // to recover from in case of a full 
cluster shutdown just when this code executes...\n- String prefix = \"recovery.\" + onGoingRecovery.recoveryState().getTimer().startTime() + \".\";\n- Set<String> filesToRename = Sets.newHashSet();\n- for (String existingFile : store.directory().listAll()) {\n- if (existingFile.startsWith(prefix)) {\n- filesToRename.add(existingFile.substring(prefix.length(), existingFile.length()));\n- }\n- }\n- Exception failureToRename = null;\n- if (!filesToRename.isEmpty()) {\n- // first, go and delete the existing ones\n- final Directory directory = store.directory();\n- for (String file : filesToRename) {\n- try {\n- directory.deleteFile(file);\n- } catch (Throwable ex) {\n- logger.debug(\"failed to delete file [{}]\", ex, file);\n- }\n- }\n- for (String fileToRename : filesToRename) {\n- // now, rename the files... and fail it it won't work\n- store.renameFile(prefix + fileToRename, fileToRename);\n- }\n- }\n+ recoveryStatus.renameAllTempFiles();\n+ final Store store = recoveryStatus.store();\n // now write checksums\n- onGoingRecovery.legacyChecksums.write(store);\n+ recoveryStatus.legacyChecksums().write(store);\n \n for (String existingFile : store.directory().listAll()) {\n // don't delete snapshot file, or the checksums file (note, this is extra protection since the Store won't delete checksum)\n@@ -526,8 +393,6 @@ public void messageReceived(RecoveryCleanFilesRequest request, TransportChannel\n }\n }\n channel.sendResponse(TransportResponse.Empty.INSTANCE);\n- } finally {\n- store.decRef();\n }\n }\n }\n@@ -546,103 +411,85 @@ public String executor() {\n \n @Override\n public void messageReceived(final RecoveryFileChunkRequest request, TransportChannel channel) throws Exception {\n- RecoveryStatus onGoingRecovery = onGoingRecoveries.get(request.recoveryId());\n- validateRecoveryStatus(onGoingRecovery, request.shardId());\n-\n- final Store store = onGoingRecovery.indexShard.store();\n- store.incRef();\n- try {\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatusSafe(request.recoveryId(), request.shardId())) {\n+ final RecoveryStatus recoveryStatus = statusRef.status();\n+ final Store store = recoveryStatus.store();\n IndexOutput indexOutput;\n if (request.position() == 0) {\n- // first request\n- onGoingRecovery.legacyChecksums.remove(request.name());\n- indexOutput = onGoingRecovery.removeOpenIndexOutputs(request.name());\n- IOUtils.closeWhileHandlingException(indexOutput);\n- // we create an output with no checksum, this is because the pure binary data of the file is not\n- // the checksum (because of seek). 
We will create the checksum file once copying is done\n-\n- // also, we check if the file already exists, if it does, we create a file name based\n- // on the current recovery \"id\" and later we make the switch, the reason for that is that\n- // we only want to overwrite the index files once we copied all over, and not create a\n- // case where the index is half moved\n-\n- String fileName = request.name();\n- if (store.directory().fileExists(fileName)) {\n- fileName = \"recovery.\" + onGoingRecovery.recoveryState().getTimer().startTime() + \".\" + fileName;\n- }\n- indexOutput = onGoingRecovery.openAndPutIndexOutput(request.name(), fileName, request.metadata(), store);\n+ indexOutput = recoveryStatus.openAndPutIndexOutput(request.name(), request.metadata(), store);\n } else {\n- indexOutput = onGoingRecovery.getOpenIndexOutput(request.name());\n+ indexOutput = recoveryStatus.getOpenIndexOutput(request.name());\n+ }\n+ if (recoverySettings.rateLimiter() != null) {\n+ recoverySettings.rateLimiter().pause(request.content().length());\n }\n- if (indexOutput == null) {\n- // shard is getting closed on us\n- throw new IndexShardClosedException(request.shardId());\n+ BytesReference content = request.content();\n+ if (!content.hasArray()) {\n+ content = content.toBytesArray();\n }\n- boolean success = false;\n- synchronized (indexOutput) {\n+ indexOutput.writeBytes(content.array(), content.arrayOffset(), content.length());\n+ recoveryStatus.state().getIndex().addRecoveredByteCount(content.length());\n+ RecoveryState.File file = recoveryStatus.state().getIndex().file(request.name());\n+ if (file != null) {\n+ file.updateRecovered(request.length());\n+ }\n+ if (indexOutput.getFilePointer() >= request.length() || request.lastChunk()) {\n try {\n- if (recoverySettings.rateLimiter() != null) {\n- recoverySettings.rateLimiter().pause(request.content().length());\n- }\n- BytesReference content = request.content();\n- if (!content.hasArray()) {\n- content = content.toBytesArray();\n- }\n- indexOutput.writeBytes(content.array(), content.arrayOffset(), content.length());\n- onGoingRecovery.recoveryState.getIndex().addRecoveredByteCount(content.length());\n- RecoveryState.File file = onGoingRecovery.recoveryState.getIndex().file(request.name());\n- if (file != null) {\n- file.updateRecovered(request.length());\n- }\n- if (indexOutput.getFilePointer() >= request.length() || request.lastChunk()) {\n- Store.verify(indexOutput);\n- // we are done\n- indexOutput.close();\n- // write the checksum\n- onGoingRecovery.legacyChecksums.add(request.metadata());\n- store.directory().sync(Collections.singleton(request.name()));\n- IndexOutput remove = onGoingRecovery.removeOpenIndexOutputs(request.name());\n- onGoingRecovery.recoveryState.getIndex().addRecoveredFileCount(1);\n- assert remove == null || remove == indexOutput; // remove maybe null if we got canceled\n- }\n- success = true;\n+ Store.verify(indexOutput);\n } finally {\n- if (!success || onGoingRecovery.isCanceled()) {\n- try {\n- IndexOutput remove = onGoingRecovery.removeOpenIndexOutputs(request.name());\n- assert remove == null || remove == indexOutput;\n- IOUtils.closeWhileHandlingException(indexOutput);\n- } finally {\n- // trash the file - unsuccessful\n- store.deleteQuiet(request.name(), \"recovery.\" + onGoingRecovery.recoveryState().getTimer().startTime() + \".\" + request.name());\n- }\n- }\n+ // we are done\n+ indexOutput.close();\n }\n+ // write the checksum\n+ recoveryStatus.legacyChecksums().add(request.metadata());\n+ 
store.directory().sync(Collections.singleton(request.name()));\n+ IndexOutput remove = recoveryStatus.removeOpenIndexOutputs(request.name());\n+ recoveryStatus.state().getIndex().addRecoveredFileCount(1);\n+ assert remove == null || remove == indexOutput; // remove maybe null if we got finished\n }\n- if (onGoingRecovery.isCanceled()) {\n- onGoingRecovery.sentCanceledToSource = true;\n- throw new IndexShardClosedException(request.shardId());\n- }\n- channel.sendResponse(TransportResponse.Empty.INSTANCE);\n- } finally {\n- store.decRef();\n }\n+ channel.sendResponse(TransportResponse.Empty.INSTANCE);\n }\n }\n \n- private void validateRecoveryStatus(RecoveryStatus onGoingRecovery, ShardId shardId) {\n- if (onGoingRecovery == null) {\n- // shard is getting closed on us\n- throw new IndexShardClosedException(shardId);\n+ class RecoveryRunner extends AbstractRunnable {\n+\n+ final long recoveryId;\n+\n+ RecoveryRunner(long recoveryId) {\n+ this.recoveryId = recoveryId;\n }\n- if (onGoingRecovery.indexShard.state() == IndexShardState.CLOSED) {\n- removeAndCleanOnGoingRecovery(onGoingRecovery);\n- onGoingRecovery.sentCanceledToSource = true;\n- throw new IndexShardClosedException(shardId);\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ try (RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatus(recoveryId)) {\n+ if (statusRef == null) {\n+ logger.error(\"unexpected error during recovery [{}], failing shard\", t, recoveryId);\n+ onGoingRecoveries.failRecovery(recoveryId,\n+ new RecoveryFailedException(statusRef.status().state(), \"unexpected error\", t),\n+ true // be safe\n+ );\n+ } else {\n+ logger.debug(\"unexpected error during recovery, but recovery id [{}] is finished\", t, recoveryId);\n+ }\n+ }\n }\n- if (onGoingRecovery.isCanceled()) {\n- onGoingRecovery.sentCanceledToSource = true;\n- throw new IndexShardClosedException(shardId);\n+\n+ @Override\n+ public void doRun() {\n+ RecoveriesCollection.StatusRef statusRef = onGoingRecoveries.getStatus(recoveryId);\n+ if (statusRef == null) {\n+ logger.trace(\"not running recovery with id [{}] - can't find it (probably finished)\", recoveryId);\n+ return;\n+ }\n+ try {\n+ doRecovery(statusRef.status());\n+ } finally {\n+ // make sure we never interrupt the thread after we have released it back to the pool\n+ statusRef.status().clearWaitingRecoveryThread(Thread.currentThread());\n+ statusRef.close();\n+ }\n }\n }\n+\n }", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java", "status": "modified" }, { "diff": "@@ -23,7 +23,9 @@\n import com.carrotsearch.hppc.procedures.IntProcedure;\n import com.google.common.base.Predicate;\n import com.google.common.util.concurrent.ListenableFuture;\n+import org.apache.lucene.index.IndexFileNames;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;\n@@ -33,45 +35,70 @@\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.client.Client;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import 
org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n+import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.discovery.DiscoveryService;\n+import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.indices.IndicesLifecycle;\n+import org.elasticsearch.indices.recovery.RecoveryFileChunkRequest;\n import org.elasticsearch.indices.recovery.RecoverySettings;\n+import org.elasticsearch.indices.recovery.RecoveryTarget;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.SearchHits;\n import org.elasticsearch.test.BackgroundIndexer;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n+import org.elasticsearch.test.transport.MockTransportService;\n+import org.elasticsearch.transport.*;\n import org.junit.Test;\n \n+import java.io.File;\n+import java.io.IOException;\n+import java.nio.file.FileVisitResult;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.nio.file.SimpleFileVisitor;\n+import java.nio.file.attribute.BasicFileAttributes;\n import java.util.ArrayList;\n import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.Semaphore;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.is;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.*;\n \n /**\n */\n @ClusterScope(scope = Scope.TEST, numDataNodes = 0)\n+@TestLogging(\"indices.recovery:TRACE\")\n public class RelocationTests extends ElasticsearchIntegrationTest {\n private final TimeValue ACCEPTABLE_RELOCATION_TIME = new TimeValue(5, TimeUnit.MINUTES);\n \n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return ImmutableSettings.builder()\n+ .put(TransportModule.TRANSPORT_SERVICE_TYPE_KEY, MockTransportService.class.getName()).build();\n+ }\n+\n \n @Test\n public void testSimpleRelocationNoIndexing() {\n@@ -417,4 +444,114 @@ public boolean apply(Object input) {\n assertTrue(stateResponse.getState().readOnlyRoutingNodes().node(blueNodeId).isEmpty());\n }\n \n+ @Test\n+ @Slow\n+ @TestLogging(\"indices.recovery:TRACE\")\n+ public void testCancellationCleansTempFiles() throws Exception {\n+ final String indexName = \"test\";\n+\n+ final String p_node = internalCluster().startNode();\n+\n+ 
client().admin().indices().prepareCreate(indexName)\n+ .setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1, IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)).get();\n+\n+ internalCluster().startNodesAsync(2).get();\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ int numDocs = scaledRandomIntBetween(25, 250);\n+ for (int i = 0; i < numDocs; i++) {\n+ requests.add(client().prepareIndex(indexName, \"type\").setCreate(true).setSource(\"{}\"));\n+ }\n+ indexRandom(true, requests);\n+ assertFalse(client().admin().cluster().prepareHealth().setWaitForNodes(\"3\").setWaitForGreenStatus().get().isTimedOut());\n+ flush();\n+\n+ int allowedFailures = randomIntBetween(3, 10);\n+ logger.info(\"--> blocking recoveries from primary (allowed failures: [{}])\", allowedFailures);\n+ CountDownLatch corruptionCount = new CountDownLatch(allowedFailures);\n+ ClusterService clusterService = internalCluster().getInstance(ClusterService.class, p_node);\n+ MockTransportService mockTransportService = (MockTransportService) internalCluster().getInstance(TransportService.class, p_node);\n+ for (DiscoveryNode node : clusterService.state().nodes()) {\n+ if (!node.equals(clusterService.localNode())) {\n+ mockTransportService.addDelegate(node, new RecoveryCorruption(mockTransportService.original(), corruptionCount));\n+ }\n+ }\n+\n+ client().admin().indices().prepareUpdateSettings(indexName).setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)).get();\n+\n+ corruptionCount.await();\n+\n+ logger.info(\"--> stopping replica assignment\");\n+ assertAcked(client().admin().cluster().prepareUpdateSettings()\n+ .setTransientSettings(ImmutableSettings.builder().put(EnableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ENABLE, \"none\")));\n+\n+ logger.info(\"--> wait for all replica shards to be removed, on all nodes\");\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ for (String node : internalCluster().getNodeNames()) {\n+ if (node.equals(p_node)) {\n+ continue;\n+ }\n+ ClusterState state = client(node).admin().cluster().prepareState().setLocal(true).get().getState();\n+ assertThat(node + \" indicates assigned replicas\",\n+ state.getRoutingTable().index(indexName).shardsWithState(ShardRoutingState.UNASSIGNED).size(), equalTo(1));\n+ }\n+ }\n+ });\n+\n+ logger.info(\"--> verifying no temporary recoveries are left\");\n+ for (String node : internalCluster().getNodeNames()) {\n+ NodeEnvironment nodeEnvironment = internalCluster().getInstance(NodeEnvironment.class, node);\n+ for (final File shardLoc : nodeEnvironment.shardLocations(new ShardId(indexName, 0))) {\n+ assertBusy(new Runnable() {\n+ @Override\n+ public void run() {\n+ try {\n+ Files.walkFileTree(shardLoc.toPath(), new SimpleFileVisitor<Path>() {\n+ @Override\n+ public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {\n+ assertThat(\"found a temporary recovery file: \" + file, file.getFileName().toString(), not(startsWith(\"recovery.\")));\n+ return FileVisitResult.CONTINUE;\n+ }\n+ });\n+ } catch (IOException e) {\n+ throw new ElasticsearchException(\"failed to walk tree\", e);\n+ }\n+ }\n+ });\n+ }\n+ }\n+ }\n+\n+ class RecoveryCorruption extends MockTransportService.DelegateTransport {\n+\n+ private final CountDownLatch corruptionCount;\n+\n+ public RecoveryCorruption(Transport transport, CountDownLatch corruptionCount) {\n+ super(transport);\n+ this.corruptionCount = corruptionCount;\n+ }\n+\n+ @Override\n+ public void 
sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {\n+// if (action.equals(RecoveryTarget.Actions.PREPARE_TRANSLOG)) {\n+// logger.debug(\"dropped [{}] to {}\", action, node);\n+ //} else\n+ if (action.equals(RecoveryTarget.Actions.FILE_CHUNK)) {\n+ RecoveryFileChunkRequest chunkRequest = (RecoveryFileChunkRequest) request;\n+ if (chunkRequest.name().startsWith(IndexFileNames.SEGMENTS)) {\n+ // corrupting the segments_N files in order to make sure future recovery re-send files\n+ logger.debug(\"corrupting [{}] to {}. file name: [{}]\", action, node, chunkRequest.name());\n+ byte[] array = chunkRequest.content().array();\n+ array[0] = (byte) ~array[0]; // flip one byte in the content\n+ corruptionCount.countDown();\n+ }\n+ transport.sendRequest(node, requestId, action, request, options);\n+ } else {\n+ transport.sendRequest(node, requestId, action, request, options);\n+ }\n+ }\n+ }\n+\n }", "filename": "src/test/java/org/elasticsearch/recovery/RelocationTests.java", "status": "modified" } ] }
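The handlers in the refactor above all follow the same pattern: look up the ongoing recovery by id through `onGoingRecoveries.getStatusSafe(...)` and hold it inside a try-with-resources block instead of the old `validateRecoveryStatus` checks. Below is a stand-alone sketch of just that pattern, an id-keyed registry handing out `AutoCloseable`, ref-counted references; the class names and the exception type are simplified assumptions, not the Elasticsearch implementation.

```java
// Minimal sketch of the "acquire a ref-counted status via try-with-resources" pattern.
// All names here (OngoingRecoveries, RecoveryStatus, StatusRef) are illustrative stand-ins.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OngoingRecoveries {

    public static class RecoveryStatus {
        private int refCount = 1; // the registry itself holds one reference
        synchronized void incRef() { refCount++; }
        synchronized void decRef() { if (--refCount == 0) { /* release files, open outputs, ... */ } }
    }

    /** AutoCloseable handle so try-with-resources always releases the reference. */
    public static class StatusRef implements AutoCloseable {
        private final RecoveryStatus status;

        StatusRef(RecoveryStatus status) {
            this.status = status;
            status.incRef();
        }

        public RecoveryStatus status() {
            return status;
        }

        @Override
        public void close() {
            status.decRef();
        }
    }

    private final Map<Long, RecoveryStatus> ongoing = new ConcurrentHashMap<>();

    /** Returns a live reference or fails fast if the recovery is gone (e.g. shard closed). */
    public StatusRef getStatusSafe(long recoveryId) {
        RecoveryStatus status = ongoing.get(recoveryId);
        if (status == null) {
            throw new IllegalStateException("no recovery with id [" + recoveryId + "]");
        }
        return new StatusRef(status);
    }
}
```

A handler written as `try (StatusRef ref = recoveries.getStatusSafe(id)) { ... }` releases its reference even when it throws, a property the old code approximated with explicit `store.incRef()/decRef()` calls and cancellation checks.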
{ "body": "As reported on the user's list:\n\n```\nhttps://groups.google.com/d/msg/elasticsearch/IQWvod8hq_Q/H6358j_24B0J\n```\n\nIt looks like there is a concurrency bug when you dynamically update refresh_interval down to a value <= 0. We cancel the scheduled future when this happens, but if the future was already executing (which we don't try to cancel because we pass false to the cancel call), EngineRefresher.run will then forever continue rescheduling itself for the immediate future.\n", "comments": [], "number": 8085, "title": "Core: changing refresh_interval to non-positive (0, -1, etc.) value might cause 100% CPU spin" }
{ "body": "If a EngingRefresher.run was already running when the refresh_interval is\ndynamically updated down to a non-positive value (0, -1, etc.), then\nit's was possible for the refresh thread to go into while (true)\nrefresh() loop.\n\nCloses #8085\n", "number": 8087, "review_comments": [], "title": "Only schedule another refresh if `refresh_interval` is positive" }
{ "commits": [ { "message": "Core: only schedule another refresh if refresh_interval is positive\n\nIf a refresh was already running when the refresh_interval is\ndynamically updated down to a non-positive value (0, -1, etc.), then\nit's was possible for the refresh thread to go into while (true)\nrefresh() loop.\n\nCloses #8085" }, { "message": "add comment about the unbearable importance of 'false'" } ], "files": [ { "diff": "@@ -929,6 +929,9 @@ public void onRefreshSettings(Settings settings) {\n if (!refreshInterval.equals(InternalIndexShard.this.refreshInterval)) {\n logger.info(\"updating refresh_interval from [{}] to [{}]\", InternalIndexShard.this.refreshInterval, refreshInterval);\n if (refreshScheduledFuture != null) {\n+ // NOTE: we pass false here so we do NOT attempt Thread.interrupt if EngineRefresher.run is currently running. This is\n+ // very important, because doing so can cause files to suddenly be closed if they were doing IO when the interrupt\n+ // hit. See https://issues.apache.org/jira/browse/LUCENE-2239\n refreshScheduledFuture.cancel(false);\n refreshScheduledFuture = null;\n }\n@@ -946,11 +949,7 @@ class EngineRefresher implements Runnable {\n public void run() {\n // we check before if a refresh is needed, if not, we reschedule, otherwise, we fork, refresh, and then reschedule\n if (!engine().refreshNeeded()) {\n- synchronized (mutex) {\n- if (state != IndexShardState.CLOSED) {\n- refreshScheduledFuture = threadPool.schedule(refreshInterval, ThreadPool.Names.SAME, this);\n- }\n- }\n+ reschedule();\n return;\n }\n threadPool.executor(ThreadPool.Names.REFRESH).execute(new Runnable() {\n@@ -979,14 +978,20 @@ public void run() {\n logger.warn(\"Failed to perform scheduled engine refresh\", e);\n }\n }\n- synchronized (mutex) {\n- if (state != IndexShardState.CLOSED) {\n- refreshScheduledFuture = threadPool.schedule(refreshInterval, ThreadPool.Names.SAME, EngineRefresher.this);\n- }\n- }\n+\n+ reschedule();\n }\n });\n }\n+\n+ /** Schedules another (future) refresh, if refresh_interval is still enabled. */\n+ private void reschedule() {\n+ synchronized (mutex) {\n+ if (state != IndexShardState.CLOSED && refreshInterval.millis() > 0) {\n+ refreshScheduledFuture = threadPool.schedule(refreshInterval, ThreadPool.Names.SAME, this);\n+ }\n+ }\n+ }\n }\n \n class EngineMerger implements Runnable {", "filename": "src/main/java/org/elasticsearch/index/shard/service/InternalIndexShard.java", "status": "modified" } ] }
{ "body": "For instance, PathTrie builds two tries: \"/a/{x}/b\", \"/{y}/c/d\". While retrieving \"/a/c/d\", it breaks for the params are {x=c, y=a} instead of {y=a}. We should push params before retrieving next token and pop them if we have to use wildcard to retrieve again.\n", "comments": [ { "body": "Hi @wy96f I would like to understand more about the actual usecase here. I would be curious to know what made you find this problem for instance, is this something that happens with some of the standard REST endpoints, or was this triggered by a plugin that registers custom ones? \n", "created_at": "2014-12-08T12:22:28Z" }, { "body": "Hi @javanna Found this while reading the code:)\n", "created_at": "2014-12-09T13:27:42Z" }, { "body": "Is this causing any actual bugs in Elasticsearch today? That is, are there any situations today where this issue causes Elasticsearch to do something unexpected or wrong? If there isn't a bug today, are there any situations where this issue _could_ lead to a bug in Elasticsearch?\n", "created_at": "2015-12-19T16:47:31Z" }, { "body": "No further feedback. Closing\n", "created_at": "2016-03-08T18:58:40Z" } ], "number": 8071, "title": "PathTrie wrongly adds params" }
{ "body": "fix #8071\n", "number": 8072, "review_comments": [ { "body": "Instead of creating the formerParams and replacing them afterwards could we not create a `nextParams` map which is a copy of the params map and pass it into the node.retrieve method? This would save us having to put things back after the method returns\n", "created_at": "2014-10-17T13:01:06Z" }, { "body": "I think if you do `nextParams.putAll(params)` here you don't need to clear the `nextParams` map or do the `params.putAll(nextParams)` call below as the `nextParams` map will be used if `node.retrieve()` returns something and if it returns `null`, `params` will be untouched for the next attempt at retrieving the node.\n", "created_at": "2014-10-23T09:00:25Z" }, { "body": "```\n Map<String, String> nextParams = null;\n if (params != null) {\n nextParams = newHashMap();\n nextParams.putAll(params);\n }\n T res = node.retrieve(path, index + 1, nextParams);\n if (res == null && !usedWildcard) {\n node = children.get(wildcard);\n /*\n if (nextParams != null) {\n nextParams.clear();\n }\n */\n if (node != null) {\n put(params, node, token);\n res = node.retrieve(path, index + 1, nextParams);\n }\n }\n\n /*\n if (res != null && nextParams != null) {\n params.putAll(nextParams);\n }\n */\n\n return res;\n```\n\nyou mean this? If node.retrive() returns null, nextParams may contain a namedWildcard(the next node may be namedWildcard) and we need to clear it. How can we save right params if we don't put nextParams into params?\n", "created_at": "2014-10-23T12:30:49Z" }, { "body": "Yes sorry you are right, I didn't see that the params variable was a parameter passed in and therefore need to be updated and was just thinking about the params map in the node that would be returned.\n", "created_at": "2014-10-24T08:18:14Z" }, { "body": "In which case your original solution would have worked too. Sorry\n", "created_at": "2014-10-24T08:20:25Z" } ], "title": "add params properly when using wildcard" }
{ "commits": [ { "message": "add params properly when using wildcard" }, { "message": "put nextParams into the method" } ], "files": [ { "diff": "@@ -24,6 +24,7 @@\n \n import java.util.Map;\n \n+import static com.google.common.collect.Maps.newHashMap;\n import static org.elasticsearch.common.collect.MapBuilder.newMapBuilder;\n \n /**\n@@ -181,15 +182,26 @@ public T retrieve(String[] path, int index, Map<String, String> params) {\n return node.value;\n }\n \n- T res = node.retrieve(path, index + 1, params);\n+ Map<String, String> nextParams = null;\n+ if (params != null) {\n+ nextParams = newHashMap();\n+ }\n+ T res = node.retrieve(path, index + 1, nextParams);\n if (res == null && !usedWildcard) {\n node = children.get(wildcard);\n+ if (nextParams != null) {\n+ nextParams.clear();\n+ }\n if (node != null) {\n put(params, node, token);\n- res = node.retrieve(path, index + 1, params);\n+ res = node.retrieve(path, index + 1, nextParams);\n }\n }\n \n+ if (res != null && nextParams != null) {\n+ params.putAll(nextParams);\n+ }\n+\n return res;\n }\n ", "filename": "src/main/java/org/elasticsearch/common/path/PathTrie.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n \n import static com.google.common.collect.Maps.newHashMap;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n import static org.hamcrest.Matchers.nullValue;\n \n /**\n@@ -161,4 +162,16 @@ public void testNamedWildcardAndLookupWithWildcard() {\n assertThat(trie.retrieve(\"a/*/_endpoint\", params), equalTo(\"test5\"));\n assertThat(params.get(\"test\"), equalTo(\"*\"));\n }\n+\n+ @Test\n+ public void testNamedWildcardWithParams() {\n+ PathTrie<String> trie = new PathTrie<>();\n+ trie.insert(\"/a/{x}/b\", \"test1\");\n+ trie.insert(\"/{y}/c/d\", \"test2\");\n+\n+ Map<String, String> params = newHashMap();\n+ assertThat(trie.retrieve(\"/a/c/d\", params), equalTo(\"test2\"));\n+ assertThat(params.size(), is(1));\n+ assertThat(params.get(\"y\"), equalTo(\"a\"));\n+ }\n }", "filename": "src/test/java/org/elasticsearch/common/path/PathTrieTests.java", "status": "modified" } ] }
{ "body": "To reproduce\n- start elasticsearch (tested 1.3.4, 1.4.0Beta1) \n- run this script: https://gist.github.com/brwe/5ba123604c4cc0af9c3a\n\nResults in:\n\n```\n...\n[2014-10-14 09:26:28,859][DEBUG][index.engine.internal ] [Chan Luichow] [testidx][3] updating index_buffer_size from [64mb] to [60.7mb]\n[2014-10-14 09:26:28,860][DEBUG][index.engine.internal ] [Chan Luichow] [testidx][4] updating index_buffer_size from [64mb] to [60.7mb]\njava.lang.OutOfMemoryError: PermGen space\nDumping heap to java_pid3947.hprof ...\nHeap dump file created [82885613 bytes in 1.006 secs]\n```\n\nThis works fine with mvel.\n", "comments": [ { "body": "@brwe see #7658 and #8062\n", "created_at": "2014-10-14T07:33:16Z" }, { "body": "@dakrone thanks, I did not see\n", "created_at": "2014-10-14T07:35:14Z" } ], "number": 8073, "title": "OOM when updating docs with groovy" }
{ "body": "Since we don't use the cache, it's okay to clear it entirely if needed,\nElasticsearch maintains its own cache for compiled scripts.\n\nFixes #7658\nFixes #8073\n", "number": 8062, "review_comments": [ { "body": "maybe catch exceptions here and put them into an arraylist, you can then just use ExceptionHelper to rerthrow and surpress....it has a utility for this\n", "created_at": "2014-10-14T08:09:18Z" }, { "body": "for safety can you do `\"groovy\".equals(script.lang())` instead \n", "created_at": "2014-10-14T08:10:14Z" } ], "title": "Clear the GroovyClassLoader cache before compiling" }
{ "commits": [ { "message": "Clear the GroovyClassLoader cache before compiling\n\nSince we don't use the cache, it's okay to clear it entirely if needed,\nElasticsearch maintains its own cache for compiled scripts.\n\nAdds loader.clearCache() into a listener, the listener is called when a\nscript is removed from the Guava cache.\n\nThis also lowers the amount of cached scripts to 100, since 500 is\naround the limit some users have run into before hitting an out of\nmemory error in permgem.\n\nFixes #7658" } ], "files": [ { "diff": "@@ -93,4 +93,9 @@ public Object unwrap(Object value) {\n @Override\n public void close() {\n }\n+\n+ @Override\n+ public void scriptRemoved(CompiledScript script) {\n+ // Nothing to do here\n+ }\n }\n\\ No newline at end of file", "filename": "src/main/java/org/elasticsearch/script/NativeScriptEngineService.java", "status": "modified" }, { "diff": "@@ -46,4 +46,11 @@ public interface ScriptEngineService {\n Object unwrap(Object value);\n \n void close();\n+\n+ /**\n+ * Handler method called when a script is removed from the Guava cache.\n+ *\n+ * The passed script may be null if it has already been garbage collected.\n+ * */\n+ void scriptRemoved(@Nullable CompiledScript script);\n }", "filename": "src/main/java/org/elasticsearch/script/ScriptEngineService.java", "status": "modified" }, { "diff": "@@ -22,9 +22,12 @@\n import com.google.common.base.Charsets;\n import com.google.common.cache.Cache;\n import com.google.common.cache.CacheBuilder;\n+import com.google.common.cache.RemovalListener;\n+import com.google.common.cache.RemovalNotification;\n import com.google.common.collect.ImmutableMap;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.delete.DeleteResponse;\n@@ -64,12 +67,15 @@\n import java.io.FileInputStream;\n import java.io.IOException;\n import java.io.InputStreamReader;\n+import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.ConcurrentMap;\n import java.util.concurrent.TimeUnit;\n \n+import static com.google.common.collect.Lists.newArrayList;\n+\n /**\n *\n */\n@@ -206,7 +212,7 @@ public ScriptService(Settings settings, Environment env, Set<ScriptEngineService\n ResourceWatcherService resourceWatcherService) {\n super(settings);\n \n- int cacheMaxSize = settings.getAsInt(SCRIPT_CACHE_SIZE_SETTING, 500);\n+ int cacheMaxSize = settings.getAsInt(SCRIPT_CACHE_SIZE_SETTING, 100);\n TimeValue cacheExpire = settings.getAsTime(SCRIPT_CACHE_EXPIRE_SETTING, null);\n logger.debug(\"using script cache with max_size [{}], expire [{}]\", cacheMaxSize, cacheExpire);\n \n@@ -220,6 +226,7 @@ public ScriptService(Settings settings, Environment env, Set<ScriptEngineService\n if (cacheExpire != null) {\n cacheBuilder.expireAfterAccess(cacheExpire.nanos(), TimeUnit.NANOSECONDS);\n }\n+ cacheBuilder.removalListener(new ScriptCacheRemovalListener());\n this.cache = cacheBuilder.build();\n \n ImmutableMap.Builder<String, ScriptEngineService> builder = ImmutableMap.builder();\n@@ -483,6 +490,30 @@ private boolean dynamicScriptEnabled(String lang) {\n }\n }\n \n+ /**\n+ * A small listener for the script cache that calls each\n+ * {@code ScriptEngineService}'s {@code scriptRemoved} method when the\n+ * script has been removed from the cache\n+ 
*/\n+ private class ScriptCacheRemovalListener implements RemovalListener<CacheKey, CompiledScript> {\n+\n+ @Override\n+ public void onRemoval(RemovalNotification<CacheKey, CompiledScript> notification) {\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"notifying script services of script removal due to: [{}]\", notification.getCause());\n+ }\n+ List<Exception> errors = newArrayList();\n+ for (ScriptEngineService service : scriptEngines.values()) {\n+ try {\n+ service.scriptRemoved(notification.getValue());\n+ } catch (Exception e) {\n+ errors.add(e);\n+ }\n+ }\n+ ExceptionsHelper.maybeThrowRuntimeAndSuppress(errors);\n+ }\n+ }\n+\n private class ScriptChangesListener extends FileChangesListener {\n \n private Tuple<String, String> scriptNameExt(File file) {", "filename": "src/main/java/org/elasticsearch/script/ScriptService.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n+import org.elasticsearch.script.CompiledScript;\n import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.ScriptEngineService;\n import org.elasticsearch.script.SearchScript;\n@@ -154,4 +155,9 @@ public Object unwrap(Object value) {\n \n @Override\n public void close() {}\n+\n+ @Override\n+ public void scriptRemoved(CompiledScript script) {\n+ // Nothing to do\n+ }\n }", "filename": "src/main/java/org/elasticsearch/script/expression/ExpressionScriptEngineService.java", "status": "modified" }, { "diff": "@@ -43,7 +43,6 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.script.*;\n import org.elasticsearch.search.lookup.SearchLookup;\n-import org.elasticsearch.search.suggest.term.TermSuggestion;\n \n import java.io.IOException;\n import java.math.BigDecimal;\n@@ -89,6 +88,16 @@ public void close() {\n }\n }\n \n+ @Override\n+ public void scriptRemoved(@Nullable CompiledScript script) {\n+ // script could be null, meaning the script has already been garbage collected\n+ if (script == null || \"groovy\".equals(script.lang())) {\n+ // Clear the cache, this removes old script versions from the\n+ // cache to prevent running out of PermGen space\n+ loader.clearCache();\n+ }\n+ }\n+\n @Override\n public String[] types() {\n return new String[]{\"groovy\"};\n@@ -313,4 +322,4 @@ public Expression transform(Expression expr) {\n }\n }\n \n-}\n\\ No newline at end of file\n+}", "filename": "src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.io.UTF8StreamWriter;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.script.CompiledScript;\n import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.ScriptEngineService;\n import org.elasticsearch.script.SearchScript;\n@@ -148,6 +149,11 @@ public void close() {\n // Nothing to do here\n }\n \n+ @Override\n+ public void scriptRemoved(CompiledScript script) {\n+ // Nothing to do here\n+ }\n+\n /**\n * Used at query execution time by script service in order to execute a query template.\n * */", "filename": "src/main/java/org/elasticsearch/script/mustache/MustacheScriptEngineService.java", "status": "modified" }, { "diff": "@@ -132,6 +132,11 @@ public Object unwrap(Object value) {\n public void close() {\n \n }\n+\n+ 
@Override\n+ public void scriptRemoved(CompiledScript script) {\n+ // Nothing to do here\n+ }\n }\n \n }", "filename": "src/test/java/org/elasticsearch/script/ScriptServiceTests.java", "status": "modified" } ] }
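For context on the change above, here is a simplified, self-contained sketch of the wiring the diff adds: a bounded Guava cache whose removal listener gives the owning engine a chance to clean up (for Groovy, calling `GroovyClassLoader.clearCache()`). The `ScriptEngine` interface below is an illustrative stand-in for the real `ScriptEngineService`.

```java
// Simplified sketch of the cache wiring added in the diff: evictions and invalidations trigger
// an engine-side cleanup hook. Only the Guava cache API is used directly.
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class CompiledScriptCache {

    public interface ScriptEngine {
        // May receive null if the evicted value has already been garbage collected.
        void scriptRemoved(Object compiledScript);
    }

    private final Cache<String, Object> cache;

    public CompiledScriptCache(final ScriptEngine engine, int maxSize) {
        this.cache = CacheBuilder.newBuilder()
                .maximumSize(maxSize)
                .removalListener(new RemovalListener<String, Object>() {
                    @Override
                    public void onRemoval(RemovalNotification<String, Object> notification) {
                        engine.scriptRemoved(notification.getValue());
                    }
                })
                .build();
    }

    public void put(String key, Object compiledScript) {
        cache.put(key, compiledScript);
    }

    public Object get(String key) {
        return cache.getIfPresent(key);
    }
}
```

Because Elasticsearch keeps its own compiled-script cache, dropping the entire GroovyClassLoader cache on eviction is safe, and it is the accumulation of old generated classes that was filling PermGen in #7658 and #8073.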
{ "body": "To reproduce close and index and start snapshot by specifying the index name explicitly in the list of indices. \n", "comments": [], "number": 8046, "title": "Snapshot of a closed index can leave snapshot hanging in initializing state" }
{ "body": "Snapshot of a closed index can leave snapshot hanging in initializing state.\n\nFixes #8046\n", "number": 8047, "review_comments": [ { "body": "do we need trace here?\n", "created_at": "2014-10-13T10:33:23Z" }, { "body": "Debuging leftovers. I will remove.\n", "created_at": "2014-10-13T10:35:04Z" } ], "title": "Fix snapshotting of a single closed index" }
{ "commits": [ { "message": "Snapshot/Restore: fix snapshot of a single closed index\n\nSnapshot of a closed index can leave snapshot hanging in initializing state.\n\nFixes #8046" } ], "files": [ { "diff": "@@ -323,6 +323,12 @@ public ClusterState execute(ClusterState currentState) {\n @Override\n public void onFailure(String source, Throwable t) {\n logger.warn(\"[{}] failed to create snapshot\", t, snapshot.snapshotId());\n+ removeSnapshotFromClusterState(snapshot.snapshotId(), null, t);\n+ try {\n+ repositoriesService.repository(snapshot.snapshotId().getRepository()).finalizeSnapshot(snapshot.snapshotId(), ExceptionsHelper.detailedMessage(t), 0, ImmutableList.<SnapshotShardFailure>of());\n+ } catch (Throwable t2) {\n+ logger.warn(\"[{}] failed to close snapshot in repository\", snapshot.snapshotId());\n+ }\n userCreateSnapshotListener.onFailure(t);\n }\n \n@@ -345,28 +351,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n });\n } catch (Throwable t) {\n logger.warn(\"failed to create snapshot [{}]\", t, snapshot.snapshotId());\n- clusterService.submitStateUpdateTask(\"fail_snapshot [\" + snapshot.snapshotId() + \"]\", new ClusterStateUpdateTask() {\n-\n- @Override\n- public ClusterState execute(ClusterState currentState) {\n- MetaData metaData = currentState.metaData();\n- MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n- SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n- ImmutableList.Builder<SnapshotMetaData.Entry> entries = ImmutableList.builder();\n- for (SnapshotMetaData.Entry entry : snapshots.entries()) {\n- if (!entry.snapshotId().equals(snapshot.snapshotId())) {\n- entries.add(entry);\n- }\n- }\n- mdBuilder.putCustom(SnapshotMetaData.TYPE, new SnapshotMetaData(entries.build()));\n- return ClusterState.builder(currentState).metaData(mdBuilder).build();\n- }\n-\n- @Override\n- public void onFailure(String source, Throwable t) {\n- logger.warn(\"[{}] failed to delete snapshot\", t, snapshot.snapshotId());\n- }\n- });\n+ removeSnapshotFromClusterState(snapshot.snapshotId(), null, t);\n if (snapshotCreated) {\n try {\n repositoriesService.repository(snapshot.snapshotId().getRepository()).finalizeSnapshot(snapshot.snapshotId(), ExceptionsHelper.detailedMessage(t), 0, ImmutableList.<SnapshotShardFailure>of());\n@@ -1046,7 +1031,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n listener.onSnapshotFailure(snapshotId, t);\n }\n } catch (Throwable t) {\n- logger.warn(\"failed to refresh settings for [{}]\", t, listener);\n+ logger.warn(\"failed to notify listener [{}]\", t, listener);\n }\n }\n \n@@ -1127,17 +1112,21 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n logger.trace(\"adding snapshot completion listener to wait for deleted snapshot to finish\");\n addListener(new SnapshotCompletionListener() {\n @Override\n- public void onSnapshotCompletion(SnapshotId snapshotId, SnapshotInfo snapshot) {\n- logger.trace(\"deleted snapshot completed - deleting files\");\n- removeListener(this);\n- deleteSnapshotFromRepository(snapshotId, listener);\n+ public void onSnapshotCompletion(SnapshotId completedSnapshotId, SnapshotInfo snapshot) {\n+ if (completedSnapshotId.equals(snapshotId)) {\n+ logger.trace(\"deleted snapshot completed - deleting files\");\n+ removeListener(this);\n+ deleteSnapshotFromRepository(snapshotId, listener);\n+ }\n }\n \n @Override\n- public void onSnapshotFailure(SnapshotId snapshotId, Throwable t) {\n- 
logger.trace(\"deleted snapshot failed - deleting files\", t);\n- removeListener(this);\n- deleteSnapshotFromRepository(snapshotId, listener);\n+ public void onSnapshotFailure(SnapshotId failedSnapshotId, Throwable t) {\n+ if (failedSnapshotId.equals(snapshotId)) {\n+ logger.trace(\"deleted snapshot failed - deleting files\", t);\n+ removeListener(this);\n+ deleteSnapshotFromRepository(snapshotId, listener);\n+ }\n }\n });\n } else {\n@@ -1203,21 +1192,22 @@ private ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards(Snaps\n for (String index : indices) {\n IndexMetaData indexMetaData = metaData.index(index);\n IndexRoutingTable indexRoutingTable = clusterState.getRoutingTable().index(index);\n- if (indexRoutingTable == null) {\n- throw new SnapshotCreationException(snapshotId, \"Missing routing table for index [\" + index + \"]\");\n- }\n for (int i = 0; i < indexMetaData.numberOfShards(); i++) {\n ShardId shardId = new ShardId(index, i);\n- ShardRouting primary = indexRoutingTable.shard(i).primaryShard();\n- if (primary == null || !primary.assignedToNode()) {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"primary shard is not allocated\"));\n- } else if (clusterState.getNodes().smallestVersion().onOrAfter(Version.V_1_2_0) && (primary.relocating() || primary.initializing())) {\n- // The WAITING state was introduced in V1.2.0 - don't use it if there are nodes with older version in the cluster\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.WAITING));\n- } else if (!primary.started()) {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.MISSING, \"primary shard hasn't been started yet\"));\n+ if (indexRoutingTable != null) {\n+ ShardRouting primary = indexRoutingTable.shard(i).primaryShard();\n+ if (primary == null || !primary.assignedToNode()) {\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"primary shard is not allocated\"));\n+ } else if (clusterState.getNodes().smallestVersion().onOrAfter(Version.V_1_2_0) && (primary.relocating() || primary.initializing())) {\n+ // The WAITING state was introduced in V1.2.0 - don't use it if there are nodes with older version in the cluster\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.WAITING));\n+ } else if (!primary.started()) {\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.MISSING, \"primary shard hasn't been started yet\"));\n+ } else {\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId()));\n+ }\n } else {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId()));\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"missing routing table\"));\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -804,6 +804,27 @@ public void snapshotClosedIndexTest() throws Exception {\n client.admin().cluster().prepareDeleteSnapshot(\"test-repo\", \"test-snap\").get();\n }\n \n+ @Test\n+ public void snapshotSingleClosedIndexTest() throws Exception {\n+ Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", 
newTempDir(LifecycleScope.SUITE))));\n+\n+ createIndex(\"test-idx\");\n+ ensureGreen();\n+ logger.info(\"--> closing index test-idx\");\n+ assertAcked(client.admin().indices().prepareClose(\"test-idx\"));\n+\n+ logger.info(\"--> snapshot\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\")\n+ .setWaitForCompletion(true).setIndices(\"test-idx\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().indices().size(), equalTo(1));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().state(), equalTo(SnapshotState.FAILED));\n+ }\n+\n @Test\n public void renameOnRestoreTest() throws Exception {\n Client client = client();", "filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java", "status": "modified" } ] }
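One detail of the fix above is easy to miss: the completion listener registered while deleting an in-progress snapshot now checks that the event it receives belongs to the snapshot it is waiting for, instead of reacting to any completion or failure. The stripped-down sketch below shows only that guard; the class and method names are illustrative, not the Elasticsearch listener interface.

```java
// Illustrative sketch: a listener that is notified about every snapshot completion/failure
// but only acts on events carrying the snapshot id it was registered for.
public class WaitForSnapshotListener {

    public interface Callback {
        void onDone(String snapshotId);
    }

    private final String expectedSnapshotId;
    private final Callback callback;

    public WaitForSnapshotListener(String expectedSnapshotId, Callback callback) {
        this.expectedSnapshotId = expectedSnapshotId;
        this.callback = callback;
    }

    public void onSnapshotCompletion(String completedSnapshotId) {
        if (expectedSnapshotId.equals(completedSnapshotId)) {
            callback.onDone(completedSnapshotId); // ignore completions of unrelated snapshots
        }
    }

    public void onSnapshotFailure(String failedSnapshotId, Throwable t) {
        if (expectedSnapshotId.equals(failedSnapshotId)) {
            callback.onDone(failedSnapshotId); // a failed snapshot can also be cleaned up now
        }
    }
}
```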
{ "body": "The script metric aggregation does not separate script scopes based on the bucketOrd so when using it as a sub aggregation the results are given for all parent buckets combined rather than each parent bucket separately\n\nSee the following comment for more details:\nhttps://github.com/elasticsearch/elasticsearch/pull/7075#issuecomment-58403364\n", "comments": [], "number": 8036, "title": "Aggregations: scripted metric agg does not separate parent buckets" }
{ "body": "The scripted metric aggregation is now a PER_BUCKET aggregation so that parent buckets are evaluated independently. Also the params and reduceParams are copied for each instance of the aggregator (each parent bucket) so modifications to the values are kept only within the scope of its parent bucket\n\nCloses #8036\n", "number": 8037, "review_comments": [ { "body": "Can you make this one call this instead of super now that there is a more generic constructor?\n", "created_at": "2014-10-09T15:07:55Z" }, { "body": "Why isn't a Map enough?\n", "created_at": "2014-10-09T15:11:27Z" }, { "body": "Let's maybe add `Boolean` to that list?\n", "created_at": "2014-10-09T15:13:00Z" }, { "body": "FYI you can avoid compiler warnings in such cases by doing `Map<?, ?> originalMap = (Map<?, ?>) original;` instead. (but we still need the `@SuppressWarnings({ \"unchecked\" })` because of the cast to `T` unfortunately)\n", "created_at": "2014-10-09T15:19:17Z" }, { "body": "Maybe we should have a more-user friendly message saying that we found an unsupported parameter type in the script params (instead of something that we cannot clone)?\n", "created_at": "2014-10-09T15:25:46Z" }, { "body": "Just seen that I didn't clean these up so they throw nice exceptions. I'll do that as part of the review commits\n", "created_at": "2014-10-09T15:27:44Z" }, { "body": "Is it just to make sure that some shards are getting several buckets? If yes, I'm not sure we need it since every document generates its own bucket and we have 10 documents at least?\n", "created_at": "2014-10-09T15:28:17Z" }, { "body": "Great!\n", "created_at": "2014-10-09T15:29:35Z" }, { "body": "Thanks. I've changed it to use `Map<?, ?>` even though we still need the `@SuppressWarnings`\n", "created_at": "2014-10-09T17:53:52Z" }, { "body": "Put this in while I was trying to debug and forgot to take it back out. I'll remove it now\n", "created_at": "2014-10-09T18:20:16Z" }, { "body": "s/primatives/primitives/\n", "created_at": "2014-10-10T07:47:52Z" } ], "title": "Fixes scripted metrics aggregation when used as a sub aggregation" }
{ "commits": [ { "message": "Aggregations: Fixes scripted metrics aggregation when used as a sub aggregation\n\nThe scripted metric aggregation is now a PER_BUCKET aggregation so that parent buckets are evaluated independently. Also the params and reduceParams are copied for each instance of the aggregator (each parent bucket) so modifications to the values are kept only within the scope of its parent bucket\n\nCloses #8036" } ], "files": [ { "diff": "@@ -26,6 +26,10 @@\n public abstract class MetricsAggregator extends Aggregator {\n \n protected MetricsAggregator(String name, long estimatedBucketsCount, AggregationContext context, Aggregator parent) {\n- super(name, BucketAggregationMode.MULTI_BUCKETS, AggregatorFactories.EMPTY, estimatedBucketsCount, context, parent);\n+ this(name, estimatedBucketsCount, BucketAggregationMode.MULTI_BUCKETS, context, parent);\n+ }\n+ \n+ protected MetricsAggregator(String name, long estimatedBucketsCount, BucketAggregationMode bucketAggregationMode, AggregationContext context, Aggregator parent) {\n+ super(name, bucketAggregationMode, AggregatorFactories.EMPTY, estimatedBucketsCount, context, parent);\n }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/MetricsAggregator.java", "status": "modified" }, { "diff": "@@ -24,15 +24,17 @@\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.script.ScriptService.ScriptType;\n import org.elasticsearch.script.SearchScript;\n+import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.AggregatorFactory;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.metrics.MetricsAggregator;\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n+import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n-import java.util.HashMap;\n-import java.util.Map;\n+import java.util.*;\n+import java.util.Map.Entry;\n \n public class ScriptedMetricAggregator extends MetricsAggregator {\n \n@@ -51,15 +53,15 @@ public class ScriptedMetricAggregator extends MetricsAggregator {\n protected ScriptedMetricAggregator(String name, String scriptLang, ScriptType initScriptType, String initScript,\n ScriptType mapScriptType, String mapScript, ScriptType combineScriptType, String combineScript, ScriptType reduceScriptType,\n String reduceScript, Map<String, Object> params, Map<String, Object> reduceParams, AggregationContext context, Aggregator parent) {\n- super(name, 1, context, parent);\n+ super(name, 1, BucketAggregationMode.PER_BUCKET, context, parent);\n this.scriptService = context.searchContext().scriptService();\n this.scriptLang = scriptLang;\n this.reduceScriptType = reduceScriptType;\n if (params == null) {\n this.params = new HashMap<>();\n this.params.put(\"_agg\", new HashMap<>());\n } else {\n- this.params = params;\n+ this.params = new HashMap<>(params);\n }\n if (reduceParams == null) {\n this.reduceParams = new HashMap<>();\n@@ -142,9 +144,45 @@ public Factory(String name, String scriptLang, ScriptType initScriptType, String\n \n @Override\n public Aggregator create(AggregationContext context, Aggregator parent, long expectedBucketsCount) {\n+ Map<String, Object> params = null;\n+ if (this.params != null) {\n+ params = deepCopyParams(this.params, context.searchContext());\n+ }\n+ Map<String, Object> reduceParams = null;\n+ if (this.reduceParams != null) {\n+ 
reduceParams = deepCopyParams(this.reduceParams, context.searchContext());\n+ }\n return new ScriptedMetricAggregator(name, scriptLang, initScriptType, initScript, mapScriptType, mapScript, combineScriptType,\n combineScript, reduceScriptType, reduceScript, params, reduceParams, context, parent);\n }\n+ \n+ @SuppressWarnings({ \"unchecked\" })\n+ private static <T> T deepCopyParams(T original, SearchContext context) {\n+ T clone;\n+ if (original instanceof Map) {\n+ Map<?, ?> originalMap = (Map<?, ?>) original;\n+ Map<Object, Object> clonedMap = new HashMap<>();\n+ for (Entry<?, ?> e : originalMap.entrySet()) {\n+ clonedMap.put(deepCopyParams(e.getKey(), context), deepCopyParams(e.getValue(), context));\n+ }\n+ clone = (T) clonedMap;\n+ } else if (original instanceof List) {\n+ List<?> originalList = (List<?>) original;\n+ List<Object> clonedList = new ArrayList<Object>();\n+ for (Object o : originalList) {\n+ clonedList.add(deepCopyParams(o, context));\n+ }\n+ clone = (T) clonedList;\n+ } else if (original instanceof String || original instanceof Integer || original instanceof Long || original instanceof Short\n+ || original instanceof Byte || original instanceof Float || original instanceof Double || original instanceof Character\n+ || original instanceof Boolean) {\n+ clone = original;\n+ } else {\n+ throw new SearchParseException(context, \"Can only clone primitives, String, ArrayList, and HashMap. Found: \"\n+ + original.getClass().getCanonicalName());\n+ }\n+ return clone;\n+ }\n \n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/scripted/ScriptedMetricAggregator.java", "status": "modified" }, { "diff": "@@ -25,7 +25,9 @@\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.search.aggregations.Aggregation;\n+import org.elasticsearch.search.aggregations.Aggregations;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Bucket;\n import org.elasticsearch.search.aggregations.metrics.scripted.ScriptedMetric;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n@@ -59,7 +61,8 @@ public void setupSuiteScopeCluster() throws Exception {\n numDocs = randomIntBetween(10, 100);\n for (int i = 0; i < numDocs; i++) {\n builders.add(client().prepareIndex(\"idx\", \"type\", \"\" + i).setSource(\n- jsonBuilder().startObject().field(\"value\", randomAsciiOfLengthBetween(5, 15)).endObject()));\n+ jsonBuilder().startObject().field(\"value\", randomAsciiOfLengthBetween(5, 15))\n+ .field(\"l_value\", i).endObject()));\n }\n indexRandom(true, builders);\n \n@@ -561,6 +564,62 @@ public void testInitMapCombineReduce_withParams_File() {\n assertThat(((Number) object).longValue(), equalTo(numDocs * 3));\n }\n \n+ @Test\n+ public void testInitMapCombineReduce_withParams_asSubAgg() {\n+ Map<String, Object> varsMap = new HashMap<>();\n+ varsMap.put(\"multiplier\", 1);\n+ Map<String, Object> params = new HashMap<>();\n+ params.put(\"_agg\", new ArrayList<>());\n+ params.put(\"vars\", varsMap);\n+\n+ SearchResponse response = client()\n+ .prepareSearch(\"idx\")\n+ .setQuery(matchAllQuery()).setSize(1000)\n+ .addAggregation(\n+ histogram(\"histo\")\n+ .field(\"l_value\")\n+ .interval(1)\n+ .subAggregation(\n+ scriptedMetric(\"scripted\")\n+ .params(params)\n+ .initScript(\"vars.multiplier = 3\")\n+ 
.mapScript(\"_agg.add(vars.multiplier)\")\n+ .combineScript(\n+ \"newaggregation = []; sum = 0;for (a in _agg) { sum += a}; newaggregation.add(sum); return newaggregation\")\n+ .reduceScript(\n+ \"newaggregation = []; sum = 0;for (aggregation in _aggs) { for (a in aggregation) { sum += a} }; newaggregation.add(sum); return newaggregation\")))\n+ .execute().actionGet();\n+ assertSearchResponse(response);\n+ assertThat(response.getHits().getTotalHits(), equalTo(numDocs));\n+ Aggregation aggregation = response.getAggregations().get(\"histo\");\n+ assertThat(aggregation, notNullValue());\n+ assertThat(aggregation, instanceOf(Histogram.class));\n+ Histogram histoAgg = (Histogram) aggregation;\n+ assertThat(histoAgg.getName(), equalTo(\"histo\"));\n+ List<? extends Bucket> buckets = histoAgg.getBuckets();\n+ assertThat(buckets, notNullValue());\n+ for (Bucket b : buckets) {\n+ assertThat(b, notNullValue());\n+ assertThat(b.getDocCount(), equalTo(1l));\n+ Aggregations subAggs = b.getAggregations();\n+ assertThat(subAggs, notNullValue());\n+ assertThat(subAggs.asList().size(), equalTo(1));\n+ Aggregation subAgg = subAggs.get(\"scripted\");\n+ assertThat(subAgg, notNullValue());\n+ assertThat(subAgg, instanceOf(ScriptedMetric.class));\n+ ScriptedMetric scriptedMetricAggregation = (ScriptedMetric) subAgg;\n+ assertThat(scriptedMetricAggregation.getName(), equalTo(\"scripted\"));\n+ assertThat(scriptedMetricAggregation.aggregation(), notNullValue());\n+ assertThat(scriptedMetricAggregation.aggregation(), instanceOf(ArrayList.class));\n+ List<?> aggregationList = (List<?>) scriptedMetricAggregation.aggregation();\n+ assertThat(aggregationList.size(), equalTo(1));\n+ Object object = aggregationList.get(0);\n+ assertThat(object, notNullValue());\n+ assertThat(object, instanceOf(Number.class));\n+ assertThat(((Number) object).longValue(), equalTo(3l));\n+ }\n+ }\n+\n @Test\n public void testEmptyAggregation() throws Exception {\n Map<String, Object> varsMap = new HashMap<>();", "filename": "src/test/java/org/elasticsearch/search/aggregations/metrics/ScriptedMetricTests.java", "status": "modified" } ] }
{ "body": "I'm trying to use the more_like_this handler in almost the exact same way it's used in the documentation here:\n\nhttp://www.elasticsearch.org/guide/reference/api/more-like-this/\n\ncurl -XGET \"http://localhost:9200/foo/document/1008534/_mlt?mlt_fields=cs,ks,tpcs&min_doc_freq=2\"\n\n{\"error\":\"ElasticSearchException[No fields found to fetch the 'likeText' from]\",\"status\":500}\n\nI'm guessing this bug stems from the fact that source is disabled, but I'm not really sure. If it is the case that source is required for MLT, you should document that fact.\n", "comments": [ { "body": "I think you either need the source or the field needs to be stored or you need to store term vectors for the field. But I agree we should document that!\n\nthanks for raising this... what is your mapping for those fields?\n", "created_at": "2013-04-17T19:55:21Z" }, { "body": "``` json\n{\n \"document\": {\n \"_source\" : {\n \"enabled\" : false\n },\n \"term_vector\": \"yes\",\n \"dynamic\": false,\n \"properties\": {\n \"_id\": {\n \"type\": \"long\", \n \"index\": \"not_analyzed\"\n },\n \"cs\": {\n \"type\": \"string\", \n \"analyzer\": \"keyword\",\n \"store\": \"no\"\n }, \n \"ks\": {\n \"type\": \"string\", \n \"analyzer\": \"keyword\", \n \"store\": \"no\"\n },\n \"tpcs\": {\n \"type\": \"string\",\n \"analyzer\": \"keyword\", \n \"store\": \"no\"\n }\n }\n }\n}\n```\n", "created_at": "2013-04-17T20:37:46Z" }, { "body": "ah I see you should put `term_vector` next to `store` for each filed you want to store term vectors. Can you try that?\n\nlike this:\n\n```\n{\n \"type\" : \"string\",\n \"store\" : \"no\",\n \"term_vector\" : \"yes\"\n}\n```\n\nsimon\n", "created_at": "2013-04-18T08:40:42Z" }, { "body": "I pushed a fix to the documentation: https://github.com/elasticsearch/elasticsearch.github.com/commit/25614ced9513e24dc3ad99b976b00e8c384ff9f2\n", "created_at": "2013-04-18T08:44:25Z" }, { "body": "Thanks -- I'll make that fix. What is the effect (if any) of enabling term_vector storage at the top-level as I have done here?\n", "created_at": "2013-04-18T16:21:57Z" }, { "body": "hmm it seems that this only works if it's stored or you enabled source. we should be able to support this if TV are stored for the fields as well... reopening\n", "created_at": "2013-04-19T20:13:00Z" }, { "body": "Hey @s1monw -- have you had an opportunity to look into this issue?\n", "created_at": "2013-05-02T21:17:08Z" }, { "body": "I am not a fan of supporting it for tern vector and no store, cause then we need to get that info(TV) from the document on the specific shard and then send it to all the shards to do the MLT based on it. Just store the source and MLT based on that. You can also, btw, always use the MLT query as part of a search request and provide the text there externally.\n", "created_at": "2013-05-02T21:20:34Z" }, { "body": "@kimchy can you explain how storing the source alleviates the problem of distributing the term vector to all the shards for the MLT computation?\n", "created_at": "2013-05-02T21:32:08Z" }, { "body": "cause with the source text to do MLT by, you don't need the term vectors.\n", "created_at": "2013-05-02T21:33:47Z" }, { "body": "I agree this seems odd... isn't the TV just a different representation of a field?\n", "created_at": "2013-05-02T21:34:10Z" }, { "body": "@kimchy @s1monw so why store the term vectors at all? 
(I was only storing them because of the following doc: http://www.elasticsearch.org/guide/reference/api/more-like-this/) If MLT doesn't need them when it has the source text, does it then recompute term vectors given the source text? \n", "created_at": "2013-05-02T21:46:00Z" }, { "body": "I agree this should also work on TV though. yet at this point it doesn't so you might want to get rid of TV if you don't need them.\n", "created_at": "2013-05-02T21:46:59Z" }, { "body": "@kimchy @s1monw I'd like to try to write a plugin similar to more-like-this that does exactly what I want. Can you suggest any plugins that access term vectors that I might use as references? Any tips / documentation are much appreciated.\n", "created_at": "2013-06-12T17:01:22Z" }, { "body": "hey, we just added TermVector support lately. this issue is on our list to make use of the feature. Can you wait for it?\n", "created_at": "2013-06-12T17:47:24Z" }, { "body": "@s1monw Unfortunately, my company has a rapidly narrowing window for determining whether elasticsearch is right for the problem we're trying to solve. Given that the current built-in functionality doesn't seem to handle our use-case, a plugin seems like our only option in the short-term.\n", "created_at": "2013-06-12T18:51:21Z" }, { "body": "Excuse me but I'm currently trying to use the MLT feature. I read http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-more-like-this.html#search-more-like-this and either my english is completely bad of I have not the remotest idea what it is supposed to mean:\n\n\"Note: In order to use the mlt feature a mlt_field needs to be either be stored, store term_vector or source needs to be enabled.\"\n\nWhat is \"stored\"? Which \"source\"? I've been searching the internet for two hours now and can't any example of how to use MLT successfully. And to be honest this issue report doesn't help me either. Could anyone shed some light on it and fix the documentation please?\n", "created_at": "2013-11-19T20:12:28Z" }, { "body": "In Elasticsearch you can either store the entire document (the json you send to ES when you index) aka. the `source` or you can mark a field as `stored : true` then we only store the value of that particular field. By default the `source` is stored (or `enabled`) but you can also `disable` it via the mapping. The term_vectors don't work yet with `MLT` hence this issue. \n\nhope that helps\n", "created_at": "2013-11-19T20:50:21Z" }, { "body": "@s1monw Thanks for the reply. So to rephrase: any field I'm using as \"mlt_fields=...\" needs to\n- either part of the actual source/document\n- or be explicitly marked as \"stored:true\"\n\nOkay. In my case the documents contain two fields. Example:\n\n<pre>\n{\n_index: \"debshots\",\n_type: \"jdbc\",\n_id: \"396\",\n_version: 35,\nexists: true,\n_source: {\n description: \"Alarm Clock for GTK Environments\",\n name: \"alarm-clock\"\n }\n}\n</pre>\n\nBut when I'm GETting http://localhost:9200/debshots/jdbc/396/_mlt Elasticsearch returns zero results:\n\n<pre>\n{\ntook: 3,\ntimed_out: false,\n_shards: {\n total: 1,\n successful: 1,\n failed: 0\n },\nhits: {\n total: 0,\n max_score: null,\n hits: [ ]\n }\n}\n</pre>\n\nThere are many other documents with a description like \"Alarm curl plugin for uWSGI\" so I had assumed that at least the \"Alarm\" is a term that makes it \"more-like-that\"-style.\n\nI'd welcome a hint what is going wrong here. 
Thanks.\n\nAnd I would also welcome a rewrite of that quoted phrase in the documentation because it's in poor English and hard to understand. (I still don't.)\n", "created_at": "2013-11-19T21:01:48Z" }, { "body": "Can you please take this to the mailing list? This is only for development issues. \n\nthanks\n", "created_at": "2013-11-19T21:03:02Z" }, { "body": "@s1monw Will do. Please still consider rewriting this sentence in the documentation to make it understandable.\n", "created_at": "2013-11-19T21:05:38Z" }, { "body": "This issue is now outdated, closing.\n", "created_at": "2015-07-06T14:58:22Z" } ], "number": 2914, "title": "MLT bug when source disabled?" }
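For readers hitting the `No fields found to fetch the 'likeText' from` error above, a minimal sketch of the mapping the thread's advice points to: keep `_source` enabled (or mark the fields as stored) and, if term vectors are wanted, declare `term_vector` per field rather than at the type level. The field names come from the report; the rest is illustrative, not the reporter's final mapping.

```json
{
  "document": {
    "_source": { "enabled": true },
    "properties": {
      "cs":   { "type": "string", "analyzer": "keyword", "term_vector": "yes" },
      "ks":   { "type": "string", "analyzer": "keyword", "term_vector": "yes" },
      "tpcs": { "type": "string", "analyzer": "keyword", "term_vector": "yes" }
    }
  }
}
```

With `_source` available, the original `GET /foo/document/1008534/_mlt?mlt_fields=cs,ks,tpcs&min_doc_freq=2` call has text to build the like-query from.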
{ "body": "Previously, the MLT API would create one MLT query per field and per value.\nThis would make the parameters related to the term selection and query\nformation such as `max_query_terms`, `min_term_freq`, `minimum_should_match`\n(previously `percent_terms_to_match`) or `boost_terms` behave in an unexpected\nmanner. Let's take the common example of looking up similar documents with\nrespect to a list of tag names. Suppose these tags are modeled by a multi-\nvalue field with a keyword analyzer. Performing a MLT request would therefore\nresult in one MLT query per tag, regardless of the value of `max_query_terms`\nor `minimum_should_match`. This would result in a query made of all the tag\nnames, if `min_term_freq` = 1 (no actual selection of terms is taking place),\nor zero tag whatsoever, if `min_term_freq` > 1 (note the default is 2). The\n`boost_terms` parameter would also have unexpected effects as it would depend\non the frequency of the term within field value and, again, not within the\nwhole field.\n\nThis commit fixes these issues by calling upon the term vector API and by\ndirectly passing the response (the terms) to the MoreLikeThisQueryParser. Now\nboth the API and the query yield exactly the same results under any given set\nof parameters, but while keeping the added benefit for the API of calling upon\nthe TV API only once.\n\nCloses #2914\n", "number": 8028, "review_comments": [ { "body": "can we name this `term_vector` ?\n", "created_at": "2014-10-09T19:53:52Z" }, { "body": "this can be a `public static class`\n", "created_at": "2014-10-09T19:54:48Z" }, { "body": "I like the idea yet - this `reindexing` step seems very very wasteful. Can't we simply return a dummy Fields impl that under the hood uses the map returned from `parser.map()`?\n", "created_at": "2014-10-09T20:01:56Z" }, { "body": "the problem with that is that the response already has a field called `term_vectors` ... But this might not be an issue because it might fit into the new `like` parameter in such a way that any object that has the field \"term_vectors\" will be treated as a term vector response.\n\n``` json\n{\n \"query\": {\n \"more_like_this\": {\n \"like\": {\n \"_index\": \"index\",\n \"_type\": \"type\",\n \"_id\": \"id\",\n \"term_vectors\": {\n \"field_name\": {\n ...\n }\n }\n }\n }\n }\n}\n```\n", "created_at": "2014-10-10T14:44:33Z" } ], "title": "Fixes many misbehaving user parameters" }
{ "commits": [ { "message": "MLT API: fixes many miss behaving user parameters\n\nPreviously, the MLT API would create one MLT query per field and per value.\nThis would make the parameters related to the term selection and query\nformation such as `max_query_terms`, `min_term_freq`, `minimum_should_match`\n(previously `percent_terms_to_match`) or `boost_terms` behave in an unexpected\nmanner. Let's take the common example of looking up similar documents with\nrespect to a list of tag names. Suppose these tags are modeled by a multi-\nvalue field with a keyword analyzer. Performing a MLT request would therefore\nresult in one MLT query per tag, regardless of the value of `max_query_terms`\nor `minimum_should_match`. This would result in a query made of all the tag\nnames, if `min_term_freq` = 1 (no actual selection of terms is taking place),\nor zero tag whatsoever, if `min_term_freq` > 1 (note the default is 2). The\n`boost_terms` parameter would also have unexpected effects as it would depend\non the frequency of the term within field value and, again, not within the\nwhole field.\n\nThis commit fixes these issues by calling upon the term vector API and by\ndirectly passing the response (the terms) to the MoreLikeThisQueryParser. Now\nboth the API and the query yield exactly the same results under any given set\nof parameters, but while keeping the added benefit for the API of calling upon\nthe TV API only once.\n\nCloses #2914" }, { "message": "use dummy Fields" }, { "message": "rebased on master + adding breaking changes" } ], "files": [ { "diff": "@@ -15,7 +15,11 @@ to change this behavior\n \n Partial fields were deprecated since 1.0.0beta1 in favor of <<search-request-source-filtering,source filtering>>.\n \n-=== More Like This Field\n+=== More Like This (MLT)\n \n-The More Like This Field query has been removed in favor of the <<query-dsl-mlt-query, More Like This Query>>\n-restrained set to a specific `field`.\n\\ No newline at end of file\n+* The MLT Field Query has been removed in favor of the <<query-dsl-mlt-query,\n+ More Like This Query>> restrained set to a specific `field`.\n+\n+* The MLT API has been improved to better take into account `max_query_terms`\n+ and `minimum_should_match`. 
As a consequence the query should return\n+ different but more relevant results.\n\\ No newline at end of file", "filename": "docs/reference/migration/migrate_2_0.asciidoc", "status": "modified" }, { "diff": "@@ -19,20 +19,16 @@\n \n package org.elasticsearch.action.mlt;\n \n-import org.apache.lucene.document.Field;\n-import org.apache.lucene.index.Term;\n-import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.get.GetRequest;\n-import org.elasticsearch.action.get.GetResponse;\n-import org.elasticsearch.action.get.TransportGetAction;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.TransportSearchAction;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.HandledTransportAction;\n+import org.elasticsearch.action.termvector.TermVectorRequest;\n+import org.elasticsearch.action.termvector.TermVectorResponse;\n+import org.elasticsearch.action.termvector.TransportSingleShardTermVectorAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -42,10 +38,6 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.engine.DocumentMissingException;\n-import org.elasticsearch.index.get.GetField;\n-import org.elasticsearch.index.mapper.*;\n-import org.elasticsearch.index.mapper.internal.SourceFieldMapper;\n-import org.elasticsearch.index.query.BoolQueryBuilder;\n import org.elasticsearch.index.query.MoreLikeThisQueryBuilder;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n@@ -54,12 +46,7 @@\n import org.elasticsearch.transport.TransportResponseHandler;\n import org.elasticsearch.transport.TransportService;\n \n-import java.util.Collections;\n-import java.util.Iterator;\n-import java.util.Set;\n-\n-import static com.google.common.collect.Sets.newHashSet;\n-import static org.elasticsearch.index.query.QueryBuilders.*;\n+import static org.elasticsearch.index.query.QueryBuilders.moreLikeThisQuery;\n import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource;\n \n /**\n@@ -69,7 +56,7 @@ public class TransportMoreLikeThisAction extends HandledTransportAction<MoreLike\n \n private final TransportSearchAction searchAction;\n \n- private final TransportGetAction getAction;\n+ private final TransportSingleShardTermVectorAction termVectorAction;\n \n private final IndicesService indicesService;\n \n@@ -78,11 +65,11 @@ public class TransportMoreLikeThisAction extends HandledTransportAction<MoreLike\n private final TransportService transportService;\n \n @Inject\n- public TransportMoreLikeThisAction(Settings settings, ThreadPool threadPool, TransportSearchAction searchAction, TransportGetAction getAction,\n+ public TransportMoreLikeThisAction(Settings settings, ThreadPool threadPool, TransportSearchAction searchAction, TransportSingleShardTermVectorAction getAction,\n ClusterService clusterService, IndicesService indicesService, TransportService transportService, ActionFilters actionFilters) {\n super(settings, MoreLikeThisAction.NAME, threadPool, transportService, actionFilters);\n this.searchAction = 
searchAction;\n- this.getAction = getAction;\n+ this.termVectorAction = getAction;\n this.indicesService = indicesService;\n this.clusterService = clusterService;\n this.transportService = transportService;\n@@ -116,80 +103,28 @@ protected void doExecute(final MoreLikeThisRequest request, final ActionListener\n redirect(request, concreteIndex, listener, clusterState);\n return;\n }\n- Set<String> getFields = newHashSet();\n- if (request.fields() != null) {\n- Collections.addAll(getFields, request.fields());\n- }\n- // add the source, in case we need to parse it to get fields\n- getFields.add(SourceFieldMapper.NAME);\n \n- GetRequest getRequest = new GetRequest(request, request.index())\n- .fields(getFields.toArray(new String[getFields.size()]))\n+ String[] selectedFields = (request.fields() == null || request.fields().length == 0) ? new String[]{\"*\"} : request.fields();\n+ TermVectorRequest termVectorRequest = new TermVectorRequest()\n+ .index(request.index())\n .type(request.type())\n .id(request.id())\n+ .selectedFields(selectedFields)\n .routing(request.routing())\n .listenerThreaded(true)\n .operationThreaded(true);\n \n request.beforeLocalFork();\n- getAction.execute(getRequest, new ActionListener<GetResponse>() {\n+ termVectorAction.execute(termVectorRequest, new ActionListener<TermVectorResponse>() {\n @Override\n- public void onResponse(GetResponse getResponse) {\n- if (!getResponse.isExists()) {\n+ public void onResponse(TermVectorResponse termVectorResponse) {\n+ if (!termVectorResponse.isExists()) {\n listener.onFailure(new DocumentMissingException(null, request.type(), request.id()));\n return;\n }\n- final BoolQueryBuilder boolBuilder = boolQuery();\n+ final MoreLikeThisQueryBuilder mltQuery = getMoreLikeThis(request, true);\n try {\n- final DocumentMapper docMapper = indicesService.indexServiceSafe(concreteIndex).mapperService().documentMapper(request.type());\n- if (docMapper == null) {\n- throw new ElasticsearchException(\"No DocumentMapper found for type [\" + request.type() + \"]\");\n- }\n- final Set<String> fields = newHashSet();\n- if (request.fields() != null) {\n- for (String field : request.fields()) {\n- FieldMappers fieldMappers = docMapper.mappers().smartName(field);\n- if (fieldMappers != null) {\n- fields.add(fieldMappers.mapper().names().indexName());\n- } else {\n- fields.add(field);\n- }\n- }\n- }\n-\n- if (!fields.isEmpty()) {\n- // if fields are not empty, see if we got them in the response\n- for (Iterator<String> it = fields.iterator(); it.hasNext(); ) {\n- String field = it.next();\n- GetField getField = getResponse.getField(field);\n- if (getField != null) {\n- for (Object value : getField.getValues()) {\n- addMoreLikeThis(request, boolBuilder, getField.getName(), value.toString(), true);\n- }\n- it.remove();\n- }\n- }\n- if (!fields.isEmpty()) {\n- // if we don't get all the fields in the get response, see if we can parse the source\n- parseSource(getResponse, boolBuilder, docMapper, fields, request);\n- }\n- } else {\n- // we did not ask for any fields, try and get it from the source\n- parseSource(getResponse, boolBuilder, docMapper, fields, request);\n- }\n-\n- if (!boolBuilder.hasClauses()) {\n- // no field added, fail\n- listener.onFailure(new ElasticsearchException(\"No fields found to fetch the 'likeText' from\"));\n- return;\n- }\n-\n- // exclude myself\n- if (!request.include()) {\n- Term uidTerm = docMapper.uidMapper().term(request.type(), request.id());\n- boolBuilder.mustNot(termQuery(uidTerm.field(), uidTerm.text()));\n- 
boolBuilder.adjustPureNegative(false);\n- }\n+ mltQuery.setTermVectorResponse(termVectorResponse);\n } catch (Throwable e) {\n listener.onFailure(e);\n return;\n@@ -210,7 +145,7 @@ public void onResponse(GetResponse getResponse) {\n .scroll(request.searchScroll())\n .listenerThreaded(request.listenerThreaded());\n \n- SearchSourceBuilder extraSource = searchSource().query(boolBuilder);\n+ SearchSourceBuilder extraSource = searchSource().query(mltQuery);\n if (request.searchFrom() != 0) {\n extraSource.from(request.searchFrom());\n }\n@@ -277,52 +212,8 @@ public String executor() {\n });\n }\n \n- private void parseSource(GetResponse getResponse, final BoolQueryBuilder boolBuilder, DocumentMapper docMapper, final Set<String> fields, final MoreLikeThisRequest request) {\n- if (getResponse.isSourceEmpty()) {\n- return;\n- }\n- docMapper.parse(SourceToParse.source(getResponse.getSourceAsBytesRef()).type(request.type()).id(request.id()), new DocumentMapper.ParseListenerAdapter() {\n- @Override\n- public boolean beforeFieldAdded(FieldMapper fieldMapper, Field field, Object parseContext) {\n- if (!field.fieldType().indexed()) {\n- return false;\n- }\n- if (fieldMapper instanceof InternalMapper) {\n- return true;\n- }\n- String value = fieldMapper.value(convertField(field)).toString();\n- if (value == null) {\n- return false;\n- }\n-\n- if (fields.isEmpty() || fields.contains(field.name())) {\n- addMoreLikeThis(request, boolBuilder, fieldMapper, field, !fields.isEmpty());\n- }\n-\n- return false;\n- }\n- });\n- }\n-\n- private Object convertField(Field field) {\n- if (field.stringValue() != null) {\n- return field.stringValue();\n- } else if (field.binaryValue() != null) {\n- return BytesRef.deepCopyOf(field.binaryValue()).bytes;\n- } else if (field.numericValue() != null) {\n- return field.numericValue();\n- } else {\n- throw new ElasticsearchIllegalStateException(\"Field should have either a string, numeric or binary value\");\n- }\n- }\n-\n- private void addMoreLikeThis(MoreLikeThisRequest request, BoolQueryBuilder boolBuilder, FieldMapper fieldMapper, Field field, boolean failOnUnsupportedField) {\n- addMoreLikeThis(request, boolBuilder, field.name(), fieldMapper.value(convertField(field)).toString(), failOnUnsupportedField);\n- }\n-\n- private void addMoreLikeThis(MoreLikeThisRequest request, BoolQueryBuilder boolBuilder, String fieldName, String likeText, boolean failOnUnsupportedField) {\n- MoreLikeThisQueryBuilder mlt = moreLikeThisQuery(fieldName)\n- .likeText(likeText)\n+ private MoreLikeThisQueryBuilder getMoreLikeThis(MoreLikeThisRequest request, boolean failOnUnsupportedField) {\n+ return moreLikeThisQuery(request.fields())\n .minimumShouldMatch(request.minimumShouldMatch())\n .boostTerms(request.boostTerms())\n .minDocFreq(request.minDocFreq())\n@@ -332,7 +223,7 @@ private void addMoreLikeThis(MoreLikeThisRequest request, BoolQueryBuilder boolB\n .minTermFreq(request.minTermFreq())\n .maxQueryTerms(request.maxQueryTerms())\n .stopWords(request.stopWords())\n+ .include(request.include())\n .failOnUnsupportedField(failOnUnsupportedField);\n- boolBuilder.should(mlt);\n }\n }", "filename": "src/main/java/org/elasticsearch/action/mlt/TransportMoreLikeThisAction.java", "status": "modified" }, { "diff": "@@ -391,7 +391,7 @@ public boolean hasPayloads() {\n }\n }\n \n- private final class TermVectorDocsAndPosEnum extends DocsAndPositionsEnum {\n+ public static final class TermVectorDocsAndPosEnum extends DocsAndPositionsEnum {\n private boolean hasPositions;\n private boolean hasOffsets;\n 
private boolean hasPayloads;\n@@ -403,7 +403,7 @@ private final class TermVectorDocsAndPosEnum extends DocsAndPositionsEnum {\n private BytesRefBuilder[] payloads;\n private int[] endOffsets;\n \n- private DocsAndPositionsEnum reset(int[] positions, int[] startOffsets, int[] endOffsets, BytesRefBuilder[] payloads, int freq) {\n+ DocsAndPositionsEnum reset(int[] positions, int[] startOffsets, int[] endOffsets, BytesRefBuilder[] payloads, int freq) {\n curPos = -1;\n doc = -1;\n this.hasPositions = positions != null;", "filename": "src/main/java/org/elasticsearch/action/termvector/TermVectorFields.java", "status": "modified" }, { "diff": "@@ -0,0 +1,296 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.termvector;\n+\n+import org.apache.lucene.index.*;\n+import org.apache.lucene.util.Bits;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.action.termvector.TermVectorFields.TermVectorDocsAndPosEnum;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+\n+import java.io.IOException;\n+import java.util.Comparator;\n+import java.util.HashMap;\n+import java.util.Iterator;\n+import java.util.Map;\n+\n+/**\n+ * This class is meant to parse the JSON response of a {@link TermVectorResponse} so that term vectors\n+ * could be passed from {@link org.elasticsearch.action.mlt.TransportMoreLikeThisAction}\n+ * to {@link org.elasticsearch.index.query.MoreLikeThisQueryParser}.\n+ *\n+ * <p>\n+ * At the moment only <em>_index</em>, <em>_type</em>, <em>_id</em> and <em>term_vectors</em> are\n+ * parsed from the response. 
Term vectors are returned as a {@link Fields} object.\n+ * </p>\n+*/\n+public class TermVectorResponseParser {\n+\n+ public static class ParsedTermVectorResponse {\n+\n+ private final String index;\n+\n+ private final String type;\n+\n+ private final String id;\n+\n+ private final Fields termVectorFields;\n+\n+ public ParsedTermVectorResponse(String index, String type, String id, Fields termVectorResponseFields) {\n+ this.index = index;\n+ this.type = type;\n+ this.id = id;\n+ this.termVectorFields = termVectorResponseFields;\n+ }\n+\n+ public String index() {\n+ return index;\n+ }\n+\n+ public String type() {\n+ return type;\n+ }\n+\n+ public String id() {\n+ return id;\n+ }\n+\n+ public Fields termVectorFields() {\n+ return termVectorFields;\n+ }\n+ }\n+\n+ private XContentParser parser;\n+\n+ public TermVectorResponseParser(XContentParser parser) throws IOException {\n+ this.parser = parser;\n+ }\n+\n+ public ParsedTermVectorResponse parse() throws IOException {\n+ String index = null;\n+ String type = null;\n+ String id = null;\n+ Fields termVectorFields = null;\n+ XContentParser.Token token;\n+ String currentFieldName = null;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ currentFieldName = parser.currentName();\n+ } else if (currentFieldName != null) {\n+ if (currentFieldName.equals(\"_index\")) {\n+ index = parser.text();\n+ } else if (currentFieldName.equals(\"_type\")) {\n+ type = parser.text();\n+ } else if (currentFieldName.equals(\"_id\")) {\n+ id = parser.text();\n+ } else if (currentFieldName.equals(\"term_vectors\")) {\n+ termVectorFields = parseTermVectors();\n+ }\n+ }\n+ }\n+ if (index == null || type == null || id == null || termVectorFields == null) {\n+ throw new ElasticsearchParseException(\"\\\"_index\\\", \\\"_type\\\", \\\"_id\\\" or \\\"term_vectors\\\" missing from the response!\");\n+ }\n+ return new ParsedTermVectorResponse(index, type, id, termVectorFields);\n+ }\n+\n+ private Fields parseTermVectors() throws IOException {\n+ Map<String, Terms> termVectors = new HashMap<>();\n+ XContentParser.Token token;\n+ String currentFieldName;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ currentFieldName = parser.currentName();\n+ Map<String, Object> terms = null;\n+ Map<String, Object> fieldStatistics = null;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (parser.currentName().equals(\"terms\")) {\n+ parser.nextToken();\n+ terms = parser.map();\n+ }\n+ if (parser.currentName().equals(\"field_statistics\")) {\n+ parser.nextToken();\n+ fieldStatistics = parser.map();\n+ }\n+ }\n+ if (terms != null) {\n+ termVectors.put(currentFieldName, makeTermVector(terms, fieldStatistics));\n+ }\n+ }\n+ }\n+ return makeTermVectors(termVectors);\n+ }\n+\n+ private Terms makeTermVector(final Map<String, Object> terms, final Map<String, Object> fieldStatistics) {\n+ return new Terms() {\n+ @Override\n+ public TermsEnum iterator(TermsEnum reuse) throws IOException {\n+ return makeTermsEnum(terms);\n+ }\n+\n+ @Override\n+ public Comparator<BytesRef> getComparator() {\n+ return BytesRef.getUTF8SortedAsUnicodeComparator();\n+ }\n+\n+ @Override\n+ public long size() throws IOException {\n+ return terms.size();\n+ }\n+\n+ @Override\n+ public long getSumTotalTermFreq() throws IOException {\n+ return fieldStatistics != null ? 
(long) fieldStatistics.get(\"sum_ttf\") : -1;\n+ }\n+\n+ @Override\n+ public long getSumDocFreq() throws IOException {\n+ return fieldStatistics != null ? (long) fieldStatistics.get(\"sum_doc_freq\") : -1;\n+ }\n+\n+ @Override\n+ public int getDocCount() throws IOException {\n+ return fieldStatistics != null ? (int) fieldStatistics.get(\"doc_count\") : -1;\n+ }\n+\n+ @Override\n+ public boolean hasFreqs() {\n+ return true;\n+ }\n+\n+ @Override\n+ public boolean hasOffsets() {\n+ return false;\n+ }\n+\n+ @Override\n+ public boolean hasPositions() {\n+ return false;\n+ }\n+\n+ @Override\n+ public boolean hasPayloads() {\n+ return false;\n+ }\n+ };\n+ }\n+\n+ private TermsEnum makeTermsEnum(final Map<String, Object> terms) {\n+ final Iterator<String> iterator = terms.keySet().iterator();\n+ return new TermsEnum() {\n+ BytesRef currentTerm;\n+ int termFreq = -1;\n+ int docFreq = -1;\n+ long totalTermFreq = -1;\n+\n+ @Override\n+ public BytesRef next() throws IOException {\n+ if (iterator.hasNext()) {\n+ String term = iterator.next();\n+ setTermStats(term);\n+ currentTerm = new BytesRef(term);\n+ return currentTerm;\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ private void setTermStats(String term) {\n+ // we omit positions, offsets and payloads\n+ Map<String, Object> termStats = (Map<String, Object>) terms.get(term);\n+ termFreq = (int) termStats.get(\"term_freq\");\n+ if (termStats.containsKey(\"doc_freq\")) {\n+ docFreq = (int) termStats.get(\"doc_freq\");\n+ }\n+ if (termStats.containsKey(\"total_term_freq\")) {\n+ totalTermFreq = (int) termStats.get(\"total_term_freq\");\n+ }\n+ }\n+\n+ @Override\n+ public BytesRef term() throws IOException {\n+ return currentTerm;\n+ }\n+\n+ @Override\n+ public SeekStatus seekCeil(BytesRef text) throws IOException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public void seekExact(long ord) throws IOException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public long ord() throws IOException {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public int docFreq() throws IOException {\n+ return docFreq;\n+ }\n+\n+ @Override\n+ public long totalTermFreq() throws IOException {\n+ return totalTermFreq;\n+ }\n+\n+ @Override\n+ public DocsEnum docs(Bits liveDocs, DocsEnum reuse, int flags) throws IOException {\n+ return docsAndPositions(liveDocs, reuse instanceof DocsAndPositionsEnum ? (DocsAndPositionsEnum) reuse : null, 0);\n+ }\n+\n+ @Override\n+ public DocsAndPositionsEnum docsAndPositions(Bits liveDocs, DocsAndPositionsEnum reuse, int flags) throws IOException {\n+ final TermVectorDocsAndPosEnum retVal = reuse instanceof TermVectorDocsAndPosEnum ? 
(TermVectorDocsAndPosEnum) reuse\n+ : new TermVectorDocsAndPosEnum();\n+ return retVal.reset(null, null, null, null, termFreq); // only care about term freq\n+ }\n+\n+ @Override\n+ public Comparator<BytesRef> getComparator() {\n+ return BytesRef.getUTF8SortedAsUnicodeComparator();\n+ }\n+ };\n+ }\n+\n+ private Fields makeTermVectors(final Map<String, Terms> termVectors) {\n+ return new Fields() {\n+ @Override\n+ public Iterator<String> iterator() {\n+ return termVectors.keySet().iterator();\n+ }\n+\n+ @Override\n+ public Terms terms(String field) throws IOException {\n+ return termVectors.get(field);\n+ }\n+\n+ @Override\n+ public int size() {\n+ return termVectors.size();\n+ }\n+ };\n+ }\n+}\n+", "filename": "src/main/java/org/elasticsearch/action/termvector/TermVectorResponseParser.java", "status": "added" }, { "diff": "@@ -19,11 +19,9 @@\n \n package org.elasticsearch.index.query;\n \n-import com.google.common.collect.Lists;\n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n-import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.action.get.MultiGetRequest;\n+import org.elasticsearch.action.termvector.TermVectorResponse;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.lucene.uid.Versions;\n@@ -132,6 +130,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n private final String[] fields;\n private List<Item> docs = new ArrayList<>();\n+ private TermVectorResponse termVectorResponse = null;\n private Boolean include = null;\n private String minimumShouldMatch = null;\n private int minTermFreq = -1;\n@@ -208,6 +207,15 @@ public MoreLikeThisQueryBuilder docs(Item... docs) {\n return like(docs);\n }\n \n+ /* Allow to directly pass the terms as is. 
Only used internally by MLT API.\n+ *\n+ * @param termVectorResponse\n+ */\n+ public MoreLikeThisQueryBuilder setTermVectorResponse(TermVectorResponse termVectorResponse) {\n+ this.termVectorResponse = termVectorResponse;\n+ return this;\n+ }\n+\n public MoreLikeThisQueryBuilder include(boolean include) {\n this.include = include;\n return this;\n@@ -346,15 +354,22 @@ protected void doXContent(XContentBuilder builder, Params params) throws IOExcep\n }\n builder.endArray();\n }\n- if (this.docs.isEmpty()) {\n+ // at least like_text or one item is required\n+ if (docs.isEmpty() && termVectorResponse == null) {\n throw new ElasticsearchIllegalArgumentException(\"more_like_this requires '\" + likeFieldName + \"' to be provided\");\n- } else {\n+ }\n+ if (!docs.isEmpty()) {\n if (docs.size() == 1) {\n builder.field(likeFieldName, docs);\n } else {\n builder.array(likeFieldName, docs);\n }\n }\n+ if (termVectorResponse != null) {\n+ builder.startObject(\"term_vector_response\");\n+ builder.value(termVectorResponse);\n+ builder.endObject();\n+ }\n if (minimumShouldMatch != null) {\n builder.field(MoreLikeThisQueryParser.Fields.MINIMUM_SHOULD_MATCH.getPreferredName(), minimumShouldMatch);\n }", "filename": "src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryBuilder.java", "status": "modified" }, { "diff": "@@ -28,8 +28,10 @@\n import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.action.DocumentRequest;\n import org.elasticsearch.action.termvector.MultiTermVectorsRequest;\n import org.elasticsearch.action.termvector.TermVectorRequest;\n+import org.elasticsearch.action.termvector.TermVectorResponseParser;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.Strings;\n@@ -71,6 +73,7 @@ public static class Fields {\n public static final ParseField DOCUMENT_IDS = new ParseField(\"ids\").withAllDeprecated(\"like\");\n public static final ParseField DOCUMENTS = new ParseField(\"docs\").withAllDeprecated(\"like\");\n public static final ParseField LIKE = new ParseField(\"like\");\n+ public static final ParseField TERM_VECTOR_RESPONSE = new ParseField(\"term_vector_response\");\n public static final ParseField INCLUDE = new ParseField(\"include\");\n }\n \n@@ -105,7 +108,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n List<String> likeTexts = new ArrayList<>();\n MultiTermVectorsRequest items = new MultiTermVectorsRequest();\n-\n+ TermVectorResponseParser.ParsedTermVectorResponse parsedTermVectorResponse = null; // only used by MLT API\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n@@ -185,18 +188,17 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (Fields.LIKE.match(currentFieldName, parseContext.parseFlags())) {\n parseLikeField(parser, likeTexts, items);\n+ } else if (Fields.TERM_VECTOR_RESPONSE.match(currentFieldName, parseContext.parseFlags())) {\n+ parsedTermVectorResponse = new TermVectorResponseParser(parser).parse();\n } else {\n throw new QueryParsingException(parseContext.index(), \"[mlt] query does not support [\" + currentFieldName + \"]\");\n }\n }\n }\n \n- if (likeTexts.isEmpty() && items.isEmpty()) {\n+ if 
(likeTexts.isEmpty() && items.isEmpty() && parsedTermVectorResponse == null) {\n throw new QueryParsingException(parseContext.index(), \"more_like_this requires at least 'like_text' or 'ids/docs' to be specified\");\n }\n- if (moreLikeFields != null && moreLikeFields.isEmpty()) {\n- throw new QueryParsingException(parseContext.index(), \"more_like_this requires 'fields' to be non-empty\");\n- }\n \n // set analyzer\n if (analyzer == null) {\n@@ -205,7 +207,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n mltQuery.setAnalyzer(analyzer);\n \n // set like text fields\n- boolean useDefaultField = (moreLikeFields == null);\n+ boolean useDefaultField = (moreLikeFields == null) || moreLikeFields.isEmpty();\n if (useDefaultField) {\n moreLikeFields = Lists.newArrayList(parseContext.defaultField());\n }\n@@ -221,6 +223,17 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n parseContext.addNamedQuery(queryName, mltQuery);\n }\n \n+ // handle term vectors directly, only used internally by MLT API\n+ if (parsedTermVectorResponse != null) {\n+ mltQuery.setLikeText(parsedTermVectorResponse.termVectorFields());\n+ BooleanQuery boolQuery = new BooleanQuery();\n+ boolQuery.add(mltQuery, BooleanClause.Occur.SHOULD);\n+ if (!include) {\n+ addExcludeClause(boolQuery, parsedTermVectorResponse.type(), parsedTermVectorResponse.id());\n+ }\n+ return boolQuery;\n+ }\n+\n // handle like texts\n if (!likeTexts.isEmpty()) {\n mltQuery.setLikeText(likeTexts);\n@@ -257,9 +270,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n boolQuery.add(mltQuery, BooleanClause.Occur.SHOULD);\n // exclude the items from the search\n if (!include) {\n- TermsFilter filter = new TermsFilter(UidFieldMapper.NAME, Uid.createUids(items.getRequests()));\n- ConstantScoreQuery query = new ConstantScoreQuery(filter);\n- boolQuery.add(query, BooleanClause.Occur.MUST_NOT);\n+ addExcludeClause(boolQuery, items.getRequests());\n }\n return boolQuery;\n }\n@@ -305,4 +316,14 @@ private List<String> removeUnsupportedFields(List<String> moreLikeFields, Analyz\n }\n return moreLikeFields;\n }\n+\n+ private void addExcludeClause(BooleanQuery boolQuery, List<? 
extends DocumentRequest> requests) {\n+ TermsFilter filter = new TermsFilter(UidFieldMapper.NAME, Uid.createUids(requests));\n+ ConstantScoreQuery query = new ConstantScoreQuery(filter);\n+ boolQuery.add(query, BooleanClause.Occur.MUST_NOT);\n+ }\n+\n+ private void addExcludeClause(BooleanQuery boolQuery, String type, String id) {\n+ addExcludeClause(boolQuery, Lists.newArrayList(new TermVectorRequest().id(id).type(type)));\n+ }\n }\n\\ No newline at end of file", "filename": "src/main/java/org/elasticsearch/index/query/MoreLikeThisQueryParser.java", "status": "modified" }, { "diff": "@@ -56,7 +56,8 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n //needs some work if it is to be used in a REST context like this too\n // See the MoreLikeThisQueryParser constants that hold the valid syntax\n mltRequest.fields(request.paramAsStringArray(\"mlt_fields\", null));\n- mltRequest.minimumShouldMatch(request.param(\"minimum_should_match\", \"0\"));\n+ mltRequest.percentTermsToMatch(request.paramAsFloat(\"percent_terms_to_match\", 0));\n+ mltRequest.minimumShouldMatch(request.param(\"minimum_should_match\", mltRequest.minimumShouldMatch()));\n mltRequest.minTermFreq(request.paramAsInt(\"min_term_freq\", -1));\n mltRequest.maxQueryTerms(request.paramAsInt(\"max_query_terms\", -1));\n mltRequest.stopWords(request.paramAsStringArray(\"stop_words\", null));", "filename": "src/main/java/org/elasticsearch/rest/action/mlt/RestMoreLikeThisAction.java", "status": "modified" }, { "diff": "@@ -525,7 +525,7 @@ public void testMoreLikeThisMultiValueFields() throws Exception {\n .maxQueryTerms(max_query_terms).percentTermsToMatch(0))\n .actionGet();\n assertSearchResponse(response);\n- assertHitCount(response, values.length);\n+ assertHitCount(response, max_query_terms);\n }\n }\n ", "filename": "src/test/java/org/elasticsearch/mlt/MoreLikeThisActionTests.java", "status": "modified" } ] }
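The migration note in this diff drops the MLT Field query in favour of the `more_like_this` query restricted to specific `fields`. As a rough sketch (field name and text are placeholders), the search-time equivalent looks like the following; with this change the `_mlt` API builds the same query internally from the term vector response:

```json
{
  "query": {
    "more_like_this": {
      "fields": ["tags"],
      "like_text": "text to find similar documents for",
      "min_term_freq": 1,
      "max_query_terms": 12
    }
  }
}
```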
{ "body": "PR for #7349\n", "comments": [ { "body": "Given that the extraction of the parent filter gets a bit more complicated, maybe this should be extracted to a utility method to make sure it doesn't get out-of-sync between the query and the filter?\n", "created_at": "2014-08-26T08:49:40Z" }, { "body": "@jpountz good point, I'll change that.\n", "created_at": "2014-08-26T09:12:00Z" }, { "body": "@jpountz Updated the pr, and common parsing logic has been moved to a static helper method. \n", "created_at": "2014-08-26T16:40:52Z" }, { "body": "LGTM\n", "created_at": "2014-08-26T16:54:52Z" } ], "number": 7362, "title": "If _parent field points to a non existing parent type, then skip the has_parent query/filter" }
{ "body": "A bug introduced via: #7362\n\nThe has_parent query does take the parent filter into account when executing the inner query.\n", "number": 8020, "review_comments": [], "title": "`has_parent` filter must take parent filter into account when executing the inner query/filter" }
{ "commits": [ { "message": "Parent/child: has_parent filter must take parent filter into account when executing the inner query/filter.\n\nCloses #8020\nCloses #7943" } ], "files": [ { "diff": "@@ -23,10 +23,6 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter;\n-import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n import org.elasticsearch.index.query.support.XContentStructure;\n import org.elasticsearch.index.search.child.CustomQueryWrappingFilter;\n \n@@ -63,7 +59,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n String filterName = null;\n String currentFieldName = null;\n XContentParser.Token token;\n- XContentStructure.InnerQuery innerQuery = null;\n+ XContentStructure.InnerQuery iq = null;\n XContentStructure.InnerFilter innerFilter = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n@@ -74,7 +70,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n // XContentStructure.<type> facade to parse if available,\n // or delay parsing if not.\n if (\"query\".equals(currentFieldName)) {\n- innerQuery = new XContentStructure.InnerQuery(parseContext, parentType == null ? null : new String[] {parentType});\n+ iq = new XContentStructure.InnerQuery(parseContext, parentType == null ? null : new String[] {parentType});\n queryFound = true;\n } else if (\"filter\".equals(currentFieldName)) {\n innerFilter = new XContentStructure.InnerFilter(parseContext, parentType == null ? 
null : new String[] {parentType});\n@@ -103,18 +99,18 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n throw new QueryParsingException(parseContext.index(), \"[has_parent] filter requires 'parent_type' field\");\n }\n \n- Query query;\n+ Query innerQuery;\n if (queryFound) {\n- query = innerQuery.asQuery(parentType);\n+ innerQuery = iq.asQuery(parentType);\n } else {\n- query = innerFilter.asFilter(parentType);\n+ innerQuery = innerFilter.asFilter(parentType);\n }\n \n- if (query == null) {\n+ if (innerQuery == null) {\n return null;\n }\n \n- Query parentQuery = createParentQuery(query, parentType, false, parseContext);\n+ Query parentQuery = createParentQuery(innerQuery, parentType, false, parseContext);\n if (parentQuery == null) {\n return null;\n }", "filename": "src/main/java/org/elasticsearch/index/query/HasParentFilterParser.java", "status": "modified" }, { "diff": "@@ -122,14 +122,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n return null;\n }\n \n- DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType);\n- if (parentDocMapper == null) {\n- throw new QueryParsingException(parseContext.index(), \"[has_parent] query configured 'parent_type' [\" + parentType + \"] is not a valid type\");\n- }\n-\n innerQuery.setBoost(boost);\n- // wrap the query with type query\n- innerQuery = new XFilteredQuery(innerQuery, parseContext.cacheFilter(parentDocMapper.typeFilter(), null));\n Query query = createParentQuery(innerQuery, parentType, score, parseContext);\n if (query == null) {\n return null;\n@@ -143,8 +136,13 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n \n static Query createParentQuery(Query innerQuery, String parentType, boolean score, QueryParseContext parseContext) {\n+ DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType);\n+ if (parentDocMapper == null) {\n+ throw new QueryParsingException(parseContext.index(), \"[has_parent] query configured 'parent_type' [\" + parentType + \"] is not a valid type\");\n+ }\n+\n Set<String> parentTypes = new HashSet<>(5);\n- parentTypes.add(parentType);\n+ parentTypes.add(parentDocMapper.type());\n ParentChildIndexFieldData parentChildIndexFieldData = null;\n for (DocumentMapper documentMapper : parseContext.mapperService().docMappers(false)) {\n ParentFieldMapper parentFieldMapper = documentMapper.parentFieldMapper();\n@@ -182,11 +180,13 @@ static Query createParentQuery(Query innerQuery, String parentType, boolean scor\n return null;\n }\n \n+ // wrap the query with type query\n+ innerQuery = new XFilteredQuery(innerQuery, parseContext.cacheFilter(parentDocMapper.typeFilter(), null));\n FixedBitSetFilter childrenFilter = parseContext.fixedBitSetFilter(new NotFilter(parentFilter));\n if (score) {\n- return new ParentQuery(parentChildIndexFieldData, innerQuery, parentType, childrenFilter);\n+ return new ParentQuery(parentChildIndexFieldData, innerQuery, parentDocMapper.type(), childrenFilter);\n } else {\n- return new ParentConstantScoreQuery(parentChildIndexFieldData, innerQuery, parentType, childrenFilter);\n+ return new ParentConstantScoreQuery(parentChildIndexFieldData, innerQuery, parentDocMapper.type(), childrenFilter);\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java", "status": "modified" }, { "diff": "@@ -2160,6 +2160,40 @@ public void testParentFieldInMultiMatchField() throws Exception {\n 
assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n }\n \n+ @Test\n+ public void testTypeIsAppliedInHasParentInnerQuery() throws Exception {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"parent\")\n+ .addMapping(\"child\", \"_parent\", \"type=parent\"));\n+ ensureGreen();\n+\n+ List<IndexRequestBuilder> indexRequests = new ArrayList<>();\n+ indexRequests.add(client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"field1\", \"a\"));\n+ indexRequests.add(client().prepareIndex(\"test\", \"child\", \"1\").setParent(\"1\").setSource(\"{}\"));\n+ indexRequests.add(client().prepareIndex(\"test\", \"child\", \"2\").setParent(\"1\").setSource(\"{}\"));\n+ indexRandom(true, indexRequests);\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(constantScoreQuery(hasParentFilter(\"parent\", notFilter(termFilter(\"field1\", \"a\")))))\n+ .get();\n+ assertHitCount(searchResponse, 0l);\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(hasParentQuery(\"parent\", constantScoreQuery(notFilter(termFilter(\"field1\", \"a\")))))\n+ .get();\n+ assertHitCount(searchResponse, 0l);\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(constantScoreQuery(hasParentFilter(\"parent\", termFilter(\"field1\", \"a\"))))\n+ .get();\n+ assertHitCount(searchResponse, 2l);\n+\n+ searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(hasParentQuery(\"parent\", constantScoreQuery(termFilter(\"field1\", \"a\"))))\n+ .get();\n+ assertHitCount(searchResponse, 2l);\n+ }\n+\n List<IndexRequestBuilder> createMinMaxDocBuilders() {\n List<IndexRequestBuilder> indexBuilders = new ArrayList<>();\n // Parent 1 and its children", "filename": "src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java", "status": "modified" } ] }
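For reference, the REST form of the first test case added above (the type, field and value come straight from the test). Before this fix the inner `not` filter was not constrained to the `parent` type, so the filter could match incorrectly; with the parent filter applied it matches nothing, since the only `parent` document has `field1: a`:

```json
{
  "query": {
    "constant_score": {
      "filter": {
        "has_parent": {
          "parent_type": "parent",
          "filter": { "not": { "term": { "field1": "a" } } }
        }
      }
    }
  }
}
```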
{ "body": "If snapshot metadata file disappears from a repository or it wasn't created due to network issues or master node crash during snapshot process, such snapshot cannot be deleted. Was originally reported in https://github.com/elasticsearch/elasticsearch/issues/5958#issuecomment-57136510\n", "comments": [ { "body": "Hi, I just ran into this issue today and was wondering if you had any idea when this patch would be released. Is the target version 1.3.5 or 1.4? Thanks!\n", "created_at": "2014-11-04T21:01:11Z" }, { "body": "@saahn yes, 1.3.5 and 1.4.0. You can check labels on the issue #7981 to see all versions that it was merged into.\n", "created_at": "2014-11-04T23:03:28Z" }, { "body": "We've run into (what I believe) is this issue. Running 1.4.4 and there is a snapshot that is:\n\nIN_PROGRESS when I check localhost:9200/_snapshot/backups/snapshot_name endpoint\nABORTED when I check localhost:9200/_cluster/state\n\n-XDELETE hangs when attempting to delete the snapshot\n\nThe reason I believe it is this issue - we upgraded and restarted Elasticsearch around the time this snapshot was running. \n\n@imotov I also tried to use your cleanup script - however it returns with \"No snapshots found\" and the snapshot is still stuck in the same states above.\n\nAny other ideas on a way to force delete this snapshot? It is currently blocking us from getting any other snapshots created.\n", "created_at": "2015-02-26T10:05:40Z" }, { "body": "@sarkis which type of repository and which version are you using? Could you post the snapshot part of the cluster state here?\n", "created_at": "2015-02-26T12:57:46Z" }, { "body": "@imotov snapshot part of cluster state: https://gist.github.com/sarkis/f46de23dc81b1dba0d1a\n\nWe're now on ES 1.4.4 the snapshot was started on ES 1.4.2, and we ran into troubles as the snapshot was running while upgrading 1.4.2 -> 1.4.4. We are using an fs snapshot, more info:\n\n{\"backups_sea\":{\"type\":\"fs\",\"settings\":{\"compress\":\"true\",\"location\":\"/path/to/snapshot_dir\"}}\n", "created_at": "2015-02-26T17:54:07Z" }, { "body": "@sarkis how many nodes are in the cluster right now? Is the node bhwQqwZ2QuCUPjcZGrGpuQ still running? \n", "created_at": "2015-02-26T17:59:30Z" }, { "body": "@imotov 1 gateway and 2 data nodes (1 set to master)\n\nI cannot find that node name - I assume it was renamed upon rolling restart or possibly from the upgrade?\n", "created_at": "2015-02-26T18:04:38Z" }, { "body": "@sarkis Are you sure it doesn't appear in `curl \"localhost:9200/_nodes?pretty\"` output?\n", "created_at": "2015-02-26T18:12:25Z" }, { "body": "@imotov just double checked - nothing.\n", "created_at": "2015-02-26T18:22:50Z" }, { "body": "@sarkis did you try running cleanup script after the upgrade or before? Did you restart master node during upgrade or it is still running 1.4.2? Does master node have proper access to the shared file system, or read/write operations with the shared files system still hang?\n", "created_at": "2015-02-26T18:40:24Z" }, { "body": "@imotov I tried running the cleanup script after the upgrade.\n\nThe master node was restarted and is running 1.4.4 - if I had known about this issue I would have stopped the snapshot before rolling restarts/upgrades :(\n\nThe snapshot directory is a nfs mount and the \"elasticsearch\" user does have proper read/write perms. 
I just double checked this on all nodes in the cluster.\n\nThanks a lot for the help and quick responses.\n", "created_at": "2015-02-26T18:45:36Z" }, { "body": "@sarkis I am completely puzzled about what went wrong and I am still trying to figure out what happened and how to reproduce the issue. With a single master node, the snapshot should have disappeared during restart. There is simply no place for it to survive since snapshot information is not getting persisted on disk. Even if the snapshot somehow survived the restart, the cleanup script should have removed it. So, I feel that I am missing something important about the issue. \n\nWhen you said rolling restart, what did you mean? Could you describe the process in as many details as possible. Was snapshot stuck before the upgrade or was it simply taking long time. What was the upgrade process? Which nodes did you restart first? \n", "created_at": "2015-02-26T18:58:05Z" }, { "body": "@imotov Sure - so we have 2 data nodes and 1 gateway node (total of 3 nodes). The rolling restart was done following the recommended way to do so via elasticsearch documentation: \n\n1) turn off allocation\n2) upgrade / restart gateway\n3) turn on allocation (wait for green)\n4) turn off allocation\n5) upgrade / restart non-master data node\n6) turn on allocation (wait for green)\n7) turn off allocation\n8) upgrade / restart master data node\n9) turn on allocation\n\nI think we have tried everything we could at this point as well. Would you recommend removing/adding back the repo? What's the best way to just get around this? I understand you wanted to reproduce it on your end but I'd like to get snapshots working ASAP. \n\nUpdate: I know there isn't truly a \"master\" - I called the above nodes master and non-master based off of info from paramedic at the time of upgrades\n", "created_at": "2015-02-26T19:55:49Z" }, { "body": "So they are all master-eligible nodes! That explains the first part - how snapshot survived restart. It doesn't explain how it happened in the first place, though. Removing and adding back the repo is not going to help. There are really only two ways to go - I can try to figure out what went wrong and fix cleanup script to clean the issue or you can do full cluster restart (shut down all master-eligible nodes and then start them back up). By the way, what do you mean by \"gateway\"?\n", "created_at": "2015-02-26T20:22:07Z" }, { "body": "~~We have a dedicated node for this: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html~~\n\nOn the full cluster restart - do you mean to shut them all down at the same time and bring them back up? Does that mean there will be data loss in the time it takes to bring them both back up?\n", "created_at": "2015-02-26T20:26:49Z" }, { "body": "@sarkis not sure I am following. Are you using non-local gateway on one of your nodes? Which gateway are you using there? How did you configure this node comparing to all other nodes? Could you send me your complete cluster state (you can send it to igor.motov@elasticsearch.com if you don't want to post it here).\n", "created_at": "2015-02-26T20:38:05Z" }, { "body": "@imotov sorry for the confusion - our 3rd non-data, non-master node we refer to as a gateway is the entry point to the cluster. It's one and only purpose is to pass traffic through to the data nodes. 
Sending you the full cluster state via e-mail.\n", "created_at": "2015-02-26T20:41:36Z" }, { "body": "OK, so \"gateway\" node doesn't have anything to do with [gateways](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html) and it's simply a client node. Got it! I will be waiting for the cluster state to continue investigation. Thanks!\n\nFull cluster restart will make cluster unavailable for indexing and searching while nodes are restarting and shards are recovering. Depending on how you are indexing data it might or might not cause loss of data (if your client has a retry logic to reindex failed records, it shouldn't lead to any data loss).\n", "created_at": "2015-02-26T20:53:06Z" }, { "body": "@imotov sent the cluster state - let me know if I can do anything else. I am looking for a window we can do a full restart to see if this will fix our problem.\n", "created_at": "2015-02-26T21:48:46Z" }, { "body": "In case others come here with the same issue. @imotov's updated cleanup script (https://github.com/imotov/elasticsearch-snapshot-cleanup) for 1.4.4 worked in clearing up the ABORTED snapshots.\n", "created_at": "2015-02-27T19:25:44Z" }, { "body": "Since it seems to be a different problem, I have created a [separate issue](https://github.com/elasticsearch/elasticsearch/issues/9924) for it.\n", "created_at": "2015-02-27T19:41:26Z" } ], "number": 7980, "title": "Snapshot/Restore: snapshot with missing metadata file cannot be deleted" }
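For anyone who ran into the original report: the operation in question is the plain snapshot delete call, using the repository and snapshot names from the discussion. With the fix in the following PR it no longer fails when the snapshot's metadata blob is missing from the repository:

```
curl -XDELETE "http://localhost:9200/_snapshot/backups/snapshot_name"
```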
{ "body": "...data file\n\nFixes #7980\n", "number": 7981, "review_comments": [], "title": "Make it possible to delete snapshots with missing metadata file" }
{ "commits": [ { "message": "Snapshot/Restore: make it possible to delete snapshots with missing metadata file\n\nFixes #7980" } ], "files": [ { "diff": "@@ -44,6 +44,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.xcontent.*;\n+import org.elasticsearch.index.shard.IndexShardException;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.snapshots.IndexShardRepository;\n import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardRepository;\n@@ -262,7 +263,12 @@ public void initializeSnapshot(SnapshotId snapshotId, ImmutableList<String> indi\n @Override\n public void deleteSnapshot(SnapshotId snapshotId) {\n Snapshot snapshot = readSnapshot(snapshotId);\n- MetaData metaData = readSnapshotMetaData(snapshotId, snapshot.indices(), true);\n+ MetaData metaData = null;\n+ try {\n+ metaData = readSnapshotMetaData(snapshotId, snapshot.indices(), true);\n+ } catch (SnapshotException ex) {\n+ logger.warn(\"cannot read metadata for snapshot [{}]\", ex, snapshotId);\n+ }\n try {\n String blobName = snapshotBlobName(snapshotId);\n // Delete snapshot file first so we wouldn't end up with partially deleted snapshot that looks OK\n@@ -289,10 +295,17 @@ public void deleteSnapshot(SnapshotId snapshotId) {\n } catch (IOException ex) {\n logger.warn(\"[{}] failed to delete metadata for index [{}]\", ex, snapshotId, index);\n }\n- IndexMetaData indexMetaData = metaData.index(index);\n- if (indexMetaData != null) {\n- for (int i = 0; i < indexMetaData.getNumberOfShards(); i++) {\n- indexShardRepository.delete(snapshotId, new ShardId(index, i));\n+ if (metaData != null) {\n+ IndexMetaData indexMetaData = metaData.index(index);\n+ if (indexMetaData != null) {\n+ for (int i = 0; i < indexMetaData.getNumberOfShards(); i++) {\n+ ShardId shardId = new ShardId(index, i);\n+ try {\n+ indexShardRepository.delete(snapshotId, shardId);\n+ } catch (IndexShardException | SnapshotException ex) {\n+ logger.warn(\"[{}] failed to delete shard data for shard [{}]\", ex, snapshotId, shardId);\n+ }\n+ }\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java", "status": "modified" }, { "diff": "@@ -740,6 +740,41 @@ public void deleteSnapshotWithMissingIndexAndShardMetadataTest() throws Exceptio\n assertThrows(client.admin().cluster().prepareGetSnapshots(\"test-repo\").addSnapshots(\"test-snap-1\"), SnapshotMissingException.class);\n }\n \n+ @Test\n+ public void deleteSnapshotWithMissingMetadataTest() throws Exception {\n+ Client client = client();\n+\n+ File repo = newTempDir(LifecycleScope.SUITE);\n+ logger.info(\"--> creating repository at \" + repo.getAbsolutePath());\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", repo)\n+ .put(\"compress\", false)\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ createIndex(\"test-idx-1\", \"test-idx-2\");\n+ ensureYellow();\n+ logger.info(\"--> indexing some data\");\n+ indexRandom(true,\n+ client().prepareIndex(\"test-idx-1\", \"doc\").setSource(\"foo\", \"bar\"),\n+ client().prepareIndex(\"test-idx-2\", \"doc\").setSource(\"foo\", \"bar\"));\n+\n+ logger.info(\"--> creating snapshot\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n+ 
assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ logger.info(\"--> delete index metadata and shard metadata\");\n+ File metadata = new File(repo, \"metadata-test-snap-1\");\n+ assertThat(metadata.delete(), equalTo(true));\n+\n+ logger.info(\"--> delete snapshot\");\n+ client.admin().cluster().prepareDeleteSnapshot(\"test-repo\", \"test-snap-1\").get();\n+\n+ logger.info(\"--> make sure snapshot doesn't exist\");\n+ assertThrows(client.admin().cluster().prepareGetSnapshots(\"test-repo\").addSnapshots(\"test-snap-1\"), SnapshotMissingException.class);\n+ }\n+\n @Test\n @TestLogging(\"snapshots:TRACE\")\n public void snapshotClosedIndexTest() throws Exception {", "filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java", "status": "modified" } ] }
{ "body": "Hey,\n\nas we tried to understand the missing filter\nwe came across a null pointer exception in creating a default null_value in the mapping.\n\nThe NPE is thrown if we try to set the \"null_value\" to null.\n\n```\nPUT /foo\n{\n \"mappings\": {\n \"bar\": {\n \"properties\": {\n \"exception\": {\n \"null_value\": null,\n \"type\": \"integer\"\n }\n }\n }\n }\n}\n```\n", "comments": [ { "body": "I can try.\n\nIs there anything else that can be provided? A ErrorStack maybe?\n\nCheers.\n", "created_at": "2014-08-22T06:33:28Z" } ], "number": 7273, "title": "NPE in case of null_value creation with value as null" }
{ "body": "The mapping parser should throw an exception if \"null_value\" is set to `null`. \n\nFixes #7273\n\n``` bash\nPUT /foo\n{\n \"mappings\": {\n \"bar\": {\n \"properties\": {\n \"exception\": {\n \"null_value\": null,\n \"type\": \"integer\"\n }\n }\n }\n }\n}\n```\n\n```\n{\n \"error\": \"MapperParsingException[mapping [bar]]; nested: MapperParsingException[Property [null_value] cannot be null.]; \",\n \"status\": 400\n}\n```\n\nAs a side note, looks like there are a lot of other properties which could be set to null and throw similar errors: https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java#L146-L187\n\nI'm not sure the best way to handle these other than an explicit check for nullness inside each clause...which seems gross.\n", "number": 7978, "review_comments": [], "title": "Throw exception if null_value is set to `null`" }
{ "commits": [ { "message": "Mapper: Throw exception if null_value is set to null" } ], "files": [ { "diff": "@@ -111,6 +111,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(nodeBooleanValue(propNode));\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/BooleanFieldMapper.java", "status": "modified" }, { "diff": "@@ -108,6 +108,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(nodeByteValue(propNode));\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/ByteFieldMapper.java", "status": "modified" }, { "diff": "@@ -152,6 +152,9 @@ public static class TypeParser implements Mapper.TypeParser {\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(propNode.toString());\n } else if (propName.equals(\"format\")) {\n builder.dateTimeFormatter(parseDateTimeFormatter(propNode));", "filename": "src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -111,6 +111,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = entry.getKey();\n Object propNode = entry.getValue();\n if (propName.equals(\"nullValue\") || propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(nodeDoubleValue(propNode));\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/DoubleFieldMapper.java", "status": "modified" }, { "diff": "@@ -112,6 +112,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(nodeFloatValue(propNode));\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/FloatFieldMapper.java", "status": "modified" }, { "diff": "@@ -108,6 +108,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(nodeIntegerValue(propNode));\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/IntegerFieldMapper.java", "status": "modified" }, { "diff": "@@ -108,6 +108,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = 
Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(nodeLongValue(propNode));\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/LongFieldMapper.java", "status": "modified" }, { "diff": "@@ -110,6 +110,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(nodeShortValue(propNode));\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/ShortFieldMapper.java", "status": "modified" }, { "diff": "@@ -155,6 +155,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(propNode.toString());\n } else if (propName.equals(\"search_quote_analyzer\")) {\n NamedAnalyzer analyzer = parserContext.analysisService().analyzer(propNode.toString());", "filename": "src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java", "status": "modified" }, { "diff": "@@ -143,6 +143,9 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n if (propName.equals(\"null_value\")) {\n+ if (propNode == null) {\n+ throw new MapperParsingException(\"Property [null_value] cannot be null.\");\n+ }\n builder.nullValue(propNode.toString());\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java", "status": "modified" }, { "diff": "@@ -0,0 +1,65 @@\n+package org.elasticsearch.index.mapper.null_value;\n+\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n+import org.elasticsearch.index.service.IndexService;\n+import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n+import org.junit.Test;\n+\n+import static org.hamcrest.Matchers.*;\n+\n+/**\n+ */\n+public class NullValueTests extends ElasticsearchSingleNodeTest {\n+\n+ @Test\n+ public void testNullNull_Value() throws Exception {\n+ IndexService indexService = createIndex(\"test\", ImmutableSettings.settingsBuilder().build());\n+ String[] typesToTest = {\"integer\", \"long\", \"double\", \"float\", \"short\", \"date\", \"ip\", \"string\", \"boolean\", \"byte\"};\n+\n+ for (String type : typesToTest) {\n+ String mapping = XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startObject(\"type\")\n+ .startObject(\"properties\")\n+ .startObject(\"numeric\")\n+ .field(\"type\", type)\n+ .field(\"null_value\", (String) null)\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().string();\n+\n+ try {\n+ indexService.mapperService().documentMapperParser().parse(mapping);\n+ fail(\"Test should have failed because [null_value] was null.\");\n+ } catch (MapperParsingException e) {\n+ assertThat(e.getMessage(), equalTo(\"Property [null_value] cannot be null.\"));\n+ }\n+\n+ }\n+\n+\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/mapper/null_value/NullValueTests.java", "status": "added" } ] }
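The PR's side note observes that many other mapping properties could hit the same problem and that an explicit check in every clause "seems gross". A hypothetical consolidation (not part of the PR; the helper name is invented for illustration) would be a single utility reused by each `TypeParser`:

```java
// Hypothetical helper (not in the patch): centralizes the null check that the PR
// currently repeats in each field mapper's TypeParser clause.
private static Object requireNonNullProperty(String propName, Object propNode) {
    if (propNode == null) {
        throw new MapperParsingException("Property [" + propName + "] cannot be null.");
    }
    return propNode;
}

// Example use inside a parser clause:
// builder.nullValue(nodeIntegerValue(requireNonNullProperty("null_value", propNode)));
```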
{ "body": "Creating an alias with a null `index` matches all indices instead of none:\n\n```\n$ curl -XPUT localhost:9200/index1\n{\"acknowledged\":true}\n$ curl -XPUT localhost:9200/index2\n{\"acknowledged\":true}\n$ curl localhost:9200/_aliases -d '{\"actions\":[{\"add\":{\"alias\":\"empty-alias\", \"index\":null}}]}'\n{\"acknowledged\":true}\n$ curl localhost:9200/_aliases\n{\"index1\":{\"aliases\":{\"empty-alias\":{}}},\"index2\":{\"aliases\":{\"empty-alias\":{}}}}\n```\n\nThis behavior is surprising and dangerous. Someone who didn't know what to expect might accidentally delete all their indices:\n\n```\n$ curl -XDELETE localhost:9200/empty-alias\n{\"acknowledged\":true}\n$ curl localhost:9200/index1/_count\n{\"error\":\"IndexMissingException[[index1] missing]\",\"status\":404}\n$ curl localhost:9200/index2/_count\n{\"error\":\"IndexMissingException[[index1] missing]\",\"status\":404}\n```\n\nWhen given a null `index` value, the alias action should raise an error. Creating an alias to match all indices should require a `*` wildcard as the `index` value.\n", "comments": [ { "body": "I learned in #7864 that empty aliases are technically impossible, so I edited the issue to recommend raising an error.\n", "created_at": "2014-09-29T18:00:30Z" }, { "body": "Fixed in https://github.com/elasticsearch/elasticsearch/commit/ee857bc07302b5bfcb327b3be9d07d9c6de28254, thanks for the bug report!\n\nEdit: this change was backed out due to a testing problem. See correction below\n", "created_at": "2014-10-10T20:54:36Z" } ], "number": 7863, "title": "Null index in alias POST matches all indices" }
{ "body": "Fixes a bug where alias creation would allow `null` for index name, which thereby applied the alias to _all_ indices. This patch makes the validator throw an exception if the index is null.\n\nFixes #7863\n\n``` bash\nPOST /_aliases\n{\n \"actions\": [\n {\n \"add\": {\n \"alias\": \"empty-alias\",\n \"index\": null\n }\n }\n ]\n}\n```\n\n``` json\n{\n \"error\": \"ActionRequestValidationException[Validation Failed: 1: Alias action [add]: [index] may not be null;]\",\n \"status\": 400\n}\n```\n\nEdit:\n\nThe reason this bug wasn't caught by the existing tests is because the old test for nullness only validated against a cluster which had zero indices. The null index is translated into \"_all\", and since there are no indices, this fails because the index doesn't exist. So the test passes.\n\nHowever, as soon as you add an index, \"_all\" resolves and you get the situation described in the original bug report: null index is accepted by the alias, resolves to \"_all\" and gets applied to everything.\n", "number": 7976, "review_comments": [], "title": "Aliases: Throw exception if index is null when creating alias" }
{ "commits": [ { "message": "Throw exception if index is null when creating alias\n\nFixes #7863" }, { "message": "Remove redundant check on indices[]\n\nThe Remove section independently checked the size of indices, but now that\nit is being checked at the end of verification it is not needed inside the\nRemove clause" }, { "message": "Reconfigure tests to more accurately describe the scenarios\n\nThe reason this bug wasn't caught by the existing tests is because the old test for nullness\nonly validated against a cluster which had zero indices. The null index is translated into \"_all\",\nand since there are no indices, this fails because the index doesn't exist.\n\nHowever, as soon as you add an index, \"_all\" resolves and you get the situation described in the\noriginal bug report (but which is not tested by the suite)." } ], "files": [ { "diff": "@@ -294,10 +294,6 @@ public ActionRequestValidationException validate() {\n + \"]: [alias] may not be empty string\", validationException);\n }\n }\n- if (CollectionUtils.isEmpty(aliasAction.indices)) {\n- validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: indices may not be empty\", validationException);\n- }\n }\n if (!CollectionUtils.isEmpty(aliasAction.indices)) {\n for (String index : aliasAction.indices) {\n@@ -306,6 +302,9 @@ public ActionRequestValidationException validate() {\n + \"]: [index] may not be empty string\", validationException);\n }\n }\n+ } else {\n+ validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n+ + \"]: [index] may not be null\", validationException);\n }\n }\n return validationException;", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java", "status": "modified" }, { "diff": "@@ -744,9 +744,31 @@ public void testIndicesGetAliases() throws Exception {\n assertThat(existsResponse.exists(), equalTo(false));\n }\n \n- @Test(expected = IndexMissingException.class)\n- public void testAddAliasNullIndex() {\n- admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(null, \"alias1\")).get();\n+ @Test\n+ public void testAddAliasNullWithoutExistingIndices() {\n+ try {\n+ assertAcked(admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(null, \"alias1\")));\n+ fail(\"create alias should have failed due to null index\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(e.getMessage(), equalTo(\"Validation Failed: 1: Alias action [add]: [index] may not be null;\"));\n+ }\n+\n+ }\n+\n+ @Test\n+ public void testAddAliasNullWithExistingIndices() throws Exception {\n+ logger.info(\"--> creating index [test]\");\n+ createIndex(\"test\");\n+ ensureGreen();\n+\n+ logger.info(\"--> aliasing index [null] with [empty-alias]\");\n+\n+ try {\n+ assertAcked(admin().indices().prepareAliases().addAlias((String) null, \"empty-alias\"));\n+ fail(\"create alias should have failed due to null index\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(e.getMessage(), equalTo(\"Validation Failed: 1: Alias action [add]: [index] may not be null;\"));\n+ }\n }\n \n @Test(expected = ActionRequestValidationException.class)\n@@ -771,7 +793,7 @@ public void testAddAliasNullAliasNullIndex() {\n assertTrue(\"Should throw \" + ActionRequestValidationException.class.getSimpleName(), false);\n } catch (ActionRequestValidationException e) {\n assertThat(e.validationErrors(), 
notNullValue());\n- assertThat(e.validationErrors().size(), equalTo(1));\n+ assertThat(e.validationErrors().size(), equalTo(2));\n }\n }\n \n@@ -928,7 +950,7 @@ public void testAddAliasWithFilterNoMapping() throws Exception {\n .addAlias(\"test\", \"a\", FilterBuilders.matchAllFilter()) // <-- no fail, b/c no field mentioned\n .get();\n }\n-\n+ \n private void checkAliases() {\n GetAliasesResponse getAliasesResponse = admin().indices().prepareGetAliases(\"alias1\").get();\n assertThat(getAliasesResponse.getAliases().get(\"test\").size(), equalTo(1));", "filename": "src/test/java/org/elasticsearch/aliases/IndexAliasesTests.java", "status": "modified" } ] }
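With the validation in place, a request that really is meant to cover every index has to say so explicitly. An illustrative Java-client sketch in the same style as the test above (the wildcard behaviour is what the original report recommends, not something verified here):

```java
// Passing null is now rejected at validation time:
// admin().indices().prepareAliases()
//         .addAlias((String) null, "empty-alias").get();
//   -> "Validation Failed: 1: Alias action [add]: [index] may not be null"

// If "all indices" is genuinely intended, the index must be named explicitly,
// e.g. via a wildcard pattern (as the original issue suggests):
admin().indices().prepareAliases()
        .addAlias("*", "catch-all-alias")
        .get();
```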
{ "body": "Hello.\n\nI'm running: Version: 1.3.0, Build: 1265b14/2014-07-23T13:46:36Z, JVM: 1.7.0_65\n\nI'm trying to do a simple significant terms aggregation and I get an exception:\n\nCommand:\n\n```\ncurl -s XGET 'localhost:9200/test_index/job/_search?pretty' -d '{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"terms\": {\n \"profession\": [\n \"4980\"\n ]\n }\n }\n }\n },\n \"size\": 0,\n \"aggs\": {\n \"term_cloud\": {\n \"significant_terms\": {\n \"field\": \"fulltext\"\n }\n }\n }\n}'\n```\n\nResponse:\n\n```\n{\n \"error\" : \"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[w_0mi1Z-Tvm68G3dfr7rUg][test_index][2]: ElasticsearchIllegalArgumentException[supersetFreq > supersetSize, in JLHScore.score(..)]}{[w_0mi1Z-Tvm68G3dfr7rUg][test_index][3]: ElasticsearchIllegalArgumentException[supersetFreq > supersetSize, in JLHScore.score(..)]}{[w_0mi1Z-Tvm68G3dfr7rUg][test_index][4]: ElasticsearchIllegalArgumentException[supersetFreq > supersetSize, in JLHScore.score(..)]}{[w_0mi1Z-Tvm68G3dfr7rUg][test_index][0]: ElasticsearchIllegalArgumentException[supersetFreq > supersetSize, in JLHScore.score(..)]}{[w_0mi1Z-Tvm68G3dfr7rUg][test_index][1]: ElasticsearchIllegalArgumentException[supersetFreq > supersetSize, in JLHScore.score(..)]}]\",\n \"status\" : 400\n}\n```\n\nConsole:\n\n```\n[2014-10-01 18:02:55,119][DEBUG][action.search.type ] [Joystick] [test_index][2], node[w_0mi1Z-Tvm68G3dfr7rUg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@4b58136a]\norg.elasticsearch.ElasticsearchIllegalArgumentException: supersetFreq > supersetSize, in JLHScore.score(..)\n at org.elasticsearch.search.aggregations.bucket.significant.heuristics.JLHScore.getScore(JLHScore.java:79)\n at org.elasticsearch.search.aggregations.bucket.significant.InternalSignificantTerms$Bucket.updateScore(InternalSignificantTerms.java:80)\n at org.elasticsearch.search.aggregations.bucket.significant.GlobalOrdinalsSignificantTermsAggregator.buildAggregation(GlobalOrdinalsSignificantTermsAggregator.java:102)\n at org.elasticsearch.search.aggregations.bucket.significant.GlobalOrdinalsSignificantTermsAggregator.buildAggregation(GlobalOrdinalsSignificantTermsAggregator.java:41)\n at org.elasticsearch.search.aggregations.AggregationPhase.execute(AggregationPhase.java:133)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:171)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:261)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nThe mapping for the two fields involved:\n\n```\n\"profession\" : {\"type\" : \"integer\"},\n\"fulltext\" : {\"type\" : \"string\"},\n```\n", "comments": [ { "body": "Thanks for reporting this I have reproduced the failure and working on the fix\n", "created_at": "2014-10-01T19:10:43Z" }, { "body": "@brwe Wouldn't mind discussing this - one fix for the above is pretty simple (use a background superset doc-count of IndexReader.maxDoc that includes deleted docs) but it 
introduces a lot of test failures into your SignificantTermsSignificanceScoreTests class which is very sensitive to score changes caused by the randomized testing framework's habit of deleting docs then re-inserting docs.\n", "created_at": "2014-10-01T20:13:15Z" } ], "number": 7951, "title": "Assertion failure when doing a significant terms aggregation." }
{ "body": "If an index contains a highly popular term and many deleted documents it can cause an error as reported in issue #7951 \n\nThe background count for docs should include deleted docs otherwise a term’s docFreq (which includes deleted docs) can exceed the number of docs reported in the index and cause an exception.\n\nThe randomisation that deletes documents is also removed from tests as this doc-accounting change would mean the specific scores being expected in tests would now be subject to random variability and so fail.\n\nCloses #7951\n", "number": 7960, "review_comments": [ { "body": "maybe just leave a comment that we get maxDoc on purpose instead of numDocs, so that someone doesn't rollback that change in the future if numDocs feels more logical to him?\n", "created_at": "2014-10-02T22:15:36Z" } ], "title": "Significant terms can throw error on index with deleted docs." }
{ "commits": [ { "message": "Aggs fix - background count for docs should include deleted docs otherwise a term’s docFreq (which includes deleted docs) can exceed the number of docs reported in the index and cause an exception.\nThe randomisation that deletes documents is also removed from tests as this doc-accounting change would mean the specific scores being expected in tests would now be subject to random variability and so fail.\n\nCloses #7951" }, { "message": "Added test and comment following review" } ], "files": [ { "diff": "@@ -71,7 +71,9 @@ public FilterableTermsEnum(IndexReader reader, String field, int docsEnumFlag, @\n }\n this.docsEnumFlag = docsEnumFlag;\n if (filter == null) {\n- numDocs = reader.numDocs();\n+ // Important - need to use the doc count that includes deleted docs\n+ // or we have this issue: https://github.com/elasticsearch/elasticsearch/issues/7951\n+ numDocs = reader.maxDoc();\n }\n ApplyAcceptedDocsFilter acceptedDocsFilter = filter == null ? null : new ApplyAcceptedDocsFilter(filter);\n List<AtomicReaderContext> leaves = reader.leaves();", "filename": "src/main/java/org/elasticsearch/common/lucene/index/FilterableTermsEnum.java", "status": "modified" }, { "diff": "@@ -262,6 +262,49 @@ public void testXContentResponse() throws Exception {\n assertThat(responseBuilder.string(), equalTo(result));\n \n }\n+ \n+ @Test\n+ public void testDeletesIssue7951() throws Exception {\n+ String settings = \"{\\\"index.number_of_shards\\\": 1, \\\"index.number_of_replicas\\\": 0}\";\n+ String mappings = \"{\\\"doc\\\": {\\\"properties\\\":{\\\"text\\\": {\\\"type\\\":\\\"string\\\",\\\"index\\\":\\\"not_analyzed\\\"}}}}\";\n+ assertAcked(prepareCreate(INDEX_NAME).setSettings(settings).addMapping(\"doc\", mappings));\n+ String[] cat1v1 = {\"constant\", \"one\"};\n+ String[] cat1v2 = {\"constant\", \"uno\"};\n+ String[] cat2v1 = {\"constant\", \"two\"};\n+ String[] cat2v2 = {\"constant\", \"duo\"};\n+ List<IndexRequestBuilder> indexRequestBuilderList = new ArrayList<>();\n+ indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"1\")\n+ .setSource(TEXT_FIELD, cat1v1, CLASS_FIELD, \"1\"));\n+ indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"2\")\n+ .setSource(TEXT_FIELD, cat1v2, CLASS_FIELD, \"1\"));\n+ indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"3\")\n+ .setSource(TEXT_FIELD, cat2v1, CLASS_FIELD, \"2\"));\n+ indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"4\")\n+ .setSource(TEXT_FIELD, cat2v2, CLASS_FIELD, \"2\"));\n+ indexRandom(true, false, indexRequestBuilderList);\n+ \n+ // Now create some holes in the index with selective deletes caused by updates.\n+ // This is the scenario that caused this issue https://github.com/elasticsearch/elasticsearch/issues/7951\n+ // Scoring algorithms throw exceptions if term docFreqs exceed the reported size of the index \n+ // from which they are taken so need to make sure this doesn't happen.\n+ String[] text = cat1v1;\n+ indexRequestBuilderList.clear();\n+ for (int i = 0; i < 50; i++) {\n+ text = text == cat1v2 ? 
cat1v1 : cat1v2;\n+ indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"1\").setSource(TEXT_FIELD, text, CLASS_FIELD, \"1\"));\n+ }\n+ indexRandom(true, false, indexRequestBuilderList);\n+ \n+ SearchResponse response1 = client().prepareSearch(INDEX_NAME).setTypes(DOC_TYPE)\n+ .addAggregation(new TermsBuilder(\"class\")\n+ .field(CLASS_FIELD)\n+ .subAggregation(\n+ new SignificantTermsBuilder(\"sig_terms\")\n+ .field(TEXT_FIELD)\n+ .minDocCount(1)))\n+ .execute()\n+ .actionGet();\n+ } \n \n @Test\n public void testBackgroundVsSeparateSet() throws Exception {\n@@ -347,7 +390,7 @@ private void index01Docs(String type, String settings) throws ExecutionException\n .setSource(TEXT_FIELD, gb, CLASS_FIELD, \"0\"));\n indexRequestBuilderList.add(client().prepareIndex(INDEX_NAME, DOC_TYPE, \"7\")\n .setSource(TEXT_FIELD, \"0\", CLASS_FIELD, \"0\"));\n- indexRandom(true, indexRequestBuilderList);\n+ indexRandom(true, false, indexRequestBuilderList);\n }\n \n @Test\n@@ -413,6 +456,6 @@ private void indexEqualTestData() throws ExecutionException, InterruptedExceptio\n indexRequestBuilders.add(client().prepareIndex(\"test\", \"doc\", \"\" + i)\n .setSource(\"class\", parts[0], \"text\", parts[1]));\n }\n- indexRandom(true, indexRequestBuilders);\n+ indexRandom(true, false, indexRequestBuilders);\n }\n }", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/SignificantTermsSignificanceScoreTests.java", "status": "modified" }, { "diff": "@@ -69,7 +69,7 @@ public void shardMinDocCountSignificantTermsTest() throws Exception {\n addTermsDocs(\"5\", 3, 1, indexBuilders);//low score but high doc freq\n addTermsDocs(\"6\", 3, 1, indexBuilders);\n addTermsDocs(\"7\", 0, 3, indexBuilders);// make sure the terms all get score > 0 except for this one\n- indexRandom(true, indexBuilders);\n+ indexRandom(true, false, indexBuilders);\n \n // first, check that indeed when not setting the shardMinDocCount parameter 0 terms are returned\n SearchResponse response = client().prepareSearch(index)\n@@ -126,7 +126,7 @@ public void shardMinDocCountTermsTest() throws Exception {\n addTermsDocs(\"4\", 1, indexBuilders);\n addTermsDocs(\"5\", 3, indexBuilders);//low score but high doc freq\n addTermsDocs(\"6\", 3, indexBuilders);\n- indexRandom(true, indexBuilders);\n+ indexRandom(true, false, indexBuilders);\n \n // first, check that indeed when not setting the shardMinDocCount parameter 0 terms are returned\n SearchResponse response = client().prepareSearch(index)", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/TermsShardMinDocCountTests.java", "status": "modified" } ] }
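The one-line production change hinges on which document count the background statistics are measured against. A standalone illustration of the two Lucene `IndexReader` counters involved (not taken from the patch):

```java
import org.apache.lucene.index.IndexReader;

// numDocs() counts live documents only (deletions excluded), while maxDoc() also
// counts deleted-but-not-yet-merged documents. Term statistics such as docFreq are
// not corrected for deletions, so they must be compared against maxDoc(); using
// numDocs() lets docFreq exceed the reported index size, which is exactly the
// "supersetFreq > supersetSize" failure in the issue above.
static void printDocCounts(IndexReader reader) {
    System.out.println("live docs (numDocs): " + reader.numDocs());
    System.out.println("all docs  (maxDoc):  " + reader.maxDoc());
}
```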
{ "body": "I made a mistake while i deployed a script. I created a file (instead of a directory) named \"scripts\" to save my script. And now, even if scripts is a directory (with the script inside), i got the following exception\n\n`[2014-09-11 13:55:15,117][WARN ][threadpool ] [Barristan] failed to run org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor@5985e4fa\njava.lang.NullPointerException\n at org.elasticsearch.watcher.FileWatcher$FileObserver.updateChildren(FileWatcher.java:184)\n at org.elasticsearch.watcher.FileWatcher$FileObserver.checkAndNotify(FileWatcher.java:93)\n at org.elasticsearch.watcher.FileWatcher.doCheckAndNotify(FileWatcher.java:47)\n at org.elasticsearch.watcher.AbstractResourceWatcher.checkAndNotify(AbstractResourceWatcher.java:43)\n at org.elasticsearch.watcher.ResourceWatcherService$ResourceMonitor.run(ResourceWatcherService.java:102)\n at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:440)\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)\n at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)\n at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)\n at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)`\n", "comments": [ { "body": "ES v1.2.1\n", "created_at": "2014-09-12T09:53:02Z" }, { "body": "I'm not able to reproduce this. I added a test, which passes on 1.x and the 1.2.1 release. Can you list of set of reproducible steps?\n", "created_at": "2014-09-12T23:38:46Z" } ], "number": 7689, "title": "NullPointerException on ResourceWatcherService" }
{ "body": "Fixes #7689\n", "number": 7953, "review_comments": [ { "body": "maybe log something here?\n", "created_at": "2014-10-01T20:36:59Z" }, { "body": "This seems dangerous? Shouldn't this be similar to the other try/catches below?\n", "created_at": "2014-10-01T20:41:40Z" }, { "body": "Yeah, I missed it somehow. Fixed.\n", "created_at": "2014-10-02T12:30:14Z" } ], "title": "Fix NPE in ScriptService when script file with no extension is deleted" }
{ "commits": [ { "message": "Fix NPE in ScriptService when script file with no extension is deleted\n\nFixes #7689" } ], "files": [ { "diff": "@@ -537,8 +537,10 @@ public void onFileCreated(File file) {\n @Override\n public void onFileDeleted(File file) {\n Tuple<String, String> scriptNameExt = scriptNameExt(file);\n- logger.info(\"removing script file [{}]\", file.getAbsolutePath());\n- staticCache.remove(scriptNameExt.v1());\n+ if (scriptNameExt != null) {\n+ logger.info(\"removing script file [{}]\", file.getAbsolutePath());\n+ staticCache.remove(scriptNameExt.v1());\n+ }\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/script/ScriptService.java", "status": "modified" }, { "diff": "@@ -18,6 +18,9 @@\n */\n package org.elasticsearch.watcher;\n \n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.Loggers;\n+\n import java.io.File;\n import java.util.Arrays;\n \n@@ -30,6 +33,8 @@ public class FileWatcher extends AbstractResourceWatcher<FileChangesListener> {\n \n private FileObserver rootFileObserver;\n \n+ private static final ESLogger logger = Loggers.getLogger(FileWatcher.class);\n+\n /**\n * Creates new file watcher on the given directory\n */\n@@ -228,32 +233,49 @@ private void deleteChild(int child) {\n \n private void onFileCreated(boolean initial) {\n for (FileChangesListener listener : listeners()) {\n- if (initial) {\n- listener.onFileInit(file);\n- } else {\n- listener.onFileCreated(file);\n+ try {\n+ if (initial) {\n+ listener.onFileInit(file);\n+ } else {\n+ listener.onFileCreated(file);\n+ }\n+ } catch (Throwable t) {\n+ logger.warn(\"cannot notify file changes listener\", t);\n }\n }\n }\n \n private void onFileDeleted() {\n for (FileChangesListener listener : listeners()) {\n- listener.onFileDeleted(file);\n+ try {\n+ listener.onFileDeleted(file);\n+ } catch (Throwable t) {\n+ logger.warn(\"cannot notify file changes listener\", t);\n+ }\n }\n }\n \n private void onFileChanged() {\n for (FileChangesListener listener : listeners()) {\n- listener.onFileChanged(file);\n+ try {\n+ listener.onFileChanged(file);\n+ } catch (Throwable t) {\n+ logger.warn(\"cannot notify file changes listener\", t);\n+ }\n+\n }\n }\n \n private void onDirectoryCreated(boolean initial) {\n for (FileChangesListener listener : listeners()) {\n- if (initial) {\n- listener.onDirectoryInit(file);\n- } else {\n- listener.onDirectoryCreated(file);\n+ try {\n+ if (initial) {\n+ listener.onDirectoryInit(file);\n+ } else {\n+ listener.onDirectoryCreated(file);\n+ }\n+ } catch (Throwable t) {\n+ logger.warn(\"cannot notify file changes listener\", t);\n }\n }\n children = listChildren(initial);\n@@ -265,7 +287,11 @@ private void onDirectoryDeleted() {\n deleteChild(child);\n }\n for (FileChangesListener listener : listeners()) {\n- listener.onDirectoryDeleted(file);\n+ try {\n+ listener.onDirectoryDeleted(file);\n+ } catch (Throwable t) {\n+ logger.warn(\"cannot notify file changes listener\", t);\n+ }\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/watcher/FileWatcher.java", "status": "modified" }, { "diff": "@@ -137,6 +137,26 @@ public <W extends ResourceWatcher> WatcherHandle<W> add(W watcher, Frequency fre\n }\n }\n \n+ public void notifyNow() {\n+ notifyNow(Frequency.MEDIUM);\n+ }\n+\n+ public void notifyNow(Frequency frequency) {\n+ switch (frequency) {\n+ case LOW:\n+ lowMonitor.run();\n+ break;\n+ case MEDIUM:\n+ mediumMonitor.run();\n+ break;\n+ case HIGH:\n+ highMonitor.run();\n+ break;\n+ default:\n+ throw new 
ElasticsearchIllegalArgumentException(\"Unknown frequency [\" + frequency + \"]\");\n+ }\n+ }\n+\n static class ResourceMonitor implements Runnable {\n \n final TimeValue interval;\n@@ -155,7 +175,7 @@ private <W extends ResourceWatcher> WatcherHandle<W> add(W watcher) {\n }\n \n @Override\n- public void run() {\n+ public synchronized void run() {\n for(ResourceWatcher watcher : watchers) {\n watcher.checkAndNotify();\n }", "filename": "src/main/java/org/elasticsearch/watcher/ResourceWatcherService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,137 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.script;\n+\n+import com.carrotsearch.ant.tasks.junit4.dependencies.com.google.common.collect.ImmutableSet;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.io.Streams;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.env.Environment;\n+import org.elasticsearch.search.lookup.SearchLookup;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.elasticsearch.watcher.ResourceWatcherService;\n+import org.junit.Test;\n+\n+import java.io.File;\n+import java.io.IOException;\n+import java.util.Map;\n+\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ *\n+ */\n+public class ScriptServiceTests extends ElasticsearchTestCase {\n+\n+ @Test\n+ public void testScriptsWithoutExtensions() throws IOException {\n+ File homeFolder = newTempDir();\n+ File genericConfigFolder = newTempDir();\n+\n+ Settings settings = settingsBuilder()\n+ .put(\"path.conf\", genericConfigFolder)\n+ .put(\"path.home\", homeFolder)\n+ .build();\n+ Environment environment = new Environment(settings);\n+\n+ ResourceWatcherService resourceWatcherService = new ResourceWatcherService(settings, null);\n+\n+ logger.info(\"--> setup script service\");\n+ ScriptService scriptService = new ScriptService(settings, environment, ImmutableSet.of(new TestEngineService()), resourceWatcherService);\n+ File scriptsFile = new File(genericConfigFolder, \"scripts\");\n+ assertThat(scriptsFile.mkdir(), equalTo(true));\n+ resourceWatcherService.notifyNow();\n+\n+ logger.info(\"--> setup two test files one with extension and another without\");\n+ File testFileNoExt = new File(scriptsFile, \"test_no_ext\");\n+ File testFileWithExt = new File(scriptsFile, \"test_script.tst\");\n+ Streams.copy(\"test_file_no_ext\".getBytes(\"UTF-8\"), testFileNoExt);\n+ Streams.copy(\"test_file\".getBytes(\"UTF-8\"), testFileWithExt);\n+ resourceWatcherService.notifyNow();\n+\n+ logger.info(\"--> verify 
that file with extension was correctly processed\");\n+ CompiledScript compiledScript = scriptService.compile(\"test\", \"test_script\", ScriptService.ScriptType.FILE);\n+ assertThat(compiledScript.compiled(), equalTo((Object) \"compiled_test_file\"));\n+\n+ logger.info(\"--> delete both files\");\n+ assertThat(testFileNoExt.delete(), equalTo(true));\n+ assertThat(testFileWithExt.delete(), equalTo(true));\n+ resourceWatcherService.notifyNow();\n+\n+ logger.info(\"--> verify that file with extension was correctly removed\");\n+ try {\n+ scriptService.compile(\"test\", \"test_script\", ScriptService.ScriptType.FILE);\n+ fail(\"the script test_script should no longe exist\");\n+ } catch (ElasticsearchIllegalArgumentException ex) {\n+ assertThat(ex.getMessage(), containsString(\"Unable to find on disk script test_script\"));\n+ }\n+ }\n+\n+ public static class TestEngineService implements ScriptEngineService {\n+\n+ @Override\n+ public String[] types() {\n+ return new String[] {\"test\"};\n+ }\n+\n+ @Override\n+ public String[] extensions() {\n+ return new String[] {\"test\", \"tst\"};\n+ }\n+\n+ @Override\n+ public boolean sandboxed() {\n+ return false;\n+ }\n+\n+ @Override\n+ public Object compile(String script) {\n+ return \"compiled_\" + script;\n+ }\n+\n+ @Override\n+ public ExecutableScript executable(final Object compiledScript, @Nullable Map<String, Object> vars) {\n+ return null;\n+ }\n+\n+ @Override\n+ public SearchScript search(Object compiledScript, SearchLookup lookup, @Nullable Map<String, Object> vars) {\n+ return null;\n+ }\n+\n+ @Override\n+ public Object execute(Object compiledScript, Map<String, Object> vars) {\n+ return null;\n+ }\n+\n+ @Override\n+ public Object unwrap(Object value) {\n+ return null;\n+ }\n+\n+ @Override\n+ public void close() {\n+\n+ }\n+ }\n+\n+}", "filename": "src/test/java/org/elasticsearch/script/ScriptServiceTests.java", "status": "added" } ] }
{ "body": "[2014-09-11 11:47:51,946][DEBUG][index.search.slowlog.query] [n020] [index][3] took[3.1s], took_millis[3141], types[type], stats[], search_type[QUERY_THEN_FETCH], total_shards[5], source[{\"size\":12,\"from\":0,\"sort\":{\"ats\":\"desc\"},\"query\":{\"filtered\":{\"query\":{\"query_string\":{\"query\":\"ten words string\",\"fields\":[\"title\",\"tags\"],\"default_operator\":\"OR\"}},\"filter\":{\"bool\":{\"must\":[{\"range\":{\"ats\":{\"lte\":1410428944}}},{\"terms\":{\"aid\":[27]}}],\"must_not\":[{\"ids\":{\"values\":[[\"ten-words-dash-separated-string\"]]}}]}}}}}], extra_source[]\n", "comments": [ { "body": "Could you provide the REST command you used? \nIt would make it easier for meto test and understand the issue.\n", "created_at": "2014-09-18T01:45:41Z" }, { "body": "It was a simple Search query with params as query.\n", "created_at": "2014-09-18T13:33:14Z" }, { "body": "This is the pertinent clause:\n\n```\n{\"ids\":{\"values\":[[\"ten-words-dash-separated-string\"]]}\n```\n\nNote the double array\n", "created_at": "2014-09-25T18:36:38Z" }, { "body": "Yes, I mentioned the double array which is a bug from my script, however as @dakrone pointed out, ES should throw a parsing error in this case which it did not.\n", "created_at": "2014-09-26T12:26:02Z" } ], "number": 7686, "title": "ES not throwing parse exception `ids` query double-nested array" }
{ "body": "Adds a check to make sure that all ids in the query are either strings\nor numbers. This is to prevent the case where a user accidentally\nspecifies:\n\n\"ids\": [[\"1\", \"2\"]]\n\n(note the double array)\n\nWith this change, an exception will be thrown since the second \"[\" is\nnot a string or number, it is a Token.START_ARRAY.\n\nFixes #7686\n", "number": 7945, "review_comments": [], "title": "Be stricter parsing ids for ids query" }
{ "commits": [ { "message": "Be stricter parsing ids for ids query\n\nAdds a check to make sure that all ids in the query are either strings\nor numbers. This is to prevent the case where a user accidentally\nspecifies:\n\n\"ids\": [[\"1\", \"2\"]]\n\n(note the double array)\n\nWith this change, an exception will be thrown since the second \"[\" is\nnot a string or number, it is a Token.START_ARRAY.\n\nFixes #7686" } ], "files": [ { "diff": "@@ -70,11 +70,17 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n if (\"values\".equals(currentFieldName)) {\n idsProvided = true;\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n- BytesRef value = parser.utf8BytesOrNull();\n- if (value == null) {\n- throw new QueryParsingException(parseContext.index(), \"No value specified for term filter\");\n+ if ((token == XContentParser.Token.VALUE_STRING) ||\n+ (token == XContentParser.Token.VALUE_NUMBER)) {\n+ BytesRef value = parser.utf8BytesOrNull();\n+ if (value == null) {\n+ throw new QueryParsingException(parseContext.index(), \"No value specified for term filter\");\n+ }\n+ ids.add(value);\n+ } else {\n+ throw new QueryParsingException(parseContext.index(),\n+ \"Illegal value for id, expecting a string or number, got: \" + token);\n }\n- ids.add(value);\n }\n } else if (\"types\".equals(currentFieldName) || \"type\".equals(currentFieldName)) {\n types = new ArrayList<>();\n@@ -125,4 +131,3 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n return query;\n }\n }\n-", "filename": "src/main/java/org/elasticsearch/index/query/IdsQueryParser.java", "status": "modified" }, { "diff": "@@ -2700,4 +2700,24 @@ public void testQueryStringParserCache() throws Exception {\n }\n }\n \n+ @Test // see #7686.\n+ public void testIdsQueryWithInvalidValues() throws Exception {\n+ createIndex(\"test\");\n+ indexRandom(true, false, client().prepareIndex(\"test\", \"type\", \"1\").setSource(\"body\", \"foo\"));\n+ try {\n+ client().prepareSearch(\"test\")\n+ .setTypes(\"type\")\n+ .setQuery(\"{\\n\" +\n+ \" \\\"ids\\\": {\\n\" +\n+ \" \\\"values\\\": [[\\\"1\\\"]]\\n\" +\n+ \" }\\n\" +\n+ \"}\")\n+ .get();\n+ fail(\"query is invalid and should have produced a parse exception\");\n+ } catch (Exception e) {\n+ assertThat(\"query could not be parsed due to bad format: \" + e.getMessage(),\n+ e.getMessage().contains(\"Illegal value for id, expecting a string or number, got: START_ARRAY\"),\n+ equalTo(true));\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java", "status": "modified" } ] }
{ "body": "This is how the /usr/share/elasticsearch directory looks like:\n\n```\nroot@host:/usr/share/elasticsearch# ls -l\ntotal 36\n-rw-r--r-- 1 root root 77 Sep 29 13:41 all-signatures.txt\ndrwxr-xr-x 2 root root 4096 Sep 29 18:10 bin\n-rw-r--r-- 1 root root 4441 Sep 29 13:41 core-signatures.txt\ndrwxr-xr-x 3 root root 4096 Sep 29 18:10 lib\n-rw-r--r-- 1 root root 150 Sep 29 13:41 NOTICE.txt\n-rw-r--r-- 1 root root 8421 Sep 29 13:41 README.textile\n-rw-r--r-- 1 root root 0 Sep 29 13:41 test-signatures.txt\nroot@pangaea-mw1:/usr/share/elasticsearch#\n```\n\nThe txt files are relicts caused by packaging *.txt into the debian file. There are two solutions:\n- exclude those files explicit from packaging\n- alternatively rename those files to something like *.sig and change pom.xml\n\nI prefer the second variant. *.txt is stupid for signatures files. This was just copied from Lucene where they are in a separate subdirectory.\n", "comments": [ { "body": "+1\n\nLow priority but probably worth it because it can't be that hard.\n", "created_at": "2014-09-29T18:54:35Z" }, { "body": "Thanks. Moving to dev-tools is also fine, somewhat like in lucene :-)\n", "created_at": "2014-09-30T07:49:12Z" }, { "body": "I also ported this to `1.3` and `1.4` branches\n", "created_at": "2014-09-30T09:01:27Z" } ], "number": 7917, "title": "Debian package contains forbidden-apis signatures files in /usr/share/elasticsearch" }
{ "body": "This avoids the files showing up in the binary release, since .txt files are copied.\n\ncloses #7917\n", "number": 7921, "review_comments": [], "title": "Move forbidden api signature files to dev-tools." }
{ "commits": [ { "message": "Move forbidden api signature files to dev-tools.\n\nThis avoids the files showing up in the binary release, since .txt files\nare copied.\n\ncloses #7917" } ], "files": [ { "diff": "@@ -1213,8 +1213,8 @@\n <bundledSignature>jdk-system-out</bundledSignature>\n </bundledSignatures>\n <signaturesFiles>\n- <signaturesFile>core-signatures.txt</signaturesFile>\n- <signaturesFile>all-signatures.txt</signaturesFile>\n+ <signaturesFile>dev-tools/forbidden/core-signatures.txt</signaturesFile>\n+ <signaturesFile>dev-tools/forbidden/all-signatures.txt</signaturesFile>\n </signaturesFiles>\n <signatures>${forbidden.signatures}</signatures>\n </configuration>\n@@ -1242,8 +1242,8 @@\n <!-- end exclude for GC simulation -->\n </excludes>\n <signaturesFiles>\n- <signaturesFile>test-signatures.txt</signaturesFile>\n- <signaturesFile>all-signatures.txt</signaturesFile>\n+ <signaturesFile>dev-tools/forbidden/test-signatures.txt</signaturesFile>\n+ <signaturesFile>dev-tools/forbidden/all-signatures.txt</signaturesFile>\n </signaturesFiles>\n <signatures>${forbidden.test.signatures}</signatures>\n </configuration>", "filename": "pom.xml", "status": "modified" } ] }
{ "body": "The implementation of \"forcing\" optimize right now will cause the ElasticsearchMergePolicy to return one giant OneMerge if `force==true`. However, this can cause IO issues. The existing MP impls chain merging through \"cascading\", so that no OneMerge merges more than some X segments (e.g. X = 30 for TieredMP). Forcing should do the same...\n", "comments": [ { "body": "This is quite nasty: if a shard has many segments, this can suck up lots of RAM, file descriptors, take much longer to run than if we let the merge policy do separate merges ... I wonder how often users are \"forcing\" their optimize.\n", "created_at": "2014-09-27T08:37:42Z" } ], "number": 7904, "title": "Internal: Optimize with `force=true` can sidestep max segments to merge at once used by delegate" }
{ "body": "This does the following:\n- Make 'force' flag only build a merge if the delegate MP returned no merges\n- Add async handling for 'flush' when 'waitForMerges' is false\n- Remove flush at the beginning of optimize. This is something the user can\n do if they wish, before calling optimize.\n\ncloses #7886\ncloses #7904\n", "number": 7920, "review_comments": [], "title": "Fix optimize behavior with 'force' and 'flush' flags." }
{ "commits": [ { "message": "Fix optimize behavior with 'force' and 'flush' flags.\n\nThis does the following:\n* Make 'force' flag only build a merge if the delegate MP returned no merges\n* Add async handling for 'flush' when 'waitForMerges' is false\n* Remove flush at the beginning of optimize. This is something the user can\n do if they wish, before calling optimize.\n\ncloses #7886\ncloses #7904" } ], "files": [ { "diff": "@@ -51,6 +51,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.index.analysis.AnalysisService;\n@@ -1009,12 +1010,20 @@ public void maybeMerge() throws EngineException {\n throw new OptimizeFailedEngineException(shardId, t);\n }\n }\n+ \n+ private void waitForMerges(boolean flushAfter) {\n+ try {\n+ currentIndexWriter().waitForMerges();\n+ } catch (IOException e) {\n+ throw new OptimizeFailedEngineException(shardId, e);\n+ }\n+ if (flushAfter) {\n+ flush(new Flush().force(true).waitIfOngoing(true));\n+ }\n+ }\n \n @Override\n public void optimize(Optimize optimize) throws EngineException {\n- if (optimize.flush()) {\n- flush(new Flush().force(true).waitIfOngoing(true));\n- }\n if (optimizeMutex.compareAndSet(false, true)) {\n ElasticsearchMergePolicy elasticsearchMergePolicy = null;\n try (InternalLock _ = readLock.acquire()) {\n@@ -1054,18 +1063,23 @@ public void optimize(Optimize optimize) throws EngineException {\n }\n optimizeMutex.set(false);\n }\n-\n }\n+ \n // wait for the merges outside of the read lock\n if (optimize.waitForMerge()) {\n- try {\n- currentIndexWriter().waitForMerges();\n- } catch (IOException e) {\n- throw new OptimizeFailedEngineException(shardId, e);\n- }\n- }\n- if (optimize.flush()) {\n- flush(new Flush().force(true).waitIfOngoing(true));\n+ waitForMerges(optimize.flush());\n+ } else if (optimize.flush()) {\n+ // we only need to monitor merges for async calls if we are going to flush\n+ threadPool.executor(ThreadPool.Names.OPTIMIZE).execute(new AbstractRunnable() {\n+ @Override\n+ public void onFailure(Throwable t) {\n+ logger.error(\"Exception while waiting for merges asynchronously after optimize\", t);\n+ }\n+ @Override\n+ protected void doRun() throws Exception {\n+ waitForMerges(true);\n+ }\n+ });\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java", "status": "modified" }, { "diff": "@@ -196,20 +196,22 @@ public MergeSpecification findMerges(MergeTrigger mergeTrigger,\n public MergeSpecification findForcedMerges(SegmentInfos segmentInfos,\n int maxSegmentCount, Map<SegmentCommitInfo,Boolean> segmentsToMerge, IndexWriter writer)\n throws IOException {\n- if (force) {\n- List<SegmentCommitInfo> segments = Lists.newArrayList();\n- for (SegmentCommitInfo info : segmentInfos) {\n- if (segmentsToMerge.containsKey(info)) {\n- segments.add(info);\n- }\n- }\n- if (!segments.isEmpty()) {\n- MergeSpecification spec = new IndexUpgraderMergeSpecification();\n- spec.add(new OneMerge(segments));\n- return spec;\n- }\n- }\n- return upgradedMergeSpecification(delegate.findForcedMerges(segmentInfos, maxSegmentCount, segmentsToMerge, writer));\n+ MergeSpecification spec = delegate.findForcedMerges(segmentInfos, maxSegmentCount, segmentsToMerge, writer);\n+\n+ 
if (spec == null && force) {\n+ List<SegmentCommitInfo> segments = Lists.newArrayList();\n+ for (SegmentCommitInfo info : segmentInfos) {\n+ if (segmentsToMerge.containsKey(info)) {\n+ segments.add(info);\n+ }\n+ }\n+ if (!segments.isEmpty()) {\n+ spec = new IndexUpgraderMergeSpecification();\n+ spec.add(new OneMerge(segments));\n+ return spec;\n+ }\n+ }\n+ return upgradedMergeSpecification(spec);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/merge/policy/ElasticsearchMergePolicy.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.engine.internal;\n \n+import com.google.common.base.Predicate;\n import org.apache.log4j.AppenderSkeleton;\n import org.apache.log4j.Level;\n import org.apache.log4j.Logger;\n@@ -29,6 +30,7 @@\n import org.apache.lucene.document.TextField;\n import org.apache.lucene.index.CorruptIndexException;\n import org.apache.lucene.index.IndexDeletionPolicy;\n+import org.apache.lucene.index.SegmentInfos;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.store.AlreadyClosedException;\n@@ -374,7 +376,7 @@ public void afterMerge(OnGoingMerge merge) {\n }\n });\n \n- Engine engine = createEngine(engineSettingsService, store, createTranslog(), mergeSchedulerProvider);\n+ final Engine engine = createEngine(engineSettingsService, store, createTranslog(), mergeSchedulerProvider);\n engine.start();\n ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), Lucene.STANDARD_ANALYZER, B_1, false);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n@@ -410,26 +412,44 @@ public void afterMerge(OnGoingMerge merge) {\n index = new Engine.Index(null, newUid(\"4\"), doc);\n engine.index(index);\n engine.flush(new Engine.Flush());\n-\n+ final long gen1 = store.readLastCommittedSegmentsInfo().getGeneration();\n // now, optimize and wait for merges, see that we have no merge flag\n engine.optimize(new Engine.Optimize().flush(true).maxNumSegments(1).waitForMerge(true));\n \n for (Segment segment : engine.segments()) {\n assertThat(segment.getMergeId(), nullValue());\n }\n+ // we could have multiple underlying merges, so the generation may increase more than once\n+ assertTrue(store.readLastCommittedSegmentsInfo().getGeneration() > gen1);\n \n // forcing an optimize will merge this single segment shard\n final boolean force = randomBoolean();\n if (force) {\n waitTillMerge.set(new CountDownLatch(1));\n waitForMerge.set(new CountDownLatch(1));\n }\n- engine.optimize(new Engine.Optimize().flush(true).maxNumSegments(1).force(force).waitForMerge(false));\n+ final boolean flush = randomBoolean();\n+ final long gen2 = store.readLastCommittedSegmentsInfo().getGeneration();\n+ engine.optimize(new Engine.Optimize().flush(flush).maxNumSegments(1).force(force).waitForMerge(false));\n waitTillMerge.get().await();\n for (Segment segment : engine.segments()) {\n assertThat(segment.getMergeId(), force ? 
notNullValue() : nullValue());\n }\n waitForMerge.get().countDown();\n+ \n+ if (flush) {\n+ awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object o) {\n+ try {\n+ // we should have had just 1 merge, so last generation should be exact\n+ return store.readLastCommittedSegmentsInfo().getLastGeneration() == gen2;\n+ } catch (IOException e) {\n+ throw ExceptionsHelper.convertToRuntime(e);\n+ }\n+ }\n+ });\n+ }\n \n engine.close();\n }", "filename": "src/test/java/org/elasticsearch/index/engine/internal/InternalEngineTests.java", "status": "modified" } ] }
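In summary, the patch changes how the three optimize flags interact at the engine level. The calls below use the same internal `Engine.Optimize` builder the test exercises and are only meant to restate the intended semantics, not to run in isolation:

```java
// waitForMerge(true): block until the merge completes, then flush.
engine.optimize(new Engine.Optimize().maxNumSegments(1).waitForMerge(true).flush(true));

// waitForMerge(false) + flush(true): return immediately; the flush now happens
// asynchronously on the OPTIMIZE thread pool once the background merges finish
// (previously it ran right away, before the merge results existed).
engine.optimize(new Engine.Optimize().maxNumSegments(1).waitForMerge(false).flush(true));

// force(true): only builds an extra single-segment merge when the delegate merge
// policy returned no merges of its own, instead of always creating one giant merge.
engine.optimize(new Engine.Optimize().maxNumSegments(1).force(true).waitForMerge(false));
```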
{ "body": "The implementation of optimize currently has a conditional flush before doing the `forceMerge`, as well as a conditional flush afterwards. However, if `wait_for_merges=false`, the second flush is bogus. It should occur once the force merge is complete. \n", "comments": [ { "body": "Nice catch! It's silly to do that 2nd flush if we didn't wait ...\n", "created_at": "2014-09-26T16:04:45Z" } ], "number": 7886, "title": "Optimize with `flush=true` only works when `wait_for_merges=true`" }
{ "body": "This does the following:\n- Make 'force' flag only build a merge if the delegate MP returned no merges\n- Add async handling for 'flush' when 'waitForMerges' is false\n- Remove flush at the beginning of optimize. This is something the user can\n do if they wish, before calling optimize.\n\ncloses #7886\ncloses #7904\n", "number": 7920, "review_comments": [], "title": "Fix optimize behavior with 'force' and 'flush' flags." }
{ "commits": [ { "message": "Fix optimize behavior with 'force' and 'flush' flags.\n\nThis does the following:\n* Make 'force' flag only build a merge if the delegate MP returned no merges\n* Add async handling for 'flush' when 'waitForMerges' is false\n* Remove flush at the beginning of optimize. This is something the user can\n do if they wish, before calling optimize.\n\ncloses #7886\ncloses #7904" } ], "files": [ { "diff": "@@ -51,6 +51,7 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.concurrent.AbstractRunnable;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.index.analysis.AnalysisService;\n@@ -1009,12 +1010,20 @@ public void maybeMerge() throws EngineException {\n throw new OptimizeFailedEngineException(shardId, t);\n }\n }\n+ \n+ private void waitForMerges(boolean flushAfter) {\n+ try {\n+ currentIndexWriter().waitForMerges();\n+ } catch (IOException e) {\n+ throw new OptimizeFailedEngineException(shardId, e);\n+ }\n+ if (flushAfter) {\n+ flush(new Flush().force(true).waitIfOngoing(true));\n+ }\n+ }\n \n @Override\n public void optimize(Optimize optimize) throws EngineException {\n- if (optimize.flush()) {\n- flush(new Flush().force(true).waitIfOngoing(true));\n- }\n if (optimizeMutex.compareAndSet(false, true)) {\n ElasticsearchMergePolicy elasticsearchMergePolicy = null;\n try (InternalLock _ = readLock.acquire()) {\n@@ -1054,18 +1063,23 @@ public void optimize(Optimize optimize) throws EngineException {\n }\n optimizeMutex.set(false);\n }\n-\n }\n+ \n // wait for the merges outside of the read lock\n if (optimize.waitForMerge()) {\n- try {\n- currentIndexWriter().waitForMerges();\n- } catch (IOException e) {\n- throw new OptimizeFailedEngineException(shardId, e);\n- }\n- }\n- if (optimize.flush()) {\n- flush(new Flush().force(true).waitIfOngoing(true));\n+ waitForMerges(optimize.flush());\n+ } else if (optimize.flush()) {\n+ // we only need to monitor merges for async calls if we are going to flush\n+ threadPool.executor(ThreadPool.Names.OPTIMIZE).execute(new AbstractRunnable() {\n+ @Override\n+ public void onFailure(Throwable t) {\n+ logger.error(\"Exception while waiting for merges asynchronously after optimize\", t);\n+ }\n+ @Override\n+ protected void doRun() throws Exception {\n+ waitForMerges(true);\n+ }\n+ });\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java", "status": "modified" }, { "diff": "@@ -196,20 +196,22 @@ public MergeSpecification findMerges(MergeTrigger mergeTrigger,\n public MergeSpecification findForcedMerges(SegmentInfos segmentInfos,\n int maxSegmentCount, Map<SegmentCommitInfo,Boolean> segmentsToMerge, IndexWriter writer)\n throws IOException {\n- if (force) {\n- List<SegmentCommitInfo> segments = Lists.newArrayList();\n- for (SegmentCommitInfo info : segmentInfos) {\n- if (segmentsToMerge.containsKey(info)) {\n- segments.add(info);\n- }\n- }\n- if (!segments.isEmpty()) {\n- MergeSpecification spec = new IndexUpgraderMergeSpecification();\n- spec.add(new OneMerge(segments));\n- return spec;\n- }\n- }\n- return upgradedMergeSpecification(delegate.findForcedMerges(segmentInfos, maxSegmentCount, segmentsToMerge, writer));\n+ MergeSpecification spec = delegate.findForcedMerges(segmentInfos, maxSegmentCount, segmentsToMerge, writer);\n+\n+ 
if (spec == null && force) {\n+ List<SegmentCommitInfo> segments = Lists.newArrayList();\n+ for (SegmentCommitInfo info : segmentInfos) {\n+ if (segmentsToMerge.containsKey(info)) {\n+ segments.add(info);\n+ }\n+ }\n+ if (!segments.isEmpty()) {\n+ spec = new IndexUpgraderMergeSpecification();\n+ spec.add(new OneMerge(segments));\n+ return spec;\n+ }\n+ }\n+ return upgradedMergeSpecification(spec);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/merge/policy/ElasticsearchMergePolicy.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.engine.internal;\n \n+import com.google.common.base.Predicate;\n import org.apache.log4j.AppenderSkeleton;\n import org.apache.log4j.Level;\n import org.apache.log4j.Logger;\n@@ -29,6 +30,7 @@\n import org.apache.lucene.document.TextField;\n import org.apache.lucene.index.CorruptIndexException;\n import org.apache.lucene.index.IndexDeletionPolicy;\n+import org.apache.lucene.index.SegmentInfos;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.TermQuery;\n import org.apache.lucene.store.AlreadyClosedException;\n@@ -374,7 +376,7 @@ public void afterMerge(OnGoingMerge merge) {\n }\n });\n \n- Engine engine = createEngine(engineSettingsService, store, createTranslog(), mergeSchedulerProvider);\n+ final Engine engine = createEngine(engineSettingsService, store, createTranslog(), mergeSchedulerProvider);\n engine.start();\n ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), Lucene.STANDARD_ANALYZER, B_1, false);\n Engine.Index index = new Engine.Index(null, newUid(\"1\"), doc);\n@@ -410,26 +412,44 @@ public void afterMerge(OnGoingMerge merge) {\n index = new Engine.Index(null, newUid(\"4\"), doc);\n engine.index(index);\n engine.flush(new Engine.Flush());\n-\n+ final long gen1 = store.readLastCommittedSegmentsInfo().getGeneration();\n // now, optimize and wait for merges, see that we have no merge flag\n engine.optimize(new Engine.Optimize().flush(true).maxNumSegments(1).waitForMerge(true));\n \n for (Segment segment : engine.segments()) {\n assertThat(segment.getMergeId(), nullValue());\n }\n+ // we could have multiple underlying merges, so the generation may increase more than once\n+ assertTrue(store.readLastCommittedSegmentsInfo().getGeneration() > gen1);\n \n // forcing an optimize will merge this single segment shard\n final boolean force = randomBoolean();\n if (force) {\n waitTillMerge.set(new CountDownLatch(1));\n waitForMerge.set(new CountDownLatch(1));\n }\n- engine.optimize(new Engine.Optimize().flush(true).maxNumSegments(1).force(force).waitForMerge(false));\n+ final boolean flush = randomBoolean();\n+ final long gen2 = store.readLastCommittedSegmentsInfo().getGeneration();\n+ engine.optimize(new Engine.Optimize().flush(flush).maxNumSegments(1).force(force).waitForMerge(false));\n waitTillMerge.get().await();\n for (Segment segment : engine.segments()) {\n assertThat(segment.getMergeId(), force ? 
notNullValue() : nullValue());\n }\n waitForMerge.get().countDown();\n+ \n+ if (flush) {\n+ awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object o) {\n+ try {\n+ // we should have had just 1 merge, so last generation should be exact\n+ return store.readLastCommittedSegmentsInfo().getLastGeneration() == gen2;\n+ } catch (IOException e) {\n+ throw ExceptionsHelper.convertToRuntime(e);\n+ }\n+ }\n+ });\n+ }\n \n engine.close();\n }", "filename": "src/test/java/org/elasticsearch/index/engine/internal/InternalEngineTests.java", "status": "modified" } ] }
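The same knobs are reachable through the Java client's optimize request. The following is a hedged sketch: the builder setter names (`setMaxNumSegments`, `setForce`, `setWaitForMerge`, `setFlush`) are assumed from the 1.x `OptimizeRequest` fields and should be checked against the client version in use.

```java
import org.elasticsearch.client.Client;

// Hedged sketch: the setter names below are assumptions derived from the 1.x
// OptimizeRequest fields (maxNumSegments, force, waitForMerge, flush).
class OptimizeViaClientSketch {
    static void forceSingleSegment(Client client, String index) {
        client.admin().indices().prepareOptimize(index)
                .setMaxNumSegments(1)
                .setForce(true)         // merge even if the shard is already a single segment
                .setWaitForMerge(false) // return immediately; merging continues in the background
                .setFlush(true)         // with this PR, the flush happens once merges complete
                .get();
    }
}
```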
{ "body": "In `/_aliases` it looks like this:\n\n``` json\n{\n \"statistics-20131006\" : {\n \"aliases\" : {\n \"statistics-20131006\" : { }\n }\n }\n}\n```\n\nI restored index `statistics-20131006-compacted` that had alias to `statistics-20131006` with removal of `-compacted` suffix. Now I cannot remove those aliases.\n\nProbably restore process should check for such things. @imotov.\n", "comments": [ { "body": "@imotov, thanks!\n", "created_at": "2014-10-05T18:54:33Z" }, { "body": "@bobrik thank you for reporting it!\n", "created_at": "2014-10-05T19:38:29Z" } ], "number": 7915, "title": "Restore with rewriting could create alias with the same name as index" }
{ "body": "...ses\n\nFixes #7915\n", "number": 7918, "review_comments": [], "title": "Make sure indices cannot be renamed into restored aliases" }
{ "commits": [ { "message": "Snapshot/Restore: Make sure indices cannot be renamed into restored aliases\n\nFixes #7915" } ], "files": [ { "diff": "@@ -46,14 +46,12 @@\n import org.elasticsearch.transport.*;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.HashMap;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n import java.util.concurrent.CopyOnWriteArrayList;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static com.google.common.collect.Maps.newHashMap;\n+import static com.google.common.collect.Sets.newHashSet;\n import static org.elasticsearch.cluster.metadata.MetaDataIndexStateService.INDEX_CLOSED_BLOCK;\n \n /**\n@@ -146,6 +144,7 @@ public ClusterState execute(ClusterState currentState) {\n ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks());\n RoutingTable.Builder rtBuilder = RoutingTable.builder(currentState.routingTable());\n final ImmutableMap<ShardId, RestoreMetaData.ShardRestoreStatus> shards;\n+ Set<String> aliases = newHashSet();\n if (!renamedIndices.isEmpty()) {\n // We have some indices to restore\n ImmutableMap.Builder<ShardId, RestoreMetaData.ShardRestoreStatus> shardsBuilder = ImmutableMap.builder();\n@@ -166,6 +165,10 @@ public ClusterState execute(ClusterState currentState) {\n if (!request.includeAliases() && !snapshotIndexMetaData.aliases().isEmpty()) {\n // Remove all aliases - they shouldn't be restored\n indexMdBuilder.removeAllAliases();\n+ } else {\n+ for (ObjectCursor<String> alias : snapshotIndexMetaData.aliases().keys()) {\n+ aliases.add(alias.value);\n+ }\n }\n IndexMetaData updatedIndexMetaData = indexMdBuilder.build();\n if (partial) {\n@@ -187,6 +190,10 @@ public ClusterState execute(ClusterState currentState) {\n for (ObjectCursor<AliasMetaData> alias : currentIndexMetaData.aliases().values()) {\n indexMdBuilder.putAlias(alias.value);\n }\n+ } else {\n+ for (ObjectCursor<String> alias : snapshotIndexMetaData.aliases().keys()) {\n+ aliases.add(alias.value);\n+ }\n }\n IndexMetaData updatedIndexMetaData = indexMdBuilder.index(renamedIndex).build();\n rtBuilder.addAsRestore(updatedIndexMetaData, restoreSource);\n@@ -209,12 +216,14 @@ public ClusterState execute(ClusterState currentState) {\n shards = ImmutableMap.of();\n }\n \n+ checkAliasNameConflicts(renamedIndices, aliases);\n+\n // Restore global state if needed\n restoreGlobalStateIfRequested(mdBuilder);\n \n if (completed(shards)) {\n // We don't have any indices to restore - we are done\n- restoreInfo = new RestoreInfo(request.name(), ImmutableList.<String>copyOf(renamedIndices.keySet()),\n+ restoreInfo = new RestoreInfo(request.name(), ImmutableList.copyOf(renamedIndices.keySet()),\n shards.size(), shards.size() - failedShards(shards));\n }\n \n@@ -223,6 +232,14 @@ public ClusterState execute(ClusterState currentState) {\n return ClusterState.builder(updatedState).routingResult(routingResult).build();\n }\n \n+ private void checkAliasNameConflicts(Map<String, String> renamedIndices, Set<String> aliases) {\n+ for(Map.Entry<String, String> renamedIndex: renamedIndices.entrySet()) {\n+ if (aliases.contains(renamedIndex.getKey())) {\n+ throw new SnapshotRestoreException(snapshotId, \"cannot rename index [\" + renamedIndex.getValue() + \"] into [\" + renamedIndex.getKey() + \"] because of conflict with an alias with the same name\");\n+ }\n+ }\n+ }\n+\n private void populateIgnoredShards(String index, IntSet ignoreShards) {\n for (SnapshotShardFailure failure : snapshot.shardFailures()) 
{\n if (index.equals(failure.index())) {", "filename": "src/main/java/org/elasticsearch/snapshots/RestoreService.java", "status": "modified" }, { "diff": "@@ -776,9 +776,15 @@ public void renameOnRestoreTest() throws Exception {\n .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n .put(\"location\", newTempDir(LifecycleScope.SUITE))));\n \n- createIndex(\"test-idx-1\", \"test-idx-2\");\n+ createIndex(\"test-idx-1\", \"test-idx-2\", \"test-idx-3\");\n ensureGreen();\n \n+ assertAcked(client.admin().indices().prepareAliases()\n+ .addAlias(\"test-idx-1\", \"alias-1\")\n+ .addAlias(\"test-idx-2\", \"alias-2\")\n+ .addAlias(\"test-idx-3\", \"alias-3\")\n+ );\n+\n logger.info(\"--> indexing some data\");\n for (int i = 0; i < 100; i++) {\n index(\"test-idx-1\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n@@ -823,6 +829,9 @@ public void renameOnRestoreTest() throws Exception {\n .setRenamePattern(\"(.+-2)\").setRenameReplacement(\"$1-copy\").setWaitForCompletion(true).execute().actionGet();\n assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n \n+ logger.info(\"--> delete indices\");\n+ cluster().wipeIndices(\"test-idx-1\", \"test-idx-1-copy\", \"test-idx-2\", \"test-idx-2-copy\");\n+\n logger.info(\"--> try renaming indices using the same name\");\n try {\n client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setRenamePattern(\"(.+)\").setRenameReplacement(\"same-name\").setWaitForCompletion(true).execute().actionGet();\n@@ -846,6 +855,38 @@ public void renameOnRestoreTest() throws Exception {\n } catch (InvalidIndexNameException ex) {\n // Expected\n }\n+\n+ logger.info(\"--> try renaming indices into existing alias name\");\n+ try {\n+ client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setIndices(\"test-idx-1\").setRenamePattern(\".+\").setRenameReplacement(\"alias-3\").setWaitForCompletion(true).execute().actionGet();\n+ fail(\"Shouldn't be here\");\n+ } catch (InvalidIndexNameException ex) {\n+ // Expected\n+ }\n+\n+ logger.info(\"--> try renaming indices into existing alias of itself\");\n+ try {\n+ client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setIndices(\"test-idx-1\").setRenamePattern(\"test-idx\").setRenameReplacement(\"alias\").setWaitForCompletion(true).execute().actionGet();\n+ fail(\"Shouldn't be here\");\n+ } catch (SnapshotRestoreException ex) {\n+ // Expected\n+ }\n+\n+ logger.info(\"--> try renaming indices into existing alias of another restored index\");\n+ try {\n+ client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setIndices(\"test-idx-1\", \"test-idx-2\").setRenamePattern(\"test-idx-1\").setRenameReplacement(\"alias-2\").setWaitForCompletion(true).execute().actionGet();\n+ fail(\"Shouldn't be here\");\n+ } catch (SnapshotRestoreException ex) {\n+ // Expected\n+ }\n+\n+ logger.info(\"--> try renaming indices into existing alias of itself, but don't restore aliases \");\n+ restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\")\n+ .setIndices(\"test-idx-1\").setRenamePattern(\"test-idx\").setRenameReplacement(\"alias\")\n+ .setWaitForCompletion(true).setIncludeAliases(false).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+\n+\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java", "status": "modified" } ] }
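As the added test cases also show, the conflict check only applies when aliases are restored along with the index. A short sketch of the escape hatch follows (repository, snapshot, and index names are illustrative): excluding aliases from the restore makes the rename legal again, because no alias with the conflicting name is ever created.

```java
import org.elasticsearch.client.Client;

// Illustrative names; mirrors the last scenario in the PR's test: with aliases
// excluded from the restore, renaming an index onto one of its snapshotted
// alias names no longer conflicts.
class RestoreWithoutAliasesSketch {
    static void restore(Client client) {
        client.admin().cluster()
                .prepareRestoreSnapshot("backups", "snap-1")
                .setIndices("statistics-20131006-compacted")
                .setRenamePattern("(.+)-compacted")
                .setRenameReplacement("$1")
                .setIncludeAliases(false)    // skip aliases, so no name conflict can arise
                .setWaitForCompletion(true)
                .get();
    }
}
```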
{ "body": "An example scenario where this will help:\n\nWhen the node is shutdown via api call, for example\n(https://github.com/elasticsearch/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/ExternalNode.java#L219 )\nthen the call returns immediately even if the node is not actually shutdown yet\n(https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/action/admin/cluster/node/shutdown/TransportNodesShutdownAction.java#L226).\nIf at the same time the proces is killed, then the hook that would usually prevent\nuncontrolled shutdown\n(https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/bootstrap/Bootstrap.java#L75)\nhas no effect: It again calls close() which might then just return\nfor example because one of the lifecycles was moved to closed already.\n\nThe bwc test FunctionScoreBackwardCompatibilityTests.testSimpleFunctionScoreParsingWorks\nfailed because of this. The translog was not properly\nwritten because if the shutdown was called via api, the following process.destroy()\n(https://github.com/elasticsearch/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/ExternalNode.java#L225)\nkilled the node before the translog was written to disk.\n", "comments": [ { "body": "LGTM - left one minor comment\n", "created_at": "2014-09-26T09:18:05Z" }, { "body": "comment added. may I push?\n", "created_at": "2014-09-26T10:34:49Z" } ], "number": 7885, "title": "Make close() synchronized during node shutdown" }
{ "body": "This relates to #7885 which causes problems when the shutdown and the kill request happen concurrently.\n", "number": 7910, "review_comments": [], "title": "[TEST] Use Shutdown API only if nodes are on 1.3.3 or newer to prevent shutdown problems" }
{ "commits": [ { "message": "[TEST] Use Shutdown API only if nodes are on 1.3.3 or newer to prevent shutdown problems" } ], "files": [ { "diff": "@@ -20,6 +20,7 @@\n \n import com.google.common.base.Predicate;\n import org.apache.lucene.util.Constants;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;\n import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;\n import org.elasticsearch.client.Client;\n@@ -215,8 +216,10 @@ synchronized void stop(boolean forceKill) {\n if (running()) {\n try {\n if (forceKill == false && nodeInfo != null && random.nextBoolean()) {\n- // sometimes shut down gracefully\n- getClient().admin().cluster().prepareNodesShutdown(this.nodeInfo.getNode().id()).setExit(random.nextBoolean()).setDelay(\"0s\").get();\n+ if (nodeInfo.getVersion().onOrAfter(Version.V_1_3_3)) {\n+ // sometimes shut down gracefully\n+ getClient().admin().cluster().prepareNodesShutdown(this.nodeInfo.getNode().id()).setExit(random.nextBoolean()).setDelay(\"0s\").get();\n+ }\n }\n if (this.client != null) {\n client.close();", "filename": "src/test/java/org/elasticsearch/test/ExternalNode.java", "status": "modified" } ] }
{ "body": "Seems like reposting a mapping explicitly mentioning the default analyzer throws an error when it should not\nwith the following mapping.json\n\n``` json\n{\"foobar\":{\"dynamic\":false,\"properties\":{\"description\": {\n \"index\":\"analyzed\",\n \"analyzer\":\"default\",\n \"type\":\"string\"}}}}\n```\n\nputting the mapping the 1st time everything goes well\n\n`curl -XPUT -d @mapping.json http://localhost:9200/twitter/foobar/_mapping`\n\n``` json\n{\"ok\":true,\"acknowledged\":true}\n```\n\nbut the second time\n`curl -XPUT -d @mapping.json http://localhost:9200/twitter/foobar/_mapping`\n\n``` json\n{\"error\":\"MergeMappingException[Merge failed with failures {[mapper [description] has different index_analyzer, mapper [description] has different search_analyzer]}]\",\"status\":400}\n```\n\nwith this mapping instead there's no problem posting it over and over\n\n``` json\n{\"foobar\":{\"dynamic\":false,\"properties\":{\"description\": {\n \"index\":\"analyzed\",\n \"analyzer\":\"english\",\n \"type\":\"string\"}}}}\n```\n\nso seems like for some reason the default analyzer is not considered to be equal to itself\n", "comments": [ { "body": "Just submitted PR #7902. Looks like there's only one point where null index analyzers and \"default\"-named index analyzers were being treated as nonequivalent.\n", "created_at": "2014-09-26T19:53:53Z" }, { "body": "closed using https://github.com/elasticsearch/elasticsearch/pull/7902\n", "created_at": "2014-11-14T10:09:42Z" } ], "number": 2716, "title": "Mapping: Posting a mapping with default analyzer fails" }
{ "body": "Closes #2716\n\nWhen merging two mappings, default index analyzers are represented by either null or by a \"default\"-named index analyzer object. Fixed a spot where only the null representation was being considered as default.\n\nWrote a simple REST test to confirm the behavior is fixed. All tests pass.\n", "number": 7902, "review_comments": [ { "body": "Could we maybe check the resulting mapping is as expected by doing a get_mapping call rather than assuming that no exception means everything is as expected?\n", "created_at": "2014-10-10T13:50:12Z" } ], "title": "Posting a mapping with default analyzer fails" }
{ "commits": [ { "message": "Fixed behavior where two representations of the default index analyzer weren't being treated as equivalent. Added REST test to confirm fix.\nResolves issue #2716 \"posting a mapping with default analyzer fails\"" }, { "message": "Merge pull request #2 from elasticsearch/master\n\npull recent from master" }, { "message": "Fixed behavior where two representations of the default index analyzer weren't being treated as equivalent. Added REST test to confirm fix.\nResolves issue #2716 \"posting a mapping with default analyzer fails\"" }, { "message": "Merge pull request #3 from elasticsearch/master\n\npull to current" } ], "files": [ { "diff": "@@ -176,3 +176,33 @@ setup:\n catch: param\n indices.put_mapping: {}\n \n+---\n+\"post a mapping with default analyzer twice\":\n+\n+ - do:\n+ indices.put_mapping:\n+ index: test_index1\n+ type: test_type\n+ body:\n+ test_type:\n+ dynamic: false\n+ properties:\n+ text:\n+ index: analyzed\n+ analyzer: default\n+ type: string\n+\n+ - do:\n+ indices.put_mapping:\n+ index: test_index1\n+ type: test_type\n+ body:\n+ test_type:\n+ dynamic: false\n+ properties:\n+ text:\n+ index: analyzed\n+ analyzer: default\n+ type: string\n+\n+# no exception here means success", "filename": "rest-api-spec/test/indices.put_mapping/all_path_options.yaml", "status": "modified" }, { "diff": "@@ -594,15 +594,18 @@ public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappi\n if (this.fieldType().storeTermVectorPayloads() != fieldMergeWith.fieldType().storeTermVectorPayloads()) {\n mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different store_term_vector_payloads values\");\n }\n- if (this.indexAnalyzer == null) {\n- if (fieldMergeWith.indexAnalyzer != null) {\n+ \n+ // null and \"default\"-named index analyzers both mean the default is used\n+ if (this.indexAnalyzer == null || \"default\".equals(this.indexAnalyzer.name())) {\n+ if (fieldMergeWith.indexAnalyzer != null && !\"default\".equals(fieldMergeWith.indexAnalyzer.name())) {\n mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different index_analyzer\");\n }\n- } else if (fieldMergeWith.indexAnalyzer == null) {\n+ } else if (fieldMergeWith.indexAnalyzer == null || \"default\".equals(fieldMergeWith.indexAnalyzer.name())) {\n mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different index_analyzer\");\n } else if (!this.indexAnalyzer.name().equals(fieldMergeWith.indexAnalyzer.name())) {\n mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different index_analyzer\");\n }\n+ \n if (!this.names().equals(fieldMergeWith.names())) {\n mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different index_name\");\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java", "status": "modified" } ] }
{ "body": "To reproduce:\n\n```\ncurl -XDELETE 'http://localhost:9200/test/?pretty'\n\ncurl -XPUT 'http://localhost:9200/test/?pretty' -d '{\n \"settings\": {\n \"index.number_of_shards\": 1\n },\n \"mappings\": {\n \"document\": {\n \"_routing\" : {\n \"required\": true\n },\n \"properties\": {\n \"title\": {\n \"type\": \"string\"\n }\n }\n },\n \"entity\": {\n \"_parent\": {\n \"type\": \"document\"\n },\n \"_routing\" : {\n \"required\": true\n },\n \"properties\": {\n \"body\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}'\n\n\ncurl -XPOST 'http://localhost:9200/_bulk?pretty' --data-binary '\n{\"index\": {\"_index\": \"test\", \"_type\": \"document\", \"_id\" : \"1\", \"_routing\": \"1\"}}\n{\"title\": \"New document\"}\n{\"index\": {\"_index\": \"test\", \"_type\": \"entity\", \"_id\" : \"1\", \"_routing\": \"1\", \"_parent\": \"1\"}}\n{\"body\": \"document body\"}\n'\ncurl -XPOST 'http://localhost:9200/test/_refresh?pretty'\ncurl -XGET 'http://localhost:9200/test/document/_search?pretty' -d '\n{\n \"query\" : {\n \"bool\": {\n \"minimum_should_match\": 1,\n \"disable_coord\": true,\n \"should\": [\n {\n \"simple_query_string\": {\n \"query\": \"old document\",\n \"default_operator\": \"and\",\n \"fields\": [\"title^10\"],\n \"flags\": \"NONE\"\n }\n },\n {\n \"has_child\": {\n \"query\": {\n \"bool\": {\n \"minimum_should_match\": 1,\n \"disable_coord\": true,\n \"should\": [\n {\n \"simple_query_string\" : {\n \"query\" : \"\\\"document body\\\"~2\",\n \"default_operator\" : \"and\",\n \"flags\": \"PHRASE|SLOP\",\n \"fields\": [\"body^5\"]\n }\n },\n {\n \"simple_query_string\" : {\n \"query\" : \"document body\",\n \"default_operator\" : \"and\",\n \"fields\": [\"body\"],\n \"flags\": \"NONE\"\n }\n }\n ]\n }\n },\n \"type\": \"entity\",\n \"score_mode\": \"sum\"\n }\n }\n ]\n }\n }\n}\n'\n```\n\nThe search fails with `NumberFormatException[For input string: \\\"PHRASE|SLOP\\\"];` exception.\n\nThe problem occurs because `simple_query_string` parser is using `XContentParser.hasTextCharacters()` method to check for the presence of text in the token, while this method should be only used to detect internal presentation of the string. \n\nThe issue was originally reported on the mailing list https://groups.google.com/forum/#!topic/elasticsearch-ru/SeiifNQW-qo\n", "comments": [], "number": 7875, "title": "simple_query_string parser may fail with NumberFormatException while parsing flags" }
{ "body": "Incorrect usage of XContentParser.hasTextCharacters() can result in NumberFormatException as well as other possible issues in template query parser and phrase suggest parsers.\n\nFixes #7875\n", "number": 7876, "review_comments": [], "title": "Fix NumberFormatException in Simple Query String Query" }
{ "commits": [ { "message": "Fix NumberFormatException in Simple Query String Query\n\nIncorrect usage of XContentParser.hasTextCharacters() can result in NumberFormatException as well as other possible issues in template query parser and phrase suggest parsers.\n\nFixes #7875" } ], "files": [ { "diff": "@@ -154,6 +154,16 @@ enum NumberType {\n \n Object objectBytes() throws IOException;\n \n+ /**\n+ * Method that can be used to determine whether calling of textCharacters() would be the most efficient way to\n+ * access textual content for the event parser currently points to.\n+ *\n+ * Default implementation simply returns false since only actual\n+ * implementation class has knowledge of its internal buffering\n+ * state.\n+ *\n+ * This method shouldn't be used to check if the token contains text or not.\n+ */\n boolean hasTextCharacters();\n \n char[] textCharacters() throws IOException;", "filename": "src/main/java/org/elasticsearch/common/xcontent/XContentParser.java", "status": "modified" }, { "diff": "@@ -160,7 +160,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \"[\" + NAME + \"] default operator [\" + op + \"] is not allowed\");\n }\n } else if (\"flags\".equals(currentFieldName)) {\n- if (parser.hasTextCharacters()) {\n+ if (parser.currentToken() != XContentParser.Token.VALUE_NUMBER) {\n // Possible options are:\n // ALL, NONE, AND, OR, PREFIX, PHRASE, PRECEDENCE, ESCAPE, WHITESPACE, FUZZY, NEAR, SLOP\n flags = SimpleQueryStringFlag.resolveFlags(parser.text());", "filename": "src/main/java/org/elasticsearch/index/query/SimpleQueryStringParser.java", "status": "modified" }, { "diff": "@@ -115,7 +115,7 @@ public static TemplateContext parse(XContentParser parser, String paramsFieldnam\n currentFieldName = parser.currentName();\n } else if (parameterMap.containsKey(currentFieldName)) {\n type = parameterMap.get(currentFieldName);\n- if (token == XContentParser.Token.START_OBJECT && !parser.hasTextCharacters()) {\n+ if (token == XContentParser.Token.START_OBJECT) {\n XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent());\n builder.copyCurrentStructure(parser);\n templateNameOrTemplateContent = builder.string();", "filename": "src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java", "status": "modified" }, { "diff": "@@ -132,7 +132,7 @@ public SuggestionSearchContext.SuggestionContext parse(XContentParser parser, Ma\n fieldName = parser.currentName();\n } else if (\"query\".equals(fieldName) || \"filter\".equals(fieldName)) {\n String templateNameOrTemplateContent;\n- if (token == XContentParser.Token.START_OBJECT && !parser.hasTextCharacters()) {\n+ if (token == XContentParser.Token.START_OBJECT) {\n XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent());\n builder.copyCurrentStructure(parser);\n templateNameOrTemplateContent = builder.string();", "filename": "src/main/java/org/elasticsearch/search/suggest/phrase/PhraseSuggestParser.java", "status": "modified" } ] }
{ "body": "This commit changes the way how files are selected for retransmission\non recovery / restore. Today this happens on a per-file basis where the\nrather weak checksum and the file length in bytes is compared to check if\na file is identical. This is prone to fail in the case of a checksum collision\nwhich can happen under certain circumstances.\nThe changes in this commit move the identity comparison to a per-commit / per-segment\nlevel where files are only treated as identical iff all the other files in the\ncommit / segment are the same. This `all or nothing` strategy is reducing the chance for\na collision dramatically since we also use a strong hash to identify commits / segments\nbased on the content of the `.si` / `segments.N` file.\n", "comments": [ { "body": "@imotov @rmuir can you guys do a review here? I am not sure about the XContent changes in Backup/Restore would be good to get some ideas here...\n", "created_at": "2014-08-20T14:46:28Z" }, { "body": "The diffing logic here etc looks great to me.\n", "created_at": "2014-08-20T15:15:30Z" }, { "body": "I left a couple of minor comments. Otherwise, looks good to me.\n", "created_at": "2014-08-20T17:34:09Z" }, { "body": "@imotov I pushed a new commit including a test for the `FileInfo` serialization\n", "created_at": "2014-08-21T07:47:54Z" }, { "body": "LGTM\n", "created_at": "2014-08-21T14:58:17Z" }, { "body": "I think we have a small regression here for snapshot and restore since we don't have the hash for the segments in the already existing snapshot. I think we can read the hashes for those where we calculated them from the snapshot on the fly if necessary. I will open a followup for this as I already discussed this with @imotov \n", "created_at": "2014-08-21T15:25:19Z" } ], "number": 7351, "title": "Improve recovery / snapshot restoring file identity handling" }
{ "body": "We saw lots of issues with hash collisions which have been fixed in 1.4 but back then it seemed like an improvement rather than a bugfix. I think in the meanwhile we should really declare it as a bugfix and port it into `1.3` This backport PR includes fixes for #7434 & #7351 as well as related fixes. \n\nI ran BWC tests against 1.3.2 and 1.2.4 as well as the entire test suite multiple times but I think this need a deep review.\n", "number": 7857, "review_comments": [ { "body": "Should we update this comment to say 1.3.3? I suppose we should update master/1.x too, so we have accurate \"history\" in case we are debugging something.\n", "created_at": "2014-09-24T13:04:00Z" }, { "body": "Same as before, we might want to update this to 1.3.3 (in all branches)\n", "created_at": "2014-09-24T13:04:51Z" }, { "body": "We should remember to somehow back-back-port it to 1.4, which has comparison to `org.elasticsearch.Version.V_1_4_0_Beta1` here. Same in `writeTo()` a few lines below. \n", "created_at": "2014-09-24T13:15:55Z" }, { "body": "I already have a commit for this here https://github.com/s1monw/elasticsearch/commit/461103844b22f2b10a8b9b0c622a80346f91ecc0\n", "created_at": "2014-09-24T13:17:34Z" } ], "title": "Resiliency: Backport Recovery / Snapshot file identity improvements to 1.3" }
{ "commits": [ { "message": "Add more logging for testSnapshotAndRestore backward compatibility test" }, { "message": "[STORE] Improve recovery / snapshot restoring file identity handling\n\nThis commit changes the way how files are selected for retransmission\non recovery / restore. Today this happens on a per-file basis where the\nrather weak checksum and the file length in bytes is compared to check if\na file is identical. This is prone to fail in the case of a checksum collision\nwhich can happen under certain circumstances.\nThe changes in this commit move the identity comparsion to a per-commit / per-segment\nlevel where files are only treated as identical iff all the other files in the\ncommit / segment are the same. This \"all or nothing\" strategy is reducing the chance for\na collision dramatically since we also use a strong hash to identify commits / segments\nbased on the content of the \".si\" / \"segments.N\" file.\n\nCloses #7351" }, { "message": "[SNAPSHOT] Add BWC layer to .si / segments_N hashing\n\nDue to additional safety added in #7351 we compute now a strong hash for\n.si and segments_N files which are compared during snapshot / restore.\nOld snapshots don't have this hash which can cause unnecessary copying\nof large amount of data. This commit adds the ability to fetch this\nhash from the blob store if needed.\n\nCloses #7434" }, { "message": "[SNAPSHOT] Ensure BWC layer can read chunked blobs" }, { "message": "Use empty BytesRef if we read from <= 1.3.3" }, { "message": "[TEST] only bump replicas if we have enough nodes in the cluster" }, { "message": "[TEST] only expand to 1 replica in SnapshotBackwardsCompatibilityTest" }, { "message": "Use physical name to compare files from snapshot metadata\n\nThe comparison and read code in the BlobStoreIndexShardRepository\nused the physicalName and Name in reverse order. This caused\nSnapshotBackwardsCompatibilityTest to fail." }, { "message": "Adjust inline comments\n\nWe backported this feature to 1.3.3 as well so comments should be\nupdated." 
} ], "files": [ { "diff": "@@ -20,12 +20,15 @@\n package org.elasticsearch.index.snapshots.blobstore;\n \n import com.google.common.collect.ImmutableMap;\n+import com.google.common.collect.Iterables;\n import com.google.common.collect.Lists;\n import org.apache.lucene.index.CorruptIndexException;\n import org.apache.lucene.store.IOContext;\n import org.apache.lucene.store.IndexInput;\n import org.apache.lucene.store.IndexOutput;\n import org.apache.lucene.store.RateLimiter;\n+import org.apache.lucene.store.*;\n+import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.cluster.metadata.SnapshotId;\n@@ -49,13 +52,11 @@\n import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.repositories.RepositoryName;\n \n+import java.io.ByteArrayOutputStream;\n import java.io.FilterInputStream;\n import java.io.IOException;\n import java.io.InputStream;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -426,6 +427,7 @@ public void snapshot(SnapshotIndexCommit snapshotIndexCommit) {\n long indexTotalFilesSize = 0;\n ArrayList<FileInfo> filesToSnapshot = newArrayList();\n final Store.MetadataSnapshot metadata;\n+ // TODO apparently we don't use the MetadataSnapshot#.recoveryDiff(...) here but we should\n try {\n metadata = store.getMetadata(snapshotIndexCommit);\n } catch (IOException e) {\n@@ -445,7 +447,15 @@ public void snapshot(SnapshotIndexCommit snapshotIndexCommit) {\n // }\n \n BlobStoreIndexShardSnapshot.FileInfo fileInfo = snapshots.findPhysicalIndexFile(fileName);\n-\n+ try {\n+ // in 1.3.3 we added additional hashes for .si / segments_N files\n+ // to ensure we don't double the space in the repo since old snapshots\n+ // don't have this hash we try to read that hash from the blob store\n+ // in a bwc compatible way.\n+ maybeRecalculateMetadataHash(blobContainer, fileInfo, metadata);\n+ } catch (Throwable e) {\n+ logger.warn(\"{} Can't calculate hash from blob for file [{}] [{}]\", e, shardId, fileInfo.physicalName(), fileInfo.metadata());\n+ }\n if (fileInfo == null || !fileInfo.isSame(md) || !snapshotFileExistsInBlobs(fileInfo, blobs)) {\n // commit point file does not exists in any commit point, or has different length, or does not fully exists in the listed blobs\n snapshotRequired = true;\n@@ -635,6 +645,77 @@ private void checkAborted() {\n }\n }\n \n+ /**\n+ * This is a BWC layer to ensure we update the snapshots metdata with the corresponding hashes before we compare them.\n+ * The new logic for StoreFileMetaData reads the entire <tt>.si</tt> and <tt>segments.n</tt> files to strengthen the\n+ * comparison of the files on a per-segment / per-commit level.\n+ */\n+ private static final void maybeRecalculateMetadataHash(final ImmutableBlobContainer blobContainer, final FileInfo fileInfo, Store.MetadataSnapshot snapshot) throws Throwable {\n+ final StoreFileMetaData metadata;\n+ if (fileInfo != null && (metadata = snapshot.get(fileInfo.physicalName())) != null) {\n+ if (metadata.hash().length > 0 && fileInfo.metadata().hash().length == 0) {\n+ // we have a hash - check if our repo has a hash too otherwise we have\n+ // to calculate it.\n+ final ByteArrayOutputStream out = new ByteArrayOutputStream();\n+ final CountDownLatch latch = new 
CountDownLatch(1);\n+ final CopyOnWriteArrayList<Throwable> failures = new CopyOnWriteArrayList<>();\n+ // we might have multiple parts even though the file is small... make sure we read all of it.\n+ // TODO this API should really support a stream!\n+ blobContainer.readBlob(fileInfo.partName(0), new BlobContainer.ReadBlobListener() {\n+ final AtomicInteger partIndex = new AtomicInteger();\n+ @Override\n+ public synchronized void onPartial(byte[] data, int offset, int size) throws IOException {\n+ out.write(data, offset, size);\n+ }\n+\n+ @Override\n+ public synchronized void onCompleted() {\n+ boolean countDown = true;\n+ try {\n+ final int part = partIndex.incrementAndGet();\n+ if (part < fileInfo.numberOfParts()) {\n+ final String partName = fileInfo.partName(part);\n+ // continue with the new part\n+ blobContainer.readBlob(partName, this);\n+ countDown = false;\n+ return;\n+ }\n+ } finally {\n+ if (countDown) {\n+ latch.countDown();\n+ }\n+ }\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable t) {\n+ try {\n+ failures.add(t);\n+ } finally {\n+ latch.countDown();\n+ }\n+ }\n+ });\n+\n+ try {\n+ latch.await();\n+ } catch (InterruptedException e) {\n+ Thread.interrupted();\n+ }\n+\n+ if (!failures.isEmpty()) {\n+ ExceptionsHelper.rethrowAndSuppress(failures);\n+ }\n+\n+ final byte[] bytes = out.toByteArray();\n+ assert bytes != null;\n+ assert bytes.length == fileInfo.length() : bytes.length + \" != \" + fileInfo.length();\n+ final BytesRef spare = new BytesRef(bytes);\n+ Store.MetadataSnapshot.hashFile(fileInfo.metadata().hash(), spare);\n+ }\n+ }\n+ }\n+\n /**\n * Context for restore operations\n */\n@@ -672,9 +753,9 @@ public void restore() {\n long totalSize = 0;\n int numberOfReusedFiles = 0;\n long reusedTotalSize = 0;\n- Map<String, StoreFileMetaData> metadata = Collections.emptyMap();\n+ Store.MetadataSnapshot recoveryTargetMetadata = Store.MetadataSnapshot.EMPTY;\n try {\n- metadata = store.getMetadata().asMap();\n+ recoveryTargetMetadata = store.getMetadata();\n } catch (CorruptIndexException e) {\n logger.warn(\"{} Can't read metadata from store\", e, shardId);\n throw new IndexShardRestoreFailedException(shardId, \"Can't restore corrupted shard\", e);\n@@ -683,33 +764,51 @@ public void restore() {\n logger.warn(\"{} Can't read metadata from store\", e, shardId);\n }\n \n- List<FileInfo> filesToRecover = Lists.newArrayList();\n- for (FileInfo fileInfo : snapshot.indexFiles()) {\n- String fileName = fileInfo.physicalName();\n- final StoreFileMetaData md = metadata.get(fileName);\n+ final List<FileInfo> filesToRecover = Lists.newArrayList();\n+ final Map<String, StoreFileMetaData> snapshotMetaData = new HashMap<>();\n+ final Map<String, FileInfo> fileInfos = new HashMap<>();\n+ for (final FileInfo fileInfo : snapshot.indexFiles()) {\n+ try {\n+ // in 1.3.3 we added additional hashes for .si / segments_N files\n+ // to ensure we don't double the space in the repo since old snapshots\n+ // don't have this hash we try to read that hash from the blob store\n+ // in a bwc compatible way.\n+ maybeRecalculateMetadataHash(blobContainer, fileInfo, recoveryTargetMetadata);\n+ } catch (Throwable e) {\n+ // if the index is broken we might not be able to read it\n+ logger.warn(\"{} Can't calculate hash from blog for file [{}] [{}]\", e, shardId, fileInfo.physicalName(), fileInfo.metadata());\n+ }\n+ snapshotMetaData.put(fileInfo.metadata().name(), fileInfo.metadata());\n+ fileInfos.put(fileInfo.metadata().name(), fileInfo);\n+ }\n+ final Store.MetadataSnapshot sourceMetaData = new 
Store.MetadataSnapshot(snapshotMetaData);\n+ final Store.RecoveryDiff diff = sourceMetaData.recoveryDiff(recoveryTargetMetadata);\n+ for (StoreFileMetaData md : diff.identical) {\n+ FileInfo fileInfo = fileInfos.get(md.name());\n numberOfFiles++;\n- if (md != null && fileInfo.isSame(md)) {\n- totalSize += md.length();\n- numberOfReusedFiles++;\n- reusedTotalSize += md.length();\n- recoveryState.getIndex().addReusedFileDetail(fileInfo.name(), fileInfo.length());\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"not_recovering [{}], exists in local store and is same\", fileInfo.physicalName());\n- }\n- } else {\n- totalSize += fileInfo.length();\n- filesToRecover.add(fileInfo);\n- recoveryState.getIndex().addFileDetail(fileInfo.name(), fileInfo.length());\n- if (logger.isTraceEnabled()) {\n- if (md == null) {\n- logger.trace(\"recovering [{}], does not exists in local store\", fileInfo.physicalName());\n- } else {\n- logger.trace(\"recovering [{}], exists in local store but is different\", fileInfo.physicalName());\n- }\n- }\n+ totalSize += md.length();\n+ numberOfReusedFiles++;\n+ reusedTotalSize += md.length();\n+ recoveryState.getIndex().addReusedFileDetail(fileInfo.name(), fileInfo.length());\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"[{}] [{}] not_recovering [{}] from [{}], exists in local store and is same\", shardId, snapshotId, fileInfo.physicalName(), fileInfo.name());\n }\n }\n \n+ for (StoreFileMetaData md : Iterables.concat(diff.different, diff.missing)) {\n+ FileInfo fileInfo = fileInfos.get(md.name());\n+ numberOfFiles++;\n+ totalSize += fileInfo.length();\n+ filesToRecover.add(fileInfo);\n+ recoveryState.getIndex().addFileDetail(fileInfo.name(), fileInfo.length());\n+ if (logger.isTraceEnabled()) {\n+ if (md == null) {\n+ logger.trace(\"[{}] [{}] recovering [{}] from [{}], does not exists in local store\", shardId, snapshotId, fileInfo.physicalName(), fileInfo.name());\n+ } else {\n+ logger.trace(\"[{}] [{}] recovering [{}] from [{}], exists in local store but is different\", shardId, snapshotId, fileInfo.physicalName(), fileInfo.name());\n+ }\n+ }\n+ }\n final RecoveryState.Index index = recoveryState.getIndex();\n index.totalFileCount(numberOfFiles);\n index.totalByteCount(totalSize);", "filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java", "status": "modified" }, { "diff": "@@ -20,10 +20,12 @@\n package org.elasticsearch.index.snapshots.blobstore;\n \n import com.google.common.collect.ImmutableList;\n+import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.Version;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.ParseField;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -196,6 +198,7 @@ static final class Fields {\n static final XContentBuilderString CHECKSUM = new XContentBuilderString(\"checksum\");\n static final XContentBuilderString PART_SIZE = new XContentBuilderString(\"part_size\");\n static final XContentBuilderString WRITTEN_BY = new XContentBuilderString(\"written_by\");\n+ static final XContentBuilderString META_HASH = new XContentBuilderString(\"meta_hash\");\n }\n \n /**\n@@ -221,6 +224,10 @@ public static void toXContent(FileInfo file, XContentBuilder builder, ToXContent\n if (file.metadata.writtenBy() != null) {\n 
builder.field(Fields.WRITTEN_BY, file.metadata.writtenBy());\n }\n+\n+ if (file.metadata.hash() != null && file.metadata().hash().length > 0) {\n+ builder.field(Fields.META_HASH, new BytesArray(file.metadata.hash()));\n+ }\n builder.endObject();\n }\n \n@@ -239,6 +246,7 @@ public static FileInfo fromXContent(XContentParser parser) throws IOException {\n String checksum = null;\n ByteSizeValue partSize = null;\n Version writtenBy = null;\n+ BytesRef metaHash = new BytesRef();\n if (token == XContentParser.Token.START_OBJECT) {\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n@@ -257,6 +265,10 @@ public static FileInfo fromXContent(XContentParser parser) throws IOException {\n partSize = new ByteSizeValue(parser.longValue());\n } else if (\"written_by\".equals(currentFieldName)) {\n writtenBy = Lucene.parseVersionLenient(parser.text(), null);\n+ } else if (\"meta_hash\".equals(currentFieldName)) {\n+ metaHash.bytes = parser.binaryValue();\n+ metaHash.offset = 0;\n+ metaHash.length = metaHash.bytes.length;\n } else {\n throw new ElasticsearchParseException(\"unknown parameter [\" + currentFieldName + \"]\");\n }\n@@ -269,7 +281,7 @@ public static FileInfo fromXContent(XContentParser parser) throws IOException {\n }\n }\n // TODO: Verify???\n- return new FileInfo(name, new StoreFileMetaData(physicalName, length, checksum, writtenBy), partSize);\n+ return new FileInfo(name, new StoreFileMetaData(physicalName, length, checksum, writtenBy, metaHash), partSize);\n }\n \n }", "filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java", "status": "modified" }, { "diff": "@@ -21,12 +21,12 @@\n \n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n+import com.google.common.collect.Iterables;\n import org.apache.lucene.codecs.CodecUtil;\n-import org.apache.lucene.index.CorruptIndexException;\n-import org.apache.lucene.index.IndexCommit;\n-import org.apache.lucene.index.SegmentCommitInfo;\n-import org.apache.lucene.index.SegmentInfos;\n+import org.apache.lucene.codecs.lucene46.Lucene46SegmentInfoFormat;\n+import org.apache.lucene.index.*;\n import org.apache.lucene.store.*;\n+import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.IOUtils;\n import org.apache.lucene.util.Version;\n import org.elasticsearch.ExceptionsHelper;\n@@ -440,7 +440,17 @@ public String toString() {\n * @see StoreFileMetaData\n */\n public final static class MetadataSnapshot implements Iterable<StoreFileMetaData> {\n- private final ImmutableMap<String, StoreFileMetaData> metadata;\n+ private final Map<String, StoreFileMetaData> metadata;\n+\n+ public static final MetadataSnapshot EMPTY = new MetadataSnapshot();\n+\n+ public MetadataSnapshot(Map<String, StoreFileMetaData> metadata) {\n+ this.metadata = metadata;\n+ }\n+\n+ MetadataSnapshot() {\n+ this.metadata = Collections.emptyMap();\n+ }\n \n MetadataSnapshot(IndexCommit commit, Directory directory, ESLogger logger) throws IOException {\n metadata = buildMetadata(commit, directory, logger);\n@@ -467,7 +477,7 @@ ImmutableMap<String, StoreFileMetaData> buildMetadata(IndexCommit commit, Direct\n for (String file : info.files()) {\n String legacyChecksum = checksumMap.get(file);\n if (version.onOrAfter(Version.LUCENE_4_8) && legacyChecksum == null) {\n- checksumFromLuceneFile(directory, file, builder, logger, version);\n+ checksumFromLuceneFile(directory, file, builder, logger, version, 
Lucene46SegmentInfoFormat.SI_EXTENSION.equals(IndexFileNames.getExtension(file)));\n } else {\n builder.put(file, new StoreFileMetaData(file, directory.fileLength(file), legacyChecksum, null));\n }\n@@ -476,7 +486,7 @@ ImmutableMap<String, StoreFileMetaData> buildMetadata(IndexCommit commit, Direct\n final String segmentsFile = segmentCommitInfos.getSegmentsFileName();\n String legacyChecksum = checksumMap.get(segmentsFile);\n if (maxVersion.onOrAfter(Version.LUCENE_4_8) && legacyChecksum == null) {\n- checksumFromLuceneFile(directory, segmentsFile, builder, logger, maxVersion);\n+ checksumFromLuceneFile(directory, segmentsFile, builder, logger, maxVersion, true);\n } else {\n builder.put(segmentsFile, new StoreFileMetaData(segmentsFile, directory.fileLength(segmentsFile), legacyChecksum, null));\n }\n@@ -526,22 +536,49 @@ static Map<String, String> readLegacyChecksums(Directory directory) throws IOExc\n }\n }\n \n- private static void checksumFromLuceneFile(Directory directory, String file, ImmutableMap.Builder<String, StoreFileMetaData> builder, ESLogger logger, Version version) throws IOException {\n+ private static void checksumFromLuceneFile(Directory directory, String file, ImmutableMap.Builder<String, StoreFileMetaData> builder, ESLogger logger, Version version, boolean readFileAsHash) throws IOException {\n+ final String checksum;\n+ final BytesRef fileHash = new BytesRef();\n try (IndexInput in = directory.openInput(file, IOContext.READONCE)) {\n try {\n if (in.length() < CodecUtil.footerLength()) {\n // truncated files trigger IAE if we seek negative... these files are really corrupted though\n throw new CorruptIndexException(\"Can't retrieve checksum from file: \" + file + \" file length must be >= \" + CodecUtil.footerLength() + \" but was: \" + in.length());\n }\n- String checksum = digestToString(CodecUtil.retrieveChecksum(in));\n- builder.put(file, new StoreFileMetaData(file, directory.fileLength(file), checksum, version));\n+ if (readFileAsHash) {\n+ hashFile(fileHash, in);\n+ }\n+ checksum = digestToString(CodecUtil.retrieveChecksum(in));\n+\n } catch (Throwable ex) {\n logger.debug(\"Can retrieve checksum from file [{}]\", ex, file);\n throw ex;\n }\n+ builder.put(file, new StoreFileMetaData(file, directory.fileLength(file), checksum, version, fileHash));\n }\n }\n \n+ /**\n+ * Computes a strong hash value for small files. Note that this method should only be used for files < 1MB\n+ */\n+ public static void hashFile(BytesRef fileHash, IndexInput in) throws IOException {\n+ final int len = (int)Math.min(1024 * 1024, in.length()); // for safety we limit this to 1MB\n+ fileHash.offset = 0;\n+ fileHash.grow(len);\n+ fileHash.length = len;\n+ in.readBytes(fileHash.bytes, 0, len);\n+ }\n+\n+ /**\n+ * Computes a strong hash value for small files. 
Note that this method should only be used for files < 1MB\n+ */\n+ public static void hashFile(BytesRef fileHash, BytesRef source) throws IOException {\n+ final int len = Math.min(1024 * 1024, source.length); // for safety we limit this to 1MB\n+ fileHash.offset = 0;\n+ fileHash.grow(len);\n+ fileHash.length = len;\n+ System.arraycopy(source.bytes, source.offset, fileHash.bytes, 0, len);\n+ }\n \n @Override\n public Iterator<StoreFileMetaData> iterator() {\n@@ -555,6 +592,134 @@ public StoreFileMetaData get(String name) {\n public Map<String, StoreFileMetaData> asMap() {\n return metadata;\n }\n+\n+ private static final String DEL_FILE_EXTENSION = \"del\"; // TODO think about how we can detect if this changes?\n+ private static final String FIELD_INFOS_FILE_EXTENSION = \"fnm\";\n+\n+ /**\n+ * Returns a diff between the two snapshots that can be used for recovery. The given snapshot is treated as the\n+ * recovery target and this snapshot as the source. The returned diff will hold a list of files that are:\n+ * <ul>\n+ * <li>identical: they exist in both snapshots and they can be considered the same ie. they don't need to be recovered</li>\n+ * <li>different: they exist in both snapshots but their they are not identical</li>\n+ * <li>missing: files that exist in the source but not in the target</li>\n+ * </ul>\n+ * This method groups file into per-segment files and per-commit files. A file is treated as\n+ * identical if and on if all files in it's group are identical. On a per-segment level files for a segment are treated\n+ * as identical iff:\n+ * <ul>\n+ * <li>all files in this segment have the same checksum</li>\n+ * <li>all files in this segment have the same length</li>\n+ * <li>the segments <tt>.si</tt> files hashes are byte-identical Note: This is a using a perfect hash function, The metadata transfers the <tt>.si</tt> file content as it's hash</li>\n+ * </ul>\n+ *\n+ * The <tt>.si</tt> file contains a lot of diagnostics including a timestamp etc. in the future there might be\n+ * unique segment identifiers in there hardening this method further.\n+ *\n+ * The per-commit files handles very similar. A commit is composed of the <tt>segments_N</tt> files as well as generational files like\n+ * deletes (<tt>_x_y.del</tt>) or field-info (<tt>_x_y.fnm</tt>) files. On a per-commit level files for a commit are treated\n+ * as identical iff:\n+ * <ul>\n+ * <li>all files belonging to this commit have the same checksum</li>\n+ * <li>all files belonging to this commit have the same length</li>\n+ * <li>the segments file <tt>segments_N</tt> files hashes are byte-identical Note: This is a using a perfect hash function, The metadata transfers the <tt>segments_N</tt> file content as it's hash</li>\n+ * </ul>\n+ *\n+ * NOTE: this diff will not contain the <tt>segments.gen</tt> file. 
This file is omitted on recovery.\n+ */\n+ public RecoveryDiff recoveryDiff(MetadataSnapshot recoveryTargetSnapshot) {\n+ final ImmutableList.Builder<StoreFileMetaData> identical = ImmutableList.builder();\n+ final ImmutableList.Builder<StoreFileMetaData> different = ImmutableList.builder();\n+ final ImmutableList.Builder<StoreFileMetaData> missing = ImmutableList.builder();\n+ final Map<String, List<StoreFileMetaData>> perSegment = new HashMap<>();\n+ final List<StoreFileMetaData> perCommitStoreFiles = new ArrayList<>();\n+\n+ for (StoreFileMetaData meta : this) {\n+ if (IndexFileNames.SEGMENTS_GEN.equals(meta.name())) {\n+ continue; // we don't need that file at all\n+ }\n+ final String segmentId = IndexFileNames.parseSegmentName(meta.name());\n+ final String extension = IndexFileNames.getExtension(meta.name());\n+ assert FIELD_INFOS_FILE_EXTENSION.equals(extension) == false || IndexFileNames.stripExtension(IndexFileNames.stripSegmentName(meta.name())).isEmpty() : \"FieldInfos are generational but updateable DV are not supported in elasticsearch\";\n+ if (IndexFileNames.SEGMENTS.equals(segmentId) || DEL_FILE_EXTENSION.equals(extension)) {\n+ // only treat del files as per-commit files fnm files are generational but only for upgradable DV\n+ perCommitStoreFiles.add(meta);\n+ } else {\n+ List<StoreFileMetaData> perSegStoreFiles = perSegment.get(segmentId);\n+ if (perSegStoreFiles == null) {\n+ perSegStoreFiles = new ArrayList<>();\n+ perSegment.put(segmentId, perSegStoreFiles);\n+ }\n+ perSegStoreFiles.add(meta);\n+ }\n+ }\n+ final ArrayList<StoreFileMetaData> identicalFiles = new ArrayList<>();\n+ for (List<StoreFileMetaData> segmentFiles : Iterables.concat(perSegment.values(), Collections.singleton(perCommitStoreFiles))) {\n+ identicalFiles.clear();\n+ boolean consistent = true;\n+ for (StoreFileMetaData meta : segmentFiles) {\n+ StoreFileMetaData storeFileMetaData = recoveryTargetSnapshot.get(meta.name());\n+ if (storeFileMetaData == null) {\n+ consistent = false;\n+ missing.add(meta);\n+ } else if (storeFileMetaData.isSame(meta) == false) {\n+ consistent = false;\n+ different.add(meta);\n+ } else {\n+ identicalFiles.add(meta);\n+ }\n+ }\n+ if (consistent) {\n+ identical.addAll(identicalFiles);\n+ } else {\n+ // make sure all files are added - this can happen if only the deletes are different\n+ different.addAll(identicalFiles);\n+ }\n+ }\n+ RecoveryDiff recoveryDiff = new RecoveryDiff(identical.build(), different.build(), missing.build());\n+ assert recoveryDiff.size() == this.metadata.size() - (metadata.containsKey(IndexFileNames.SEGMENTS_GEN) ? 1: 0)\n+ : \"some files are missing recoveryDiff size: [\" + recoveryDiff.size() + \"] metadata size: [\" + this.metadata.size() + \"] contains segments.gen: [\" + metadata.containsKey(IndexFileNames.SEGMENTS_GEN) + \"]\" ;\n+ return recoveryDiff;\n+ }\n+\n+ /**\n+ * Returns the number of files in this snapshot\n+ */\n+ public int size() {\n+ return metadata.size();\n+ }\n+ }\n+\n+ /**\n+ * A class representing the diff between a recovery source and recovery target\n+ * @see MetadataSnapshot#recoveryDiff(org.elasticsearch.index.store.Store.MetadataSnapshot)\n+ */\n+ public static final class RecoveryDiff {\n+ /**\n+ * Files that exist in both snapshots and they can be considered the same ie. 
they don't need to be recovered\n+ */\n+ public final List<StoreFileMetaData> identical;\n+ /**\n+ * Files that exist in both snapshots but their they are not identical\n+ */\n+ public final List<StoreFileMetaData> different;\n+ /**\n+ * Files that exist in the source but not in the target\n+ */\n+ public final List<StoreFileMetaData> missing;\n+\n+ RecoveryDiff(List<StoreFileMetaData> identical, List<StoreFileMetaData> different, List<StoreFileMetaData> missing) {\n+ this.identical = identical;\n+ this.different = different;\n+ this.missing = missing;\n+ }\n+\n+ /**\n+ * Returns the sum of the files in this diff.\n+ */\n+ public int size() {\n+ return identical.size() + different.size() + missing.size();\n+ }\n }\n \n public final static class LegacyChecksums {", "filename": "src/main/java/org/elasticsearch/index/store/Store.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.store;\n \n+import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.Version;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -42,22 +43,35 @@ public class StoreFileMetaData implements Streamable {\n \n private Version writtenBy;\n \n+ private BytesRef hash;\n+\n private StoreFileMetaData() {\n }\n \n public StoreFileMetaData(String name, long length) {\n- this(name, length, null, null);\n+ this(name, length, null);\n+ }\n \n+ public StoreFileMetaData(String name, long length, String checksum) {\n+ this(name, length, checksum, null, null);\n }\n \n public StoreFileMetaData(String name, long length, String checksum, Version writtenBy) {\n+ this(name, length, checksum, writtenBy, null);\n+ }\n+\n+ public StoreFileMetaData(String name, long length, String checksum, Version writtenBy, BytesRef hash) {\n this.name = name;\n this.length = length;\n this.checksum = checksum;\n this.writtenBy = writtenBy;\n+ this.hash = hash == null ? new BytesRef() : hash;\n }\n \n \n+ /**\n+ * Returns the name of this file\n+ */\n public String name() {\n return name;\n }\n@@ -69,6 +83,12 @@ public long length() {\n return length;\n }\n \n+ /**\n+ * Returns a string representation of the files checksum. Since Lucene 4.8 this is a CRC32 checksum written\n+ * by lucene. Previously we use Adler32 on top of Lucene as the checksum algorithm, if {@link #hasLegacyChecksum()} returns\n+ * <code>true</code> this is a Adler32 checksum.\n+ * @return\n+ */\n @Nullable\n public String checksum() {\n return this.checksum;\n@@ -81,7 +101,7 @@ public boolean isSame(StoreFileMetaData other) {\n if (checksum == null || other.checksum == null) {\n return false;\n }\n- return length == other.length && checksum.equals(other.checksum);\n+ return length == other.length && checksum.equals(other.checksum) && hash.equals(other.hash);\n }\n \n public static StoreFileMetaData readStoreFileMetaData(StreamInput in) throws IOException {\n@@ -104,6 +124,11 @@ public void readFrom(StreamInput in) throws IOException {\n String versionString = in.readOptionalString();\n writtenBy = Lucene.parseVersionLenient(versionString, null);\n }\n+ if (in.getVersion().onOrAfter(org.elasticsearch.Version.V_1_3_3)) {\n+ hash = in.readBytesRef();\n+ } else {\n+ hash = new BytesRef();\n+ }\n }\n \n @Override\n@@ -114,6 +139,9 @@ public void writeTo(StreamOutput out) throws IOException {\n if (out.getVersion().onOrAfter(org.elasticsearch.Version.V_1_3_0)) {\n out.writeOptionalString(writtenBy == null ? 
null : writtenBy.name());\n }\n+ if (out.getVersion().onOrAfter(org.elasticsearch.Version.V_1_3_3)) {\n+ out.writeBytesRef(hash);\n+ }\n }\n \n /**\n@@ -130,4 +158,12 @@ public Version writtenBy() {\n public boolean hasLegacyChecksum() {\n return checksum != null && ((writtenBy != null && writtenBy.onOrAfter(Version.LUCENE_4_8)) == false);\n }\n+\n+ /**\n+ * Returns a variable length hash of the file represented by this metadata object. This can be the file\n+ * itself if the file is small enough. If the length of the hash is <tt>0</tt> no hash value is available\n+ */\n+ public BytesRef hash() {\n+ return hash;\n+ }\n }", "filename": "src/main/java/org/elasticsearch/index/store/StoreFileMetaData.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.indices.recovery;\n \n+import com.google.common.collect.Iterables;\n import com.google.common.collect.Lists;\n import com.google.common.collect.Sets;\n import org.apache.lucene.index.CorruptIndexException;\n@@ -141,35 +142,32 @@ public void phase1(final SnapshotIndexCommit snapshot) throws ElasticsearchExcep\n store.incRef();\n try {\n StopWatch stopWatch = new StopWatch().start();\n- final Store.MetadataSnapshot metadata;\n- metadata = store.getMetadata(snapshot);\n+ final Store.MetadataSnapshot recoverySourceMetadata = store.getMetadata(snapshot);\n for (String name : snapshot.getFiles()) {\n- final StoreFileMetaData md = metadata.get(name);\n+ final StoreFileMetaData md = recoverySourceMetadata.get(name);\n if (md == null) {\n- logger.info(\"Snapshot differs from actual index for file: {} meta: {}\", name, metadata.asMap());\n- throw new CorruptIndexException(\"Snapshot differs from actual index - maybe index was removed metadata has \" + metadata.asMap().size() + \" files\");\n+ logger.info(\"Snapshot differs from actual index for file: {} meta: {}\", name, recoverySourceMetadata.asMap());\n+ throw new CorruptIndexException(\"Snapshot differs from actual index - maybe index was removed metadata has \" + recoverySourceMetadata.asMap().size() + \" files\");\n }\n- boolean useExisting = false;\n- if (request.existingFiles().containsKey(name)) {\n- if (md.isSame(request.existingFiles().get(name))) {\n- response.phase1ExistingFileNames.add(name);\n- response.phase1ExistingFileSizes.add(md.length());\n- existingTotalSize += md.length();\n- useExisting = true;\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"[{}][{}] recovery [phase1] to {}: not recovering [{}], exists in local store and has checksum [{}], size [{}]\", request.shardId().index().name(), request.shardId().id(), request.targetNode(), name, md.checksum(), md.length());\n- }\n- }\n+ }\n+ final Store.RecoveryDiff diff = recoverySourceMetadata.recoveryDiff(new Store.MetadataSnapshot(request.existingFiles()));\n+ for (StoreFileMetaData md : diff.identical) {\n+ response.phase1ExistingFileNames.add(md.name());\n+ response.phase1ExistingFileSizes.add(md.length());\n+ existingTotalSize += md.length();\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"[{}][{}] recovery [phase1] to {}: not recovering [{}], exists in local store and has checksum [{}], size [{}]\", request.shardId().index().name(), request.shardId().id(), request.targetNode(), md.name(), md.checksum(), md.length());\n }\n- if (!useExisting) {\n- if (request.existingFiles().containsKey(name)) {\n- logger.trace(\"[{}][{}] recovery [phase1] to {}: recovering [{}], exists in local store, but is different: remote [{}], local [{}]\", request.shardId().index().name(), request.shardId().id(), 
request.targetNode(), name, request.existingFiles().get(name), md);\n- } else {\n- logger.trace(\"[{}][{}] recovery [phase1] to {}: recovering [{}], does not exists in remote\", request.shardId().index().name(), request.shardId().id(), request.targetNode(), name);\n- }\n- response.phase1FileNames.add(name);\n- response.phase1FileSizes.add(md.length());\n+ totalSize += md.length();\n+ }\n+ for (StoreFileMetaData md : Iterables.concat(diff.different, diff.missing)) {\n+ if (request.existingFiles().containsKey(md.name())) {\n+ logger.trace(\"[{}][{}] recovery [phase1] to {}: recovering [{}], exists in local store, but is different: remote [{}], local [{}]\", request.shardId().index().name(), request.shardId().id(), request.targetNode(), md.name(), request.existingFiles().get(md.name()), md);\n+ } else {\n+ logger.trace(\"[{}][{}] recovery [phase1] to {}: recovering [{}], does not exists in remote\", request.shardId().index().name(), request.shardId().id(), request.targetNode(), md.name());\n }\n+ response.phase1FileNames.add(md.name());\n+ response.phase1FileSizes.add(md.length());\n totalSize += md.length();\n }\n response.phase1TotalSize = totalSize;\n@@ -199,7 +197,7 @@ public void phase1(final SnapshotIndexCommit snapshot) throws ElasticsearchExcep\n public void run() {\n IndexInput indexInput = null;\n store.incRef();\n- final StoreFileMetaData md = metadata.get(name);\n+ final StoreFileMetaData md = recoverySourceMetadata.get(name);\n try {\n final int BUFFER_SIZE = (int) recoverySettings.fileChunkSize().bytes();\n byte[] buf = new byte[BUFFER_SIZE];", "filename": "src/main/java/org/elasticsearch/indices/recovery/RecoverySource.java", "status": "modified" }, { "diff": "@@ -411,94 +411,4 @@ public Version getMasterVersion() {\n return client().admin().cluster().prepareState().get().getState().nodes().masterNode().getVersion();\n }\n \n- @Test\n- public void testSnapshotAndRestore() throws ExecutionException, InterruptedException, IOException {\n- logger.info(\"--> creating repository\");\n- assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n- .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n- .put(\"location\", newTempDir(LifecycleScope.SUITE).getAbsolutePath())\n- .put(\"compress\", randomBoolean())\n- .put(\"chunk_size\", randomIntBetween(100, 1000))));\n- String[] indices = new String[randomIntBetween(1,5)];\n- for (int i = 0; i < indices.length; i++) {\n- indices[i] = \"index_\" + i;\n- createIndex(indices[i]);\n- }\n- ensureYellow();\n- logger.info(\"--> indexing some data\");\n- IndexRequestBuilder[] builders = new IndexRequestBuilder[randomIntBetween(10, 200)];\n- for (int i = 0; i < builders.length; i++) {\n- builders[i] = client().prepareIndex(RandomPicks.randomFrom(getRandom(), indices), \"foo\", Integer.toString(i)).setSource(\"{ \\\"foo\\\" : \\\"bar\\\" } \");\n- }\n- indexRandom(true, builders);\n- assertThat(client().prepareCount(indices).get().getCount(), equalTo((long)builders.length));\n- long[] counts = new long[indices.length];\n- for (int i = 0; i < indices.length; i++) {\n- counts[i] = client().prepareCount(indices[i]).get().getCount();\n- }\n-\n- logger.info(\"--> snapshot\");\n- CreateSnapshotResponse createSnapshotResponse = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).setIndices(\"index_*\").get();\n- assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n- 
assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n-\n- assertThat(client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n-\n- logger.info(\"--> delete some data\");\n- int howMany = randomIntBetween(1, builders.length);\n- \n- for (int i = 0; i < howMany; i++) {\n- IndexRequestBuilder indexRequestBuilder = RandomPicks.randomFrom(getRandom(), builders);\n- IndexRequest request = indexRequestBuilder.request();\n- client().prepareDelete(request.index(), request.type(), request.id()).get();\n- }\n- refresh();\n- final long numDocs = client().prepareCount(indices).get().getCount();\n- assertThat(client().prepareCount(indices).get().getCount(), lessThan((long)builders.length));\n-\n-\n- client().admin().indices().prepareUpdateSettings(indices).setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"none\")).get();\n- backwardsCluster().allowOnAllNodes(indices);\n- logClusterState();\n- boolean upgraded;\n- do {\n- logClusterState();\n- CountResponse countResponse = client().prepareCount().get();\n- assertHitCount(countResponse, numDocs);\n- upgraded = backwardsCluster().upgradeOneNode();\n- ensureYellow();\n- countResponse = client().prepareCount().get();\n- assertHitCount(countResponse, numDocs);\n- } while (upgraded);\n- client().admin().indices().prepareUpdateSettings(indices).setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"all\")).get();\n-\n- logger.info(\"--> close indices\");\n-\n- client().admin().indices().prepareClose(indices).get();\n-\n- logger.info(\"--> restore all indices from the snapshot\");\n- RestoreSnapshotResponse restoreSnapshotResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).execute().actionGet();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n-\n- ensureYellow();\n- assertThat(client().prepareCount(indices).get().getCount(), equalTo((long)builders.length));\n- for (int i = 0; i < indices.length; i++) {\n- assertThat(counts[i], equalTo(client().prepareCount(indices[i]).get().getCount()));\n- }\n-\n- // Test restore after index deletion\n- logger.info(\"--> delete indices\");\n- String index = RandomPicks.randomFrom(getRandom(), indices);\n- cluster().wipeIndices(index);\n- logger.info(\"--> restore one index after deletion\");\n- restoreSnapshotResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).setIndices(index).execute().actionGet();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n- ensureYellow();\n- assertThat(client().prepareCount(indices).get().getCount(), equalTo((long)builders.length));\n- for (int i = 0; i < indices.length; i++) {\n- assertThat(counts[i], equalTo(client().prepareCount(indices[i]).get().getCount()));\n- }\n- }\n-\n-\n }", "filename": "src/test/java/org/elasticsearch/bwcompat/BasicBackwardsCompatibilityTest.java", "status": "modified" }, { "diff": "@@ -0,0 +1,69 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.snapshots.blobstore;\n+\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.common.xcontent.*;\n+import org.elasticsearch.index.store.StoreFileMetaData;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n+\n+/**\n+ */\n+public class FileInfoTest extends ElasticsearchTestCase {\n+\n+ @Test\n+ public void testToFromXContent() throws IOException {\n+ final int iters = scaledRandomIntBetween(1, 10);\n+ for (int iter = 0; iter < iters; iter++) {\n+ final BytesRef hash = new BytesRef(scaledRandomIntBetween(0, 1024 * 1024));\n+ hash.length = hash.bytes.length;\n+ for (int i = 0; i < hash.length; i++) {\n+ hash.bytes[i] = randomByte();\n+ }\n+ StoreFileMetaData meta = new StoreFileMetaData(\"foobar\", randomInt(), randomAsciiOfLengthBetween(1, 10), TEST_VERSION_CURRENT, hash);\n+ ByteSizeValue size = new ByteSizeValue(Math.max(0,Math.abs(randomLong())));\n+ BlobStoreIndexShardSnapshot.FileInfo info = new BlobStoreIndexShardSnapshot.FileInfo(\"_foobar\", meta, size);\n+ XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON).prettyPrint();\n+ BlobStoreIndexShardSnapshot.FileInfo.toXContent(info, builder, ToXContent.EMPTY_PARAMS);\n+ byte[] xcontent = builder.bytes().toBytes();\n+\n+ final BlobStoreIndexShardSnapshot.FileInfo parsedInfo;\n+ try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(xcontent)) {\n+ parser.nextToken();\n+ parsedInfo = BlobStoreIndexShardSnapshot.FileInfo.fromXContent(parser);\n+ }\n+ assertThat(info.name(), equalTo(parsedInfo.name()));\n+ assertThat(info.physicalName(), equalTo(parsedInfo.physicalName()));\n+ assertThat(info.length(), equalTo(parsedInfo.length()));\n+ assertThat(info.checksum(), equalTo(parsedInfo.checksum()));\n+ assertThat(info.partBytes(), equalTo(parsedInfo.partBytes()));\n+ assertThat(parsedInfo.metadata().hash().length, equalTo(hash.length));\n+ assertThat(parsedInfo.metadata().hash(), equalTo(hash));\n+ assertThat(parsedInfo.metadata().writtenBy(), equalTo(TEST_VERSION_CURRENT));\n+ assertThat(parsedInfo.isSame(info.metadata()), is(true));\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/snapshots/blobstore/FileInfoTest.java", "status": "added" }, { "diff": "@@ -20,10 +20,7 @@\n \n import org.apache.lucene.analysis.MockAnalyzer;\n import org.apache.lucene.codecs.CodecUtil;\n-import org.apache.lucene.document.Document;\n-import org.apache.lucene.document.Field;\n-import org.apache.lucene.document.SortedDocValuesField;\n-import org.apache.lucene.document.TextField;\n+import org.apache.lucene.document.*;\n import org.apache.lucene.index.*;\n import org.apache.lucene.store.*;\n import org.apache.lucene.util.BytesRef;\n@@ 
-41,9 +38,7 @@\n import java.io.FileNotFoundException;\n import java.io.IOException;\n import java.nio.file.NoSuchFileException;\n-import java.util.Arrays;\n-import java.util.HashMap;\n-import java.util.Map;\n+import java.util.*;\n \n import static org.hamcrest.Matchers.*;\n \n@@ -108,7 +103,7 @@ public void testVerifyingIndexOutputWithBogusInput() throws IOException {\n @Test\n public void testWriteLegacyChecksums() throws IOException {\n final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n- DirectoryService directoryService = new LuceneManagedDirectoryService();\n+ DirectoryService directoryService = new LuceneManagedDirectoryService(random());\n Store store = new Store(shardId, ImmutableSettings.EMPTY, null, null, directoryService, randomDistributor(directoryService));\n // set default codec - all segments need checksums\n IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), TEST_VERSION_CURRENT, new MockAnalyzer(random())).setCodec(actualDefaultCodec()));\n@@ -171,7 +166,7 @@ public void testWriteLegacyChecksums() throws IOException {\n @Test\n public void testNewChecksums() throws IOException {\n final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n- DirectoryService directoryService = new LuceneManagedDirectoryService();\n+ DirectoryService directoryService = new LuceneManagedDirectoryService(random());\n Store store = new Store(shardId, ImmutableSettings.EMPTY, null, null, directoryService, randomDistributor(directoryService));\n // set default codec - all segments need checksums\n IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), TEST_VERSION_CURRENT, new MockAnalyzer(random())).setCodec(actualDefaultCodec()));\n@@ -211,6 +206,9 @@ public void testNewChecksums() throws IOException {\n assertThat(\"File: \" + meta.name() + \" has a different checksum\", meta.checksum(), equalTo(checksum));\n assertThat(meta.hasLegacyChecksum(), equalTo(false));\n assertThat(meta.writtenBy(), equalTo(TEST_VERSION_CURRENT));\n+ if (meta.name().endsWith(\".si\") || meta.name().startsWith(\"segments_\")) {\n+ assertThat(meta.hash().length, greaterThan(0));\n+ }\n }\n }\n assertConsistent(store, metadata);\n@@ -223,7 +221,7 @@ public void testNewChecksums() throws IOException {\n @Test\n public void testMixedChecksums() throws IOException {\n final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n- DirectoryService directoryService = new LuceneManagedDirectoryService();\n+ DirectoryService directoryService = new LuceneManagedDirectoryService(random());\n Store store = new Store(shardId, ImmutableSettings.EMPTY, null, null, directoryService, randomDistributor(directoryService));\n // this time random codec....\n IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), TEST_VERSION_CURRENT, new MockAnalyzer(random())).setCodec(actualDefaultCodec()));\n@@ -309,7 +307,7 @@ public void testMixedChecksums() throws IOException {\n @Test\n public void testRenameFile() throws IOException {\n final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n- DirectoryService directoryService = new LuceneManagedDirectoryService(false);\n+ DirectoryService directoryService = new LuceneManagedDirectoryService(random(), false);\n Store store = new Store(shardId, ImmutableSettings.EMPTY, null, null, directoryService, randomDistributor(directoryService));\n {\n IndexOutput output = store.directory().createOutput(\"foo.bar\", IOContext.DEFAULT);\n@@ -368,20 +366,22 @@ public void 
assertDeleteContent(Store store,DirectoryService service) throws IOE\n }\n }\n \n- private final class LuceneManagedDirectoryService implements DirectoryService {\n+ private static final class LuceneManagedDirectoryService implements DirectoryService {\n private final Directory[] dirs;\n+ private final Random random;\n \n- public LuceneManagedDirectoryService() {\n- this(true);\n+ public LuceneManagedDirectoryService(Random random) {\n+ this(random, true);\n }\n- public LuceneManagedDirectoryService(boolean preventDoubleWrite) {\n- this.dirs = new Directory[1 + random().nextInt(5)];\n+ public LuceneManagedDirectoryService(Random random, boolean preventDoubleWrite) {\n+ this.dirs = new Directory[1 + random.nextInt(5)];\n for (int i = 0; i < dirs.length; i++) {\n- dirs[i] = newDirectory();\n+ dirs[i] = newDirectory(random);\n if (dirs[i] instanceof MockDirectoryWrapper) {\n ((MockDirectoryWrapper)dirs[i]).setPreventDoubleWrite(preventDoubleWrite);\n }\n }\n+ this.random = random;\n }\n @Override\n public Directory[] build() throws IOException {\n@@ -390,7 +390,7 @@ public Directory[] build() throws IOException {\n \n @Override\n public long throttleTimeInNanos() {\n- return random().nextInt(1000);\n+ return random.nextInt(1000);\n }\n \n @Override\n@@ -416,9 +416,159 @@ public static void assertConsistent(Store store, Store.MetadataSnapshot metadata\n }\n }\n }\n-\n private Distributor randomDistributor(DirectoryService service) throws IOException {\n- return random().nextBoolean() ? new LeastUsedDistributor(service) : new RandomWeightedDistributor(service);\n+ return randomDistributor(random(), service);\n+ }\n+\n+ private Distributor randomDistributor(Random random, DirectoryService service) throws IOException {\n+ return random.nextBoolean() ? new LeastUsedDistributor(service) : new RandomWeightedDistributor(service);\n+ }\n+\n+\n+ @Test\n+ public void testRecoveryDiff() throws IOException, InterruptedException {\n+ int numDocs = 2 + random().nextInt(100);\n+ List<Document> docs = new ArrayList<>();\n+ for (int i = 0; i < numDocs; i++) {\n+ Document doc = new Document();\n+ doc.add(new StringField(\"id\", \"\" + i, random().nextBoolean() ? Field.Store.YES : Field.Store.NO));\n+ doc.add(new TextField(\"body\", TestUtil.randomRealisticUnicodeString(random()), random().nextBoolean() ? 
Field.Store.YES : Field.Store.NO));\n+ doc.add(new SortedDocValuesField(\"dv\", new BytesRef(TestUtil.randomRealisticUnicodeString(random()))));\n+ docs.add(doc);\n+ }\n+ long seed = random().nextLong();\n+ Store.MetadataSnapshot first;\n+ {\n+ Random random = new Random(seed);\n+ IndexWriterConfig iwc = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setCodec(actualDefaultCodec());\n+ iwc.setMergePolicy(NoMergePolicy.INSTANCE);\n+ iwc.setUseCompoundFile(random.nextBoolean());\n+ iwc.setMaxThreadStates(1);\n+ final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n+ DirectoryService directoryService = new LuceneManagedDirectoryService(random);\n+ Store store = new Store(shardId, ImmutableSettings.EMPTY, null, null, directoryService, randomDistributor(random, directoryService));\n+ IndexWriter writer = new IndexWriter(store.directory(), iwc);\n+ final boolean lotsOfSegments = rarely(random);\n+ for (Document d : docs) {\n+ writer.addDocument(d);\n+ if (lotsOfSegments && random.nextBoolean()) {\n+ writer.commit();\n+ } else if (rarely(random)) {\n+ writer.commit();\n+ }\n+ }\n+ writer.close();\n+ first = store.getMetadata();\n+ assertDeleteContent(store, directoryService);\n+ store.close();\n+ }\n+ long time = new Date().getTime();\n+ while(time == new Date().getTime()) {\n+ Thread.sleep(10); // bump the time\n+ }\n+ Store.MetadataSnapshot second;\n+ Store store;\n+ {\n+ Random random = new Random(seed);\n+ IndexWriterConfig iwc = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setCodec(actualDefaultCodec());\n+ iwc.setMergePolicy(NoMergePolicy.INSTANCE);\n+ iwc.setUseCompoundFile(random.nextBoolean());\n+ iwc.setMaxThreadStates(1);\n+ final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n+ DirectoryService directoryService = new LuceneManagedDirectoryService(random);\n+ store = new Store(shardId, ImmutableSettings.EMPTY, null, null, directoryService, randomDistributor(random, directoryService));\n+ IndexWriter writer = new IndexWriter(store.directory(), iwc);\n+ final boolean lotsOfSegments = rarely(random);\n+ for (Document d : docs) {\n+ writer.addDocument(d);\n+ if (lotsOfSegments && random.nextBoolean()) {\n+ writer.commit();\n+ } else if (rarely(random)) {\n+ writer.commit();\n+ }\n+ }\n+ writer.close();\n+ second = store.getMetadata();\n+ }\n+ Store.RecoveryDiff diff = first.recoveryDiff(second);\n+ assertThat(first.size(), equalTo(second.size()));\n+ for (StoreFileMetaData md : first) {\n+ assertThat(second.get(md.name()), notNullValue());\n+ // si files are different - containing timestamps etc\n+ assertThat(second.get(md.name()).isSame(md), equalTo(md.name().endsWith(\".si\") == false));\n+ }\n+ assertThat(diff.different.size(), equalTo(first.size()-1));\n+ assertThat(diff.identical.size(), equalTo(1)); // commit point is identical\n+ assertThat(diff.missing, empty());\n+\n+ // check the self diff\n+ Store.RecoveryDiff selfDiff = first.recoveryDiff(first);\n+ assertThat(selfDiff.identical.size(), equalTo(first.size()));\n+ assertThat(selfDiff.different, empty());\n+ assertThat(selfDiff.missing, empty());\n+\n+\n+ // lets add some deletes\n+ Random random = new Random(seed);\n+ IndexWriterConfig iwc = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setCodec(actualDefaultCodec());\n+ iwc.setMergePolicy(NoMergePolicy.INSTANCE);\n+ iwc.setUseCompoundFile(random.nextBoolean());\n+ iwc.setMaxThreadStates(1);\n+ iwc.setOpenMode(IndexWriterConfig.OpenMode.APPEND);\n+ IndexWriter writer = new 
IndexWriter(store.directory(), iwc);\n+ writer.deleteDocuments(new Term(\"id\", Integer.toString(random().nextInt(numDocs))));\n+ writer.close();\n+ Store.MetadataSnapshot metadata = store.getMetadata();\n+ StoreFileMetaData delFile = null;\n+ for (StoreFileMetaData md : metadata) {\n+ if (md.name().endsWith(\".del\")) {\n+ delFile = md;\n+ break;\n+ }\n+ }\n+ Store.RecoveryDiff afterDeleteDiff = metadata.recoveryDiff(second);\n+ if (delFile != null) {\n+ assertThat(afterDeleteDiff.identical.size(), equalTo(metadata.size()-2)); // segments_N + del file\n+ assertThat(afterDeleteDiff.different.size(), equalTo(0));\n+ assertThat(afterDeleteDiff.missing.size(), equalTo(2));\n+ } else {\n+ // an entire segment must be missing (single doc segment got dropped)\n+ assertThat(afterDeleteDiff.identical.size(), greaterThan(0));\n+ assertThat(afterDeleteDiff.different.size(), equalTo(0));\n+ assertThat(afterDeleteDiff.missing.size(), equalTo(1)); // the commit file is different\n+ }\n+\n+ // check the self diff\n+ selfDiff = metadata.recoveryDiff(metadata);\n+ assertThat(selfDiff.identical.size(), equalTo(metadata.size()));\n+ assertThat(selfDiff.different, empty());\n+ assertThat(selfDiff.missing, empty());\n+\n+ // add a new commit\n+ iwc = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random)).setCodec(actualDefaultCodec());\n+ iwc.setMergePolicy(NoMergePolicy.INSTANCE);\n+ iwc.setUseCompoundFile(true); // force CFS - easier to test here since we know it will add 3 files\n+ iwc.setMaxThreadStates(1);\n+ iwc.setOpenMode(IndexWriterConfig.OpenMode.APPEND);\n+ writer = new IndexWriter(store.directory(), iwc);\n+ writer.addDocument(docs.get(0));\n+ writer.close();\n+\n+ Store.MetadataSnapshot newCommitMetaData = store.getMetadata();\n+ Store.RecoveryDiff newCommitDiff = newCommitMetaData.recoveryDiff(metadata);\n+ if (delFile != null) {\n+ assertThat(newCommitDiff.identical.size(), equalTo(newCommitMetaData.size()-5)); // segments_N, del file, cfs, cfe, si for the new segment\n+ assertThat(newCommitDiff.different.size(), equalTo(1)); // the del file must be different\n+ assertThat(newCommitDiff.different.get(0).name(), endsWith(\".del\"));\n+ assertThat(newCommitDiff.missing.size(), equalTo(4)); // segments_N,cfs, cfe, si for the new segment\n+ } else {\n+ assertThat(newCommitDiff.identical.size(), equalTo(newCommitMetaData.size() - 4)); // segments_N, cfs, cfe, si for the new segment\n+ assertThat(newCommitDiff.different.size(), equalTo(0));\n+ assertThat(newCommitDiff.missing.size(), equalTo(4)); // an entire segment must be missing (single doc segment got dropped) plus the commit is different\n+ }\n+\n+ store.deleteContent();\n+ IOUtils.close(store);\n }\n \n ", "filename": "src/test/java/org/elasticsearch/index/store/StoreTest.java", "status": "modified" }, { "diff": "@@ -22,7 +22,6 @@\n import com.carrotsearch.randomizedtesting.LifecycleScope;\n import com.google.common.base.Predicate;\n import com.google.common.collect.ImmutableList;\n-import org.apache.lucene.util.LuceneTestCase;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ListenableActionFuture;\n@@ -36,8 +35,10 @@\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse;\n import org.elasticsearch.action.count.CountResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.client.Client;\n 
import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n@@ -52,6 +53,8 @@\n import org.junit.Test;\n \n import java.io.File;\n+import java.util.List;\n+import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.*;\n@@ -1232,6 +1235,68 @@ public void snapshotRelocatingPrimary() throws Exception {\n logger.info(\"--> done\");\n }\n \n+ public void testSnapshotMoreThanOnce() throws ExecutionException, InterruptedException {\n+ Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE))\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ // only one shard\n+ assertAcked(prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)));\n+ ensureGreen();\n+ logger.info(\"--> indexing\");\n+\n+ final int numdocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numdocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test\", \"doc\", Integer.toString(i)).setSource(\"foo\", \"bar\" + i);\n+ }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+ assertNoFailures(client().admin().indices().prepareOptimize(\"test\").setForce(true).setFlush(true).setWaitForMerge(true).setMaxNumSegments(1).get());\n+\n+ CreateSnapshotResponse createSnapshotResponseFirst = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseFirst.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), greaterThan(1));\n+ }\n+ }\n+\n+ CreateSnapshotResponse createSnapshotResponseSecond = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-1\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseSecond.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseSecond.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseSecond.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-1\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = 
client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-1\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), equalTo(1)); // we flush before the snapshot such that we have to process the segments_N files\n+ }\n+ }\n+\n+ client().prepareDelete(\"test\", \"doc\", \"1\").get();\n+ CreateSnapshotResponse createSnapshotResponseThird = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-2\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseThird.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseThird.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseThird.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-2\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-2\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), equalTo(2)); // we flush before the snapshot such that we have to process the segments_N files plus the .del file\n+ }\n+ }\n+ }\n+\n private boolean waitForIndex(final String index, TimeValue timeout) throws InterruptedException {\n return awaitBusy(new Predicate<Object>() {\n @Override", "filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,246 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.snapshots;\n+\n+import com.carrotsearch.randomizedtesting.LifecycleScope;\n+import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n+import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotIndexShardStatus;\n+import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotStatus;\n+import org.elasticsearch.action.count.CountResponse;\n+import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.test.ElasticsearchBackwardsCompatIntegrationTest;\n+import org.junit.Ignore;\n+import org.junit.Test;\n+\n+import java.io.File;\n+import java.io.IOException;\n+import java.util.Arrays;\n+import java.util.List;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.lessThan;\n+\n+public class SnapshotBackwardsCompatibilityTest extends ElasticsearchBackwardsCompatIntegrationTest {\n+\n+ @Test\n+ public void testSnapshotAndRestore() throws ExecutionException, InterruptedException, IOException {\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE).getAbsolutePath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+ String[] indicesBefore = new String[randomIntBetween(2,5)];\n+ String[] indicesAfter = new String[randomIntBetween(2,5)];\n+ for (int i = 0; i < indicesBefore.length; i++) {\n+ indicesBefore[i] = \"index_before_\" + i;\n+ createIndex(indicesBefore[i]);\n+ }\n+ for (int i = 0; i < indicesAfter.length; i++) {\n+ indicesAfter[i] = \"index_after_\" + i;\n+ createIndex(indicesAfter[i]);\n+ }\n+ String[] indices = new String[indicesBefore.length + indicesAfter.length];\n+ System.arraycopy(indicesBefore, 0, indices, 0, indicesBefore.length);\n+ System.arraycopy(indicesAfter, 0, indices, indicesBefore.length, indicesAfter.length);\n+ ensureYellow();\n+ logger.info(\"--> indexing some data\");\n+ IndexRequestBuilder[] buildersBefore = new IndexRequestBuilder[randomIntBetween(10, 200)];\n+ for (int i = 0; i < buildersBefore.length; i++) {\n+ buildersBefore[i] = client().prepareIndex(RandomPicks.randomFrom(getRandom(), indicesBefore), \"foo\", Integer.toString(i)).setSource(\"{ \\\"foo\\\" : \\\"bar\\\" } \");\n+ }\n+ IndexRequestBuilder[] buildersAfter = new IndexRequestBuilder[randomIntBetween(10, 200)];\n+ for (int i = 0; i < buildersAfter.length; i++) {\n+ buildersAfter[i] = 
client().prepareIndex(RandomPicks.randomFrom(getRandom(), indicesBefore), \"bar\", Integer.toString(i)).setSource(\"{ \\\"foo\\\" : \\\"bar\\\" } \");\n+ }\n+ indexRandom(true, buildersBefore);\n+ indexRandom(true, buildersAfter);\n+ assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n+ long[] counts = new long[indices.length];\n+ for (int i = 0; i < indices.length; i++) {\n+ counts[i] = client().prepareCount(indices[i]).get().getCount();\n+ }\n+\n+ logger.info(\"--> snapshot subset of indices before upgrage\");\n+ CreateSnapshotResponse createSnapshotResponse = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"index_before_*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ assertThat(client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap-1\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+\n+ logger.info(\"--> delete some data from indices that were already snapshotted\");\n+ int howMany = randomIntBetween(1, buildersBefore.length);\n+\n+ for (int i = 0; i < howMany; i++) {\n+ IndexRequestBuilder indexRequestBuilder = RandomPicks.randomFrom(getRandom(), buildersBefore);\n+ IndexRequest request = indexRequestBuilder.request();\n+ client().prepareDelete(request.index(), request.type(), request.id()).get();\n+ }\n+ refresh();\n+ final long numDocs = client().prepareCount(indices).get().getCount();\n+ assertThat(client().prepareCount(indices).get().getCount(), lessThan((long) (buildersBefore.length + buildersAfter.length)));\n+\n+\n+ client().admin().indices().prepareUpdateSettings(indices).setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"none\")).get();\n+ backwardsCluster().allowOnAllNodes(indices);\n+ logClusterState();\n+ boolean upgraded;\n+ do {\n+ logClusterState();\n+ CountResponse countResponse = client().prepareCount().get();\n+ assertHitCount(countResponse, numDocs);\n+ upgraded = backwardsCluster().upgradeOneNode();\n+ ensureYellow();\n+ countResponse = client().prepareCount().get();\n+ assertHitCount(countResponse, numDocs);\n+ } while (upgraded);\n+ client().admin().indices().prepareUpdateSettings(indices).setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"all\")).get();\n+\n+ logger.info(\"--> close indices\");\n+\n+ client().admin().indices().prepareClose(\"index_before_*\").get();\n+\n+ logger.info(\"--> restore all indices from the snapshot\");\n+ RestoreSnapshotResponse restoreSnapshotResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+\n+ ensureYellow();\n+ assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n+ for (int i = 0; i < indices.length; i++) {\n+ assertThat(counts[i], equalTo(client().prepareCount(indices[i]).get().getCount()));\n+ }\n+\n+ logger.info(\"--> snapshot subset of indices after upgrade\");\n+ createSnapshotResponse = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", 
\"test-snap-2\").setWaitForCompletion(true).setIndices(\"index_*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ // Test restore after index deletion\n+ logger.info(\"--> delete indices\");\n+ String index = RandomPicks.randomFrom(getRandom(), indices);\n+ cluster().wipeIndices(index);\n+ logger.info(\"--> restore one index after deletion\");\n+ restoreSnapshotResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap-2\").setWaitForCompletion(true).setIndices(index).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+ ensureYellow();\n+ assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n+ for (int i = 0; i < indices.length; i++) {\n+ assertThat(counts[i], equalTo(client().prepareCount(indices[i]).get().getCount()));\n+ }\n+ }\n+\n+ public void testSnapshotMoreThanOnce() throws ExecutionException, InterruptedException, IOException {\n+ Client client = client();\n+ final File tempDir = newTempDir(LifecycleScope.SUITE).getAbsoluteFile();\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", tempDir)\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ // only one shard\n+ assertAcked(prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ ));\n+ ensureYellow();\n+ logger.info(\"--> indexing\");\n+\n+ final int numDocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numDocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test\", \"doc\", Integer.toString(i)).setSource(\"foo\", \"bar\" + i);\n+ }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+ assertNoFailures(client().admin().indices().prepareOptimize(\"test\").setForce(true).setFlush(true).setWaitForMerge(true).setMaxNumSegments(1).get());\n+\n+ CreateSnapshotResponse createSnapshotResponseFirst = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseFirst.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), greaterThan(1));\n+ }\n+ }\n+ if (frequently()) {\n+ logger.info(\"--> upgrade\");\n+ 
client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"none\")).get();\n+ backwardsCluster().allowOnAllNodes(\"test\");\n+ logClusterState();\n+ boolean upgraded;\n+ do {\n+ logClusterState();\n+ CountResponse countResponse = client().prepareCount().get();\n+ assertHitCount(countResponse, numDocs);\n+ upgraded = backwardsCluster().upgradeOneNode();\n+ ensureYellow();\n+ countResponse = client().prepareCount().get();\n+ assertHitCount(countResponse, numDocs);\n+ } while (upgraded);\n+ client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"all\")).get();\n+ }\n+ if (cluster().numDataNodes() > 1 && randomBoolean()) { // only bump the replicas if we have enough nodes\n+ logger.info(\"--> move from 0 to 1 replica\");\n+ client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 1)).get();\n+ }\n+ logger.debug(\"---> repo exists: \" + new File(tempDir, \"indices/test/0\").exists() + \" files: \" + Arrays.toString(new File(tempDir, \"indices/test/0\").list())); // it's only one shard!\n+ CreateSnapshotResponse createSnapshotResponseSecond = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-1\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseSecond.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseSecond.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseSecond.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-1\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-1\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+\n+ assertThat(status.getStats().getProcessedFiles(), equalTo(1)); // we flush before the snapshot such that we have to process the segments_N files\n+ }\n+ }\n+\n+ client().prepareDelete(\"test\", \"doc\", \"1\").get();\n+ CreateSnapshotResponse createSnapshotResponseThird = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-2\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseThird.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseThird.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseThird.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-2\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-2\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), equalTo(2)); // we flush before the snapshot such that we have to process the segments_N files plus the .del file\n+ }\n+ }\n+ }\n+}", "filename": 
"src/test/java/org/elasticsearch/snapshots/SnapshotBackwardsCompatibilityTest.java", "status": "added" } ] }
{ "body": "This is caused by the TransportClient failing to register a module that is now required to deserialize responses correctly.\nThe fix is to add this line to the constructor:\n\n```\n modules.add(new SignificantTermsHeuristicModule());\n```\n\nThanks to Felipe Hummel for reporting the error and providing a failing test case here: https://groups.google.com/forum/#!topic/elasticsearch/R42Nyyfr73I\n", "comments": [ { "body": "I think since the SignificantTermsHeuristicModule is just for use in the Aggregations we should make TransportAggregationModule implement SpawnModules and implement the spawnModules method as follows:\n\n```\n @Override\n public Iterable<? extends Module> spawnModules() {\n return ImmutableList.of(new SignificantTermsHeuristicModule());\n }\n```\n", "created_at": "2014-09-24T09:01:47Z" } ], "number": 7840, "title": "Aggregations: NPE in SignificanceHeuristicStreams.read while deserializing response" }
{ "body": "Closes #7840\n", "number": 7853, "review_comments": [], "title": "Aggregations: Significant Terms Heuristics now registered correctly" }
{ "commits": [ { "message": "Aggregations: Significant Terms Heuristics now registered correctly\n\nCloses #7840" } ], "files": [ { "diff": "@@ -27,7 +27,6 @@\n import org.elasticsearch.index.search.morelikethis.MoreLikeThisFetchService;\n import org.elasticsearch.search.action.SearchServiceTransportAction;\n import org.elasticsearch.search.aggregations.AggregationModule;\n-import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificantTermsHeuristicModule;\n import org.elasticsearch.search.controller.SearchPhaseController;\n import org.elasticsearch.search.dfs.DfsPhase;\n import org.elasticsearch.search.fetch.FetchPhase;\n@@ -50,7 +49,7 @@ public class SearchModule extends AbstractModule implements SpawnModules {\n \n @Override\n public Iterable<? extends Module> spawnModules() {\n- return ImmutableList.of(new TransportSearchModule(), new HighlightModule(), new SuggestModule(), new FunctionScoreModule(), new AggregationModule(), new SignificantTermsHeuristicModule());\n+ return ImmutableList.of(new TransportSearchModule(), new HighlightModule(), new SuggestModule(), new FunctionScoreModule(), new AggregationModule());\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/search/SearchModule.java", "status": "modified" }, { "diff": "@@ -18,8 +18,11 @@\n */\n package org.elasticsearch.search.aggregations;\n \n+import com.google.common.collect.ImmutableList;\n import com.google.common.collect.Lists;\n import org.elasticsearch.common.inject.AbstractModule;\n+import org.elasticsearch.common.inject.Module;\n+import org.elasticsearch.common.inject.SpawnModules;\n import org.elasticsearch.common.inject.multibindings.Multibinder;\n import org.elasticsearch.search.aggregations.bucket.children.ChildrenParser;\n import org.elasticsearch.search.aggregations.bucket.filter.FilterParser;\n@@ -36,6 +39,7 @@\n import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceParser;\n import org.elasticsearch.search.aggregations.bucket.range.ipv4.IpRangeParser;\n import org.elasticsearch.search.aggregations.bucket.significant.SignificantTermsParser;\n+import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificantTermsHeuristicModule;\n import org.elasticsearch.search.aggregations.bucket.terms.TermsParser;\n import org.elasticsearch.search.aggregations.metrics.avg.AvgParser;\n import org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityParser;\n@@ -56,7 +60,7 @@\n /**\n * The main module for the get (binding all get components together)\n */\n-public class AggregationModule extends AbstractModule {\n+public class AggregationModule extends AbstractModule implements SpawnModules{\n \n private List<Class<? extends Aggregator.Parser>> parsers = Lists.newArrayList();\n \n@@ -113,4 +117,9 @@ protected void configure() {\n bind(AggregationPhase.class).asEagerSingleton();\n }\n \n+ @Override\n+ public Iterable<? 
extends Module> spawnModules() {\n+ return ImmutableList.of(new SignificantTermsHeuristicModule());\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/AggregationModule.java", "status": "modified" }, { "diff": "@@ -18,7 +18,10 @@\n */\n package org.elasticsearch.search.aggregations;\n \n+import com.google.common.collect.ImmutableList;\n import org.elasticsearch.common.inject.AbstractModule;\n+import org.elasticsearch.common.inject.Module;\n+import org.elasticsearch.common.inject.SpawnModules;\n import org.elasticsearch.search.aggregations.bucket.children.InternalChildren;\n import org.elasticsearch.search.aggregations.bucket.filter.InternalFilter;\n import org.elasticsearch.search.aggregations.bucket.filters.InternalFilters;\n@@ -36,6 +39,7 @@\n import org.elasticsearch.search.aggregations.bucket.significant.SignificantLongTerms;\n import org.elasticsearch.search.aggregations.bucket.significant.SignificantStringTerms;\n import org.elasticsearch.search.aggregations.bucket.significant.UnmappedSignificantTerms;\n+import org.elasticsearch.search.aggregations.bucket.significant.heuristics.TransportSignificantTermsHeuristicModule;\n import org.elasticsearch.search.aggregations.bucket.terms.DoubleTerms;\n import org.elasticsearch.search.aggregations.bucket.terms.LongTerms;\n import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;\n@@ -57,7 +61,7 @@\n /**\n * A module that registers all the transport streams for the addAggregation\n */\n-public class TransportAggregationModule extends AbstractModule {\n+public class TransportAggregationModule extends AbstractModule implements SpawnModules {\n \n @Override\n protected void configure() {\n@@ -100,4 +104,9 @@ protected void configure() {\n InternalGeoBounds.registerStream();\n InternalChildren.registerStream();\n }\n+\n+ @Override\n+ public Iterable<? extends Module> spawnModules() {\n+ return ImmutableList.of(new TransportSignificantTermsHeuristicModule());\n+ }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/TransportAggregationModule.java", "status": "modified" }, { "diff": "@@ -30,18 +30,16 @@\n public class SignificantTermsHeuristicModule extends AbstractModule {\n \n private List<Class<? extends SignificanceHeuristicParser>> parsers = Lists.newArrayList();\n- private List<SignificanceHeuristicStreams.Stream> streams = Lists.newArrayList();\n \n public SignificantTermsHeuristicModule() {\n- registerHeuristic(JLHScore.JLHScoreParser.class, JLHScore.STREAM);\n- registerHeuristic(MutualInformation.MutualInformationParser.class, MutualInformation.STREAM);\n- registerHeuristic(GND.GNDParser.class, GND.STREAM);\n- registerHeuristic(ChiSquare.ChiSquareParser.class, ChiSquare.STREAM);\n+ registerParser(JLHScore.JLHScoreParser.class);\n+ registerParser(MutualInformation.MutualInformationParser.class);\n+ registerParser(GND.GNDParser.class);\n+ registerParser(ChiSquare.ChiSquareParser.class);\n }\n \n- public void registerHeuristic(Class<? extends SignificanceHeuristicParser> parser, SignificanceHeuristicStreams.Stream stream) {\n+ public void registerParser(Class<? 
extends SignificanceHeuristicParser> parser) {\n parsers.add(parser);\n- streams.add(stream);\n }\n \n @Override\n@@ -51,8 +49,5 @@ protected void configure() {\n parserMapBinder.addBinding().to(clazz);\n }\n bind(SignificanceHeuristicParserMapper.class);\n- for (SignificanceHeuristicStreams.Stream stream : streams) {\n- SignificanceHeuristicStreams.registerStream(stream, stream.getName());\n- }\n }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/SignificantTermsHeuristicModule.java", "status": "modified" }, { "diff": "@@ -0,0 +1,50 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+\n+package org.elasticsearch.search.aggregations.bucket.significant.heuristics;\n+\n+import com.google.common.collect.Lists;\n+import org.elasticsearch.common.inject.AbstractModule;\n+\n+import java.util.List;\n+\n+\n+public class TransportSignificantTermsHeuristicModule extends AbstractModule {\n+\n+ private List<SignificanceHeuristicStreams.Stream> streams = Lists.newArrayList();\n+\n+ public TransportSignificantTermsHeuristicModule() {\n+ registerStream(JLHScore.STREAM);\n+ registerStream(MutualInformation.STREAM);\n+ registerStream(GND.STREAM);\n+ registerStream(ChiSquare.STREAM);\n+ }\n+\n+ public void registerStream(SignificanceHeuristicStreams.Stream stream) {\n+ streams.add(stream);\n+ }\n+\n+ @Override\n+ protected void configure() {\n+ for (SignificanceHeuristicStreams.Stream stream : streams) {\n+ SignificanceHeuristicStreams.registerStream(stream, stream.getName());\n+ }\n+ }\n+}", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/significant/heuristics/TransportSignificantTermsHeuristicModule.java", "status": "added" }, { "diff": "@@ -159,7 +159,11 @@ public String description() {\n }\n \n public void onModule(SignificantTermsHeuristicModule significanceModule) {\n- significanceModule.registerHeuristic(SimpleHeuristic.SimpleHeuristicParser.class, SimpleHeuristic.STREAM);\n+ significanceModule.registerParser(SimpleHeuristic.SimpleHeuristicParser.class);\n+ }\n+\n+ public void onModule(TransportSignificantTermsHeuristicModule significanceModule) {\n+ significanceModule.registerStream(SimpleHeuristic.STREAM);\n }\n }\n ", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/SignificantTermsSignificanceScoreTests.java", "status": "modified" } ] }
{ "body": "This is caused by the TransportClient failing to register a module that is now required to deserialize responses correctly.\nThe fix is to add this line to the constructor:\n\n```\n modules.add(new SignificantTermsHeuristicModule());\n```\n\nThanks to Felipe Hummel for reporting the error and providing a failing test case here: https://groups.google.com/forum/#!topic/elasticsearch/R42Nyyfr73I\n", "comments": [ { "body": "I think since the SignificantTermsHeuristicModule is just for use in the Aggregations we should make TransportAggregationModule implement SpawnModules and implement the spawnModules method as follows:\n\n```\n @Override\n public Iterable<? extends Module> spawnModules() {\n return ImmutableList.of(new SignificantTermsHeuristicModule());\n }\n```\n", "created_at": "2014-09-24T09:01:47Z" } ], "number": 7840, "title": "Aggregations: NPE in SignificanceHeuristicStreams.read while deserializing response" }
{ "body": "Required for serialising significant_terms agg responses\nCloses #7840\n", "number": 7852, "review_comments": [], "title": "Added missing module registration in TransportClient for Significant Terms" }
{ "commits": [ { "message": "Aggs fix - added missing module registration required for serialising significant_terms agg responses\n\nCloses #7840" } ], "files": [ { "diff": "@@ -80,6 +80,7 @@\n import org.elasticsearch.plugins.PluginsModule;\n import org.elasticsearch.plugins.PluginsService;\n import org.elasticsearch.search.TransportSearchModule;\n+import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificantTermsHeuristicModule;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.threadpool.ThreadPoolModule;\n import org.elasticsearch.transport.TransportModule;\n@@ -183,6 +184,7 @@ public TransportClient(Settings pSettings, boolean loadConfigSettings) throws El\n modules.add(new ActionModule(true));\n modules.add(new ClientTransportModule());\n modules.add(new CircuitBreakerModule(this.settings));\n+ modules.add(new SignificantTermsHeuristicModule());\n \n injector = modules.createInjector();\n ", "filename": "src/main/java/org/elasticsearch/client/transport/TransportClient.java", "status": "modified" } ] }
{ "body": "This query was generated by a benchmarking framework that creates random combinations of clauses and this combo of scripted agg and script_score that references \"_score\" causes a StackOverflowError\n\n```\ncurl -XPUT \"http://localhost:9200/test?pretty=true\" -d'\n{\n \"mappings\": {\n \"car\": {\n \"properties\": {\n \"make\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"model\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"mileage\": {\n \"type\": \"integer\"\n }\n }\n }\n }\n}'\ncurl -XPOST \"http://localhost:9200/test/car/1?pretty=true\" -d'\n{\n \"make\": \"bmw\",\n \"model\": \"m3\",\n \"mileage\": 30000\n}' \ncurl -XPOST \"http://localhost:9200/test/car/_search?pretty\" -d'\n{\n \"query\": {\n \"function_score\": {\n \"query\": {\n \"term\": {\n \"model\": \"m3\"\n }\n },\n \"script_score\": {\n \"script\": \"_score * doc[\\\"mileage\\\"].value \"\n },\n \"boost_mode\": \"replace\"\n }\n },\n \"aggs\": {\n \"makes\": {\n \"terms\": {\n \"script\": \"doc[\\\"make\\\"].value\"\n }\n }\n }\n}' \n```\n", "comments": [ { "body": "So this is caused by the following: \n\nThe aggregation framework registers the agg script as scorer aware. When setScorer is called it eventually delegates down to DocLookup.setScorer(). DocLookup is global though so the agg script overwrites the original scorer (the one that the score script needs to use) and replaces it with the Score script. So now when the score script evaluates _score instead of using the original scorer it tries to use itself causing a recursive loop.\n\nThe solution should be to replace the global DocLookup with a DocLookup for each context (e.g. one for aggs, one for query_score, etc.)\n", "created_at": "2014-09-10T08:03:28Z" }, { "body": "> The solution should be to replace the global DocLookup with a DocLookup for each context (e.g. one for aggs, one for query_score, etc.)\n\nI believe it might be sufficient to just remove the dependence of DocLookup and scorer. The scorer should be set for each script explicitly instead of having the script score set the scorer in DocLookup which is then in turn looked up by the script when it is executed. \nHere is what I mean: https://github.com/brwe/elasticsearch/commit/5e142fe0459d03a013d77554aa4c4a27090125d9\n", "created_at": "2014-09-19T17:47:25Z" }, { "body": "@brwe @markharwood any progress on this one?\n", "created_at": "2014-09-25T12:40:00Z" }, { "body": "@clintongormley See my PR above, needs a reviewer\n", "created_at": "2014-09-25T12:49:54Z" } ], "number": 7487, "title": "StackOverflowError running query script and agg script" }
{ "body": "As pointed out in #7487 DocLookup is a variable that is accessible by all scripts\nfor one doc while the query is executed. But the _score and therfore the scorer\ndepends on the current context, that is, which part of query is currently executed.\nInstead of setting the scorer for DocLookup\nand have Script access the DocLookup for getting the score, the Scorer should just\nbe explicitely set for each script.\nDocLookup should not have any reference to a scorer.\nThis was similarly discussed in #7043.\n\nThis dependency caused a stackoverflow when running script score in combination with an\naggregation on _score. Also the wrong scorer was called when nesting several script scores.\n\ncloses #7487\n", "number": 7819, "review_comments": [], "title": "Script with `_score`: remove dependency of DocLookup and scorer" }
{ "commits": [ { "message": "script with _score: remove dependency of DocLookup and scorer\n\nAs pointed out in #7487 DocLookup is a variable that is accessible by all scripts\nfor one doc while the query is executed. But the _score and therfore the scorer\ndepends on the current context, that is, which part of query is currently executed.\nInstead of setting the scorer for DocLookup\nand have Script access the DocLookup for getting the score, the Scorer should just\nbe explicitely set for each script.\nDocLookup should not have any reference to a scorer.\nThis was similarly discussed in #7043.\n\nThis dependency caused a stackoverflow when running script score in combination with an\naggregation on _score. Also the wrong scorer was called when nesting several script scores.\n\ncloses #7487" } ], "files": [ { "diff": "@@ -95,7 +95,7 @@ void setLookup(SearchLookup lookup) {\n \n @Override\n public void setScorer(Scorer scorer) {\n- lookup.setScorer(scorer);\n+ throw new UnsupportedOperationException();\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/script/AbstractSearchScript.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.script;\n \n+import org.apache.lucene.search.Scorer;\n import org.elasticsearch.search.lookup.DocLookup;\n \n import java.io.IOException;\n@@ -31,15 +32,15 @@\n */\n public final class ScoreAccessor extends Number {\n \n- final DocLookup doc;\n+ Scorer scorer;\n \n- public ScoreAccessor(DocLookup d) {\n- doc = d;\n+ public ScoreAccessor(Scorer scorer) {\n+ this.scorer = scorer;\n }\n \n float score() {\n try {\n- return doc.score();\n+ return scorer.score();\n } catch (IOException e) {\n throw new RuntimeException(\"Could not get score\", e);\n }", "filename": "src/main/java/org/elasticsearch/script/ScoreAccessor.java", "status": "modified" }, { "diff": "@@ -230,9 +230,6 @@ public ScriptService(Settings settings, Environment env, Set<ScriptEngineService\n }\n this.scriptEngines = builder.build();\n \n- // put some default optimized scripts\n- staticCache.put(\"doc.score\", new CompiledScript(\"native\", new DocScoreNativeScriptFactory()));\n-\n // add file watcher for static scripts\n scriptsDirectory = new File(env.configFile(), \"scripts\");\n if (logger.isTraceEnabled()) {\n@@ -574,22 +571,4 @@ public int hashCode() {\n return lang.hashCode() + 31 * script.hashCode();\n }\n }\n-\n- public static class DocScoreNativeScriptFactory implements NativeScriptFactory {\n- @Override\n- public ExecutableScript newScript(@Nullable Map<String, Object> params) {\n- return new DocScoreSearchScript();\n- }\n- }\n-\n- public static class DocScoreSearchScript extends AbstractFloatSearchScript {\n- @Override\n- public float runAsFloat() {\n- try {\n- return doc().score();\n- } catch (IOException e) {\n- return 0;\n- }\n- }\n- }\n }", "filename": "src/main/java/org/elasticsearch/script/ScriptService.java", "status": "modified" }, { "diff": "@@ -43,6 +43,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.script.*;\n import org.elasticsearch.search.lookup.SearchLookup;\n+import org.elasticsearch.search.suggest.term.TermSuggestion;\n \n import java.io.IOException;\n import java.math.BigDecimal;\n@@ -186,6 +187,7 @@ public static final class GroovyScript implements ExecutableScript, SearchScript\n private final SearchLookup lookup;\n private final Map<String, Object> variables;\n private final ESLogger logger;\n+ private Scorer scorer;\n \n public GroovyScript(Script script, ESLogger logger) {\n 
this(script, null, logger);\n@@ -196,17 +198,12 @@ public GroovyScript(Script script, @Nullable SearchLookup lookup, ESLogger logge\n this.lookup = lookup;\n this.logger = logger;\n this.variables = script.getBinding().getVariables();\n- if (lookup != null) {\n- // Add the _score variable, which will access score from lookup.doc()\n- this.variables.put(\"_score\", new ScoreAccessor(lookup.doc()));\n- }\n }\n \n @Override\n public void setScorer(Scorer scorer) {\n- if (lookup != null) {\n- lookup.setScorer(scorer);\n- }\n+ this.scorer = scorer;\n+ this.variables.put(\"_score\", new ScoreAccessor(scorer));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java", "status": "modified" }, { "diff": "@@ -49,8 +49,6 @@ public class DocLookup implements Map {\n \n private AtomicReaderContext reader;\n \n- private Scorer scorer;\n-\n private int docId = -1;\n \n DocLookup(MapperService mapperService, IndexFieldDataService fieldDataService, @Nullable String[] types) {\n@@ -76,22 +74,10 @@ public void setNextReader(AtomicReaderContext context) {\n localCacheFieldData.clear();\n }\n \n- public void setScorer(Scorer scorer) {\n- this.scorer = scorer;\n- }\n-\n public void setNextDocId(int docId) {\n this.docId = docId;\n }\n \n- public float score() throws IOException {\n- return scorer.score();\n- }\n-\n- public float getScore() throws IOException {\n- return scorer.score();\n- }\n-\n @Override\n public Object get(Object key) {\n // assume its a string...", "filename": "src/main/java/org/elasticsearch/search/lookup/DocLookup.java", "status": "modified" }, { "diff": "@@ -76,10 +76,6 @@ public DocLookup doc() {\n return this.docMap;\n }\n \n- public void setScorer(Scorer scorer) {\n- docMap.setScorer(scorer);\n- }\n-\n public void setNextReader(AtomicReaderContext context) {\n docMap.setNextReader(context);\n sourceLookup.setNextReader(context);", "filename": "src/main/java/org/elasticsearch/search/lookup/SearchLookup.java", "status": "modified" }, { "diff": "@@ -1140,7 +1140,7 @@ public void script_Score() {\n .setQuery(functionScoreQuery(matchAllQuery()).add(ScoreFunctionBuilders.scriptFunction(\"doc['\" + SINGLE_VALUED_FIELD_NAME + \"'].value\")))\n .addAggregation(terms(\"terms\")\n .collectMode(randomFrom(SubAggCollectionMode.values()))\n- .script(\"ceil(_doc.score()/3)\")\n+ .script(\"ceil(_score.doubleValue()/3)\")\n ).execute().actionGet();\n \n assertSearchResponse(response);", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DoubleTermsTests.java", "status": "modified" }, { "diff": "@@ -270,7 +270,7 @@ public void testFieldCollapsing() throws Exception {\n topHits(\"hits\").setSize(1)\n )\n .subAggregation(\n- max(\"max_score\").script(\"_doc.score()\")\n+ max(\"max_score\").script(\"_score.doubleValue()\")\n )\n )\n .get();", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/TopHitsTests.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder;\n import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilder;\n import org.elasticsearch.index.query.functionscore.weight.WeightBuilder;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n@@ -40,6 +41,7 @@\n import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery;\n import static 
org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.*;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n@@ -388,4 +390,44 @@ public void checkWeightOnlyCreatesBoostFunction() throws IOException {\n assertSearchResponse(response);\n assertThat(response.getHits().getAt(0).score(), equalTo(2.0f));\n }\n+\n+ @Test\n+ public void testScriptScoresNested() throws IOException {\n+ index(INDEX, TYPE, \"1\", jsonBuilder().startObject().field(\"dummy_field\", 1).endObject());\n+ refresh();\n+ SearchResponse response = client().search(\n+ searchRequest().source(\n+ searchSource().query(\n+ functionScoreQuery(\n+ functionScoreQuery(\n+ functionScoreQuery().add(scriptFunction(\"1\")))\n+ .add(scriptFunction(\"_score.doubleValue()\")))\n+ .add(scriptFunction(\"_score.doubleValue()\")\n+ )\n+ )\n+ )\n+ ).actionGet();\n+ assertSearchResponse(response);\n+ assertThat(response.getHits().getAt(0).score(), equalTo(1.0f));\n+ }\n+\n+ @Test\n+ public void testScriptScoresWithAgg() throws IOException {\n+ index(INDEX, TYPE, \"1\", jsonBuilder().startObject().field(\"dummy_field\", 1).endObject());\n+ refresh();\n+ SearchResponse response = client().search(\n+ searchRequest().source(\n+ searchSource().query(\n+ functionScoreQuery()\n+ .add(scriptFunction(\"_score.doubleValue()\")\n+ )\n+ ).aggregation(terms(\"score_agg\").script(\"_score.doubleValue()\"))\n+ )\n+ ).actionGet();\n+ assertSearchResponse(response);\n+ assertThat(response.getHits().getAt(0).score(), equalTo(1.0f));\n+ assertThat(((Terms) response.getAggregations().asMap().get(\"score_agg\")).getBuckets().get(0).getKeyAsNumber().floatValue(), is(1f));\n+ assertThat(((Terms) response.getAggregations().asMap().get(\"score_agg\")).getBuckets().get(0).getDocCount(), is(1l));\n+ }\n }\n+", "filename": "src/test/java/org/elasticsearch/search/functionscore/FunctionScoreTests.java", "status": "modified" } ] }
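As the updated tests above show, scripts now read the score through the per-script _score accessor, which behaves as a Number (hence _score.doubleValue()). A small usage sketch mirroring the new testScriptScoresWithAgg case; the index name and client wiring are assumptions.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders;
import org.elasticsearch.search.aggregations.AggregationBuilders;

public class ScoreAggExample {
    // Aggregates on the score of a function_score query; with the per-script
    // scorer this no longer recurses into a StackOverflowError.
    public static SearchResponse run(Client client) {
        return client.prepareSearch("test")
                .setQuery(QueryBuilders.functionScoreQuery()
                        .add(ScoreFunctionBuilders.scriptFunction("_score.doubleValue()")))
                .addAggregation(AggregationBuilders.terms("score_agg").script("_score.doubleValue()"))
                .get();
    }
}
```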
{ "body": "curl -s 'http://localhost:9200/_cat/nodes'\n{\"error\":\"NullPointerException[null]\",\"status\":500}\n\nNothing else is logged on the masters or data nodes.\nWe have Logstash 1.4.1 connected to the cluster, but this happens regardless of whether the Logstash nodes are connected or not.\n", "comments": [ { "body": "I can't repro locally. Can you post `/_cluster/state?filter_metadata` and `/_nodes?all` somewhere?\n", "created_at": "2014-05-23T19:26:10Z" }, { "body": "or do you see a stacktrace in the logs somewhere?\n", "created_at": "2014-05-23T19:59:38Z" }, { "body": "Hi @drewr ew, I hope you don't mind, I emailed them to you as they contain data I'd prefer not to post publicly :)\n\n@s1monw no stack traces from this in any logs on any ES node.\n", "created_at": "2014-05-23T22:08:04Z" }, { "body": "@avleen I don't mind at all. However, I haven't seen anything yet. Where did you send it? You can use first.last@elasticsearch.\n\nUpdate: Nevermind, spam. :smiling_imp: \n", "created_at": "2014-05-23T22:32:14Z" }, { "body": "Hi Drew,\n\nJust double checked.. Yup that's where I sent it, at 17:29 ET.\nHas two attachments, a zip file and a text file.\nOn May 23, 2014 6:32 PM, \"Drew Raines\" notifications@github.com wrote:\n\n> @avleen https://github.com/avleen I don't mind at all. However, I\n> haven't seen anything yet. You can use first.last@elasticsearch.\n> \n> ## \n> \n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/6297#issuecomment-44067235\n> .\n", "created_at": "2014-05-23T23:32:24Z" }, { "body": "FWIW I'm also seeing this, on a freshly-updated 1.2.1 cluster (from 1.1.1) with 64-bit Java 1.7.0_55, CentOS 5.8.\n", "created_at": "2014-06-04T10:51:22Z" }, { "body": "This PR addresses should fix this NullPointerException:\nhttps://github.com/elasticsearch/elasticsearch/pull/6190\n\nThe fix will be included in 1.2.2\n", "created_at": "2014-06-20T20:26:23Z" }, { "body": "Hi,\nFor info - just installed ES for the first time a week ago (1.2.1 debian wheezy 64bit), and hit this issue. Upgraded to 1.2.2 yesterday, and it still reports the NullPointerException when requesting _cat/nodes. Installed using the debian packages. Also get on a newly installed 1.2.2 node (though joined to the same cluster).\nCheers,\nMatthew\n", "created_at": "2014-07-11T23:06:20Z" }, { "body": "Any stack trace in nodes logs?\n", "created_at": "2014-07-12T09:05:02Z" }, { "body": "No, nothing in node logs. Is there any way to increase debugging, maybe? I wondered if the package hadn't upgraded correctly, but it's all reporting 1.2.2 (logs say version[1.2.2], pid[10140], build[9902f08/2014-07-09T12:02:32Z]).\n", "created_at": "2014-07-12T14:26:31Z" }, { "body": "I can confirm this bug is still present in 1.3.0, even after upgrading all nodes:\n\n```\n$ curl 0:9200\n```\n\n``` json\n{\n \"status\" : 200,\n \"name\" : \"mycluster\",\n \"version\" : {\n \"number\" : \"1.3.0\",\n \"build_hash\" : \"1265b1454eee7725a6918f57415c480028700fb4\",\n \"build_timestamp\" : \"2014-07-23T13:46:36Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"4.9\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n```\n\n```\n$ curl 0:9200/_cat/nodes\n```\n\n``` json\n{\"error\":\"NullPointerException[null]\",\"status\":500}\n```\n\nNothing in the logfile.\n", "created_at": "2014-07-25T06:10:41Z" }, { "body": "@faxm0dem do you have multiple nodes running? if so did you look in the logs for all the nodes? 
\n\ncould you send the output of:\n\n```\ncurl 0:9200/_nodes\ncurl 0:9200/_nodes/stats\n```\n\nthanks\n", "created_at": "2014-07-25T06:19:34Z" }, { "body": "Yes, two nodes, nothing on either side.\nHere's the output:\n\n```\ncurl 0:9200/_nodes\n```\n\n``` json\n{\n \"cluster_name\" : \"telecom\",\n \"nodes\" : {\n \"JUOZAr-mT6e8e9nmwWH9ww\" : {\n \"name\" : \"node07-telecom\",\n \"transport_address\" : \"inet[/10.0.104.214:9300]\",\n \"host\" : \"node07\",\n \"ip\" : \"10.0.104.214\",\n \"version\" : \"1.3.0\",\n \"build\" : \"1265b14\",\n \"http_address\" : \"inet[/10.0.104.214:9200]\",\n \"settings\" : {\n \"node\" : {\n \"name\" : \"node07-telecom\"\n },\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"8\"\n },\n \"bootstrap\" : {\n \"mlockall\" : \"true\"\n },\n \"name\" : \"node07-telecom\",\n \"pidfile\" : \"/var/run/elasticsearch/elasticsearch-telecom.pid\",\n \"path\" : {\n \"data\" : \"/var/lib/elasticsearch/telecom\",\n \"work\" : \"/tmp/elasticsearch\",\n \"home\" : \"/usr/share/elasticsearch\",\n \"conf\" : \"/etc/elasticsearch/telecom\",\n \"logs\" : \"/var/log/elasticsearch/telecom\"\n },\n \"cluster\" : {\n \"name\" : \"telecom\"\n },\n \"indices\" : {\n \"memory\" : {\n \"index_buffer_size\" : \"30%\"\n }\n },\n \"discovery\" : {\n \"zen\" : {\n \"minimum_master_nodes\" : \"1\",\n \"ping\" : {\n \"unicast\" : {\n \"hosts\" : [ \"node07\", \"node38\" ]\n },\n \"multicast\" : {\n \"enabled\" : \"false\"\n },\n \"timeout\" : \"30s\"\n }\n }\n }\n },\n \"os\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"available_processors\" : 16,\n \"cpu\" : {\n \"vendor\" : \"Intel\",\n \"model\" : \"Xeon\",\n \"mhz\" : 2527,\n \"total_cores\" : 16,\n \"total_sockets\" : 1,\n \"cores_per_socket\" : 16,\n \"cache_size_in_bytes\" : 8192\n },\n \"mem\" : {\n \"total_in_bytes\" : 25185079296\n },\n \"swap\" : {\n \"total_in_bytes\" : 279650304\n }\n },\n \"process\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"id\" : 27129,\n \"max_file_descriptors\" : 65535,\n \"mlockall\" : false\n },\n \"jvm\" : {\n \"pid\" : 27129,\n \"version\" : \"1.7.0_65\",\n \"vm_name\" : \"OpenJDK 64-Bit Server VM\",\n \"vm_version\" : \"24.65-b04\",\n \"vm_vendor\" : \"Oracle Corporation\",\n \"start_time_in_millis\" : 1406268341800,\n \"mem\" : {\n \"heap_init_in_bytes\" : 17179869184,\n \"heap_max_in_bytes\" : 17066491904,\n \"non_heap_init_in_bytes\" : 24313856,\n \"non_heap_max_in_bytes\" : 224395264,\n \"direct_max_in_bytes\" : 17066491904\n },\n \"gc_collectors\" : [ \"ParNew\", \"ConcurrentMarkSweep\" ],\n \"memory_pools\" : [ \"Code Cache\", \"Par Eden Space\", \"Par Survivor Space\", \"CMS Old Gen\", \"CMS Perm Gen\" ]\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"type\" : \"cached\",\n \"keep_alive\" : \"30s\",\n \"queue_size\" : -1\n },\n \"index\" : {\n \"type\" : \"fixed\",\n \"min\" : 16,\n \"max\" : 16,\n \"queue_size\" : \"200\"\n },\n \"snapshot_data\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"bench\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"get\" : {\n \"type\" : \"fixed\",\n \"min\" : 16,\n \"max\" : 16,\n \"queue_size\" : \"1k\"\n },\n \"snapshot\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"merge\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"suggest\" : {\n 
\"type\" : \"fixed\",\n \"min\" : 16,\n \"max\" : 16,\n \"queue_size\" : \"1k\"\n },\n \"bulk\" : {\n \"type\" : \"fixed\",\n \"min\" : 16,\n \"max\" : 16,\n \"queue_size\" : \"50\"\n },\n \"optimize\" : {\n \"type\" : \"fixed\",\n \"min\" : 1,\n \"max\" : 1,\n \"queue_size\" : -1\n },\n \"warmer\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"flush\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"search\" : {\n \"type\" : \"fixed\",\n \"min\" : 48,\n \"max\" : 48,\n \"queue_size\" : \"1k\"\n },\n \"percolate\" : {\n \"type\" : \"fixed\",\n \"min\" : 16,\n \"max\" : 16,\n \"queue_size\" : \"1k\"\n },\n \"management\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"refresh\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 8,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n }\n },\n \"network\" : {\n \"refresh_interval_in_millis\" : 5000,\n \"primary_interface\" : {\n \"address\" : \"10.0.104.214\",\n \"name\" : \"eth0\",\n \"mac_address\" : \"\"\n }\n },\n \"transport\" : {\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0%0:9300]\",\n \"publish_address\" : \"inet[/10.0.104.214:9300]\"\n },\n \"http\" : {\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0%0:9200]\",\n \"publish_address\" : \"inet[/10.0.104.214:9200]\",\n \"max_content_length_in_bytes\" : 104857600\n },\n \"plugins\" : [ ]\n },\n \"JL38-jS9Sn67hQk2XxNZxw\" : {\n \"name\" : \"logstash-netflow\",\n \"transport_address\" : \"inet[/10.0.108.171:9303]\",\n \"host\" : \"node38\",\n \"ip\" : \"10.0.108.171\",\n \"version\" : \"1.1.1\",\n \"build\" : \"f1585f0\",\n \"attributes\" : {\n \"client\" : \"true\",\n \"data\" : \"false\"\n },\n \"settings\" : {\n \"path\" : {\n \"logs\" : \"/home/sysunix/logs\"\n },\n \"cluster\" : {\n \"name\" : \"telecom\"\n },\n \"node\" : {\n \"client\" : \"true\",\n \"name\" : \"logstash-netflow\"\n },\n \"discovery\" : {\n \"zen\" : {\n \"ping\" : {\n \"unicast\" : {\n \"hosts\" : \"node38:9300,node38:9301,node38:9302,node38:9303,node38:9304,node38:9305\"\n },\n \"multicast\" : {\n \"enabled\" : \"false\"\n }\n }\n }\n },\n \"http\" : {\n \"enabled\" : \"false\"\n },\n \"name\" : \"logstash-netflow\"\n },\n \"os\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"available_processors\" : 32\n },\n \"process\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"id\" : 8365,\n \"max_file_descriptors\" : 4096,\n \"mlockall\" : false\n },\n \"jvm\" : {\n \"pid\" : 8365,\n \"version\" : \"1.7.0_55\",\n \"vm_name\" : \"OpenJDK 64-Bit Server VM\",\n \"vm_version\" : \"24.51-b03\",\n \"vm_vendor\" : \"Oracle Corporation\",\n \"start_time_in_millis\" : 1402068863618,\n \"mem\" : {\n \"heap_init_in_bytes\" : 524288000,\n \"heap_max_in_bytes\" : 506855424,\n \"non_heap_init_in_bytes\" : 24313856,\n \"non_heap_max_in_bytes\" : 224395264,\n \"direct_max_in_bytes\" : 506855424\n },\n \"gc_collectors\" : [ \"ParNew\", \"ConcurrentMarkSweep\" ],\n \"memory_pools\" : [ \"Code Cache\", \"Par Eden Space\", \"Par Survivor Space\", \"CMS Old Gen\", \"CMS Perm Gen\" ]\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"type\" : \"cached\",\n \"keep_alive\" : \"30s\",\n \"queue_size\" : -1\n },\n \"index\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"200\"\n },\n \"get\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"snapshot\" : {\n 
\"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"merge\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"suggest\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"bulk\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"50\"\n },\n \"optimize\" : {\n \"type\" : \"fixed\",\n \"min\" : 1,\n \"max\" : 1,\n \"queue_size\" : -1\n },\n \"warmer\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"flush\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"search\" : {\n \"type\" : \"fixed\",\n \"min\" : 96,\n \"max\" : 96,\n \"queue_size\" : \"1k\"\n },\n \"percolate\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"management\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"refresh\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 10,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n }\n },\n \"network\" : {\n \"refresh_interval_in_millis\" : 5000,\n \"primary_interface\" : {\n \"address\" : \"\",\n \"name\" : \"\",\n \"mac_address\" : \"\"\n }\n },\n \"transport\" : {\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0%0:9303]\",\n \"publish_address\" : \"inet[/10.0.108.171:9303]\"\n },\n \"plugins\" : [ ]\n },\n \"rPZ9EsahRl-9Cs_AOucOJQ\" : {\n \"name\" : \"logstash-netflow\",\n \"transport_address\" : \"inet[/10.0.108.171:9302]\",\n \"host\" : \"node38\",\n \"ip\" : \"10.0.108.171\",\n \"version\" : \"1.1.1\",\n \"build\" : \"f1585f0\",\n \"attributes\" : {\n \"client\" : \"true\",\n \"data\" : \"false\"\n },\n \"settings\" : {\n \"path\" : {\n \"logs\" : \"/home/sysunix/logs\"\n },\n \"cluster\" : {\n \"name\" : \"telecom\"\n },\n \"node\" : {\n \"client\" : \"true\",\n \"name\" : \"logstash-netflow\"\n },\n \"discovery\" : {\n \"zen\" : {\n \"ping\" : {\n \"unicast\" : {\n \"hosts\" : \"node38:9300,node38:9301,node38:9302,node38:9303,node38:9304,node38:9305\"\n },\n \"multicast\" : {\n \"enabled\" : \"false\"\n }\n }\n }\n },\n \"http\" : {\n \"enabled\" : \"false\"\n },\n \"name\" : \"logstash-netflow\"\n },\n \"os\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"available_processors\" : 32\n },\n \"process\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"id\" : 8365,\n \"max_file_descriptors\" : 4096,\n \"mlockall\" : false\n },\n \"jvm\" : {\n \"pid\" : 8365,\n \"version\" : \"1.7.0_55\",\n \"vm_name\" : \"OpenJDK 64-Bit Server VM\",\n \"vm_version\" : \"24.51-b03\",\n \"vm_vendor\" : \"Oracle Corporation\",\n \"start_time_in_millis\" : 1402068863618,\n \"mem\" : {\n \"heap_init_in_bytes\" : 524288000,\n \"heap_max_in_bytes\" : 506855424,\n \"non_heap_init_in_bytes\" : 24313856,\n \"non_heap_max_in_bytes\" : 224395264,\n \"direct_max_in_bytes\" : 506855424\n },\n \"gc_collectors\" : [ \"ParNew\", \"ConcurrentMarkSweep\" ],\n \"memory_pools\" : [ \"Code Cache\", \"Par Eden Space\", \"Par Survivor Space\", \"CMS Old Gen\", \"CMS Perm Gen\" ]\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"type\" : \"cached\",\n \"keep_alive\" : \"30s\",\n \"queue_size\" : -1\n },\n \"index\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"200\"\n },\n \"get\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n 
\"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"snapshot\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"merge\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"suggest\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"bulk\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"50\"\n },\n \"optimize\" : {\n \"type\" : \"fixed\",\n \"min\" : 1,\n \"max\" : 1,\n \"queue_size\" : -1\n },\n \"warmer\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"flush\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"search\" : {\n \"type\" : \"fixed\",\n \"min\" : 96,\n \"max\" : 96,\n \"queue_size\" : \"1k\"\n },\n \"percolate\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"management\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"refresh\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 10,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n }\n },\n \"network\" : {\n \"refresh_interval_in_millis\" : 5000,\n \"primary_interface\" : {\n \"address\" : \"\",\n \"name\" : \"\",\n \"mac_address\" : \"\"\n }\n },\n \"transport\" : {\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0%0:9302]\",\n \"publish_address\" : \"inet[/10.0.108.171:9302]\"\n },\n \"plugins\" : [ ]\n },\n \"myfhcxbYRCaqpJ_yAapmHw\" : {\n \"name\" : \"node38-telecom\",\n \"transport_address\" : \"inet[/10.0.108.171:9300]\",\n \"host\" : \"node38\",\n \"ip\" : \"10.0.108.171\",\n \"version\" : \"1.3.0\",\n \"build\" : \"1265b14\",\n \"http_address\" : \"inet[/10.0.108.171:9200]\",\n \"settings\" : {\n \"node\" : {\n \"name\" : \"node38-telecom\"\n },\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"8\"\n },\n \"bootstrap\" : {\n \"mlockall\" : \"true\"\n },\n \"name\" : \"node38-telecom\",\n \"pidfile\" : \"/var/run/elasticsearch/elasticsearch-telecom.pid\",\n \"path\" : {\n \"data\" : \"/var/lib/elasticsearch/telecom\",\n \"work\" : \"/tmp/elasticsearch\",\n \"home\" : \"/usr/share/elasticsearch\",\n \"conf\" : \"/etc/elasticsearch/telecom\",\n \"logs\" : \"/var/log/elasticsearch/telecom\"\n },\n \"cluster\" : {\n \"name\" : \"telecom\"\n },\n \"indices\" : {\n \"memory\" : {\n \"index_buffer_size\" : \"30%\"\n }\n },\n \"discovery\" : {\n \"zen\" : {\n \"minimum_master_nodes\" : \"1\",\n \"ping\" : {\n \"unicast\" : {\n \"hosts\" : [ \"node07\", \"node38\" ]\n },\n \"multicast\" : {\n \"enabled\" : \"false\"\n },\n \"timeout\" : \"30s\"\n }\n }\n }\n },\n \"os\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"available_processors\" : 32,\n \"cpu\" : {\n \"vendor\" : \"Intel\",\n \"model\" : \"Xeon\",\n \"mhz\" : 2000,\n \"total_cores\" : 32,\n \"total_sockets\" : 1,\n \"cores_per_socket\" : 32,\n \"cache_size_in_bytes\" : 20480\n },\n \"mem\" : {\n \"total_in_bytes\" : 33617100800\n },\n \"swap\" : {\n \"total_in_bytes\" : 17459511296\n }\n },\n \"process\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"id\" : 12731,\n \"max_file_descriptors\" : 65535,\n \"mlockall\" : false\n },\n \"jvm\" : {\n \"pid\" : 12731,\n \"version\" : \"1.7.0_65\",\n \"vm_name\" : \"OpenJDK 64-Bit Server VM\",\n \"vm_version\" : \"24.65-b04\",\n 
\"vm_vendor\" : \"Oracle Corporation\",\n \"start_time_in_millis\" : 1406268268972,\n \"mem\" : {\n \"heap_init_in_bytes\" : 17179869184,\n \"heap_max_in_bytes\" : 16979263488,\n \"non_heap_init_in_bytes\" : 24313856,\n \"non_heap_max_in_bytes\" : 224395264,\n \"direct_max_in_bytes\" : 16979263488\n },\n \"gc_collectors\" : [ \"ParNew\", \"ConcurrentMarkSweep\" ],\n \"memory_pools\" : [ \"Code Cache\", \"Par Eden Space\", \"Par Survivor Space\", \"CMS Old Gen\", \"CMS Perm Gen\" ]\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"type\" : \"cached\",\n \"keep_alive\" : \"30s\",\n \"queue_size\" : -1\n },\n \"index\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"200\"\n },\n \"snapshot_data\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"bench\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"get\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"snapshot\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"merge\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"suggest\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"bulk\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"50\"\n },\n \"optimize\" : {\n \"type\" : \"fixed\",\n \"min\" : 1,\n \"max\" : 1,\n \"queue_size\" : -1\n },\n \"warmer\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"flush\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"search\" : {\n \"type\" : \"fixed\",\n \"min\" : 96,\n \"max\" : 96,\n \"queue_size\" : \"1k\"\n },\n \"percolate\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"management\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"refresh\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 10,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n }\n },\n \"network\" : {\n \"refresh_interval_in_millis\" : 5000,\n \"primary_interface\" : {\n \"address\" : \"10.0.108.171\",\n \"name\" : \"eth0\",\n \"mac_address\" : \"\"\n }\n },\n \"transport\" : {\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0:9300]\",\n \"publish_address\" : \"inet[/10.0.108.171:9300]\"\n },\n \"http\" : {\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0:9200]\",\n \"publish_address\" : \"inet[/10.0.108.171:9200]\",\n \"max_content_length_in_bytes\" : 104857600\n },\n \"plugins\" : [ ]\n },\n \"9FwiLpriR12926UVB-YuVw\" : {\n \"name\" : \"logstash-netflow\",\n \"transport_address\" : \"inet[/10.0.108.171:9301]\",\n \"host\" : \"node38\",\n \"ip\" : \"10.0.108.171\",\n \"version\" : \"1.1.1\",\n \"build\" : \"f1585f0\",\n \"attributes\" : {\n \"client\" : \"true\",\n \"data\" : \"false\"\n },\n \"settings\" : {\n \"path\" : {\n \"logs\" : \"/home/sysunix/logs\"\n },\n \"cluster\" : {\n \"name\" : \"telecom\"\n },\n \"node\" : {\n \"client\" : \"true\",\n \"name\" : \"logstash-netflow\"\n },\n \"discovery\" : {\n \"zen\" : {\n \"ping\" : {\n \"unicast\" : {\n \"hosts\" : \"node38:9300,node38:9301,node38:9302,node38:9303,node38:9304,node38:9305\"\n },\n 
\"multicast\" : {\n \"enabled\" : \"false\"\n }\n }\n }\n },\n \"http\" : {\n \"enabled\" : \"false\"\n },\n \"name\" : \"logstash-netflow\"\n },\n \"os\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"available_processors\" : 32\n },\n \"process\" : {\n \"refresh_interval_in_millis\" : 1000,\n \"id\" : 8365,\n \"max_file_descriptors\" : 4096,\n \"mlockall\" : false\n },\n \"jvm\" : {\n \"pid\" : 8365,\n \"version\" : \"1.7.0_55\",\n \"vm_name\" : \"OpenJDK 64-Bit Server VM\",\n \"vm_version\" : \"24.51-b03\",\n \"vm_vendor\" : \"Oracle Corporation\",\n \"start_time_in_millis\" : 1402068863618,\n \"mem\" : {\n \"heap_init_in_bytes\" : 524288000,\n \"heap_max_in_bytes\" : 506855424,\n \"non_heap_init_in_bytes\" : 24313856,\n \"non_heap_max_in_bytes\" : 224395264,\n \"direct_max_in_bytes\" : 506855424\n },\n \"gc_collectors\" : [ \"ParNew\", \"ConcurrentMarkSweep\" ],\n \"memory_pools\" : [ \"Code Cache\", \"Par Eden Space\", \"Par Survivor Space\", \"CMS Old Gen\", \"CMS Perm Gen\" ]\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"type\" : \"cached\",\n \"keep_alive\" : \"30s\",\n \"queue_size\" : -1\n },\n \"index\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"200\"\n },\n \"get\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"snapshot\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"merge\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"suggest\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"bulk\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"50\"\n },\n \"optimize\" : {\n \"type\" : \"fixed\",\n \"min\" : 1,\n \"max\" : 1,\n \"queue_size\" : -1\n },\n \"warmer\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"flush\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"search\" : {\n \"type\" : \"fixed\",\n \"min\" : 96,\n \"max\" : 96,\n \"queue_size\" : \"1k\"\n },\n \"percolate\" : {\n \"type\" : \"fixed\",\n \"min\" : 32,\n \"max\" : 32,\n \"queue_size\" : \"1k\"\n },\n \"management\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 5,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n },\n \"refresh\" : {\n \"type\" : \"scaling\",\n \"min\" : 1,\n \"max\" : 10,\n \"keep_alive\" : \"5m\",\n \"queue_size\" : -1\n }\n },\n \"network\" : {\n \"refresh_interval_in_millis\" : 5000,\n \"primary_interface\" : {\n \"address\" : \"\",\n \"name\" : \"\",\n \"mac_address\" : \"\"\n }\n },\n \"transport\" : {\n \"bound_address\" : \"inet[/0:0:0:0:0:0:0:0%0:9301]\",\n \"publish_address\" : \"inet[/10.0.108.171:9301]\"\n },\n \"plugins\" : [ ]\n }\n }\n}\n```\n\n```\ncurl 0:9200/_nodes/stats\n```\n\n``` json\n{\n \"cluster_name\" : \"telecom\",\n \"nodes\" : {\n \"JUOZAr-mT6e8e9nmwWH9ww\" : {\n \"timestamp\" : 1406277844538,\n \"name\" : \"node07-telecom\",\n \"transport_address\" : \"inet[/10.0.104.214:9300]\",\n \"host\" : \"node07\",\n \"ip\" : [ \"inet[/10.0.104.214:9300]\", \"NONE\" ],\n \"indices\" : {\n \"docs\" : {\n \"count\" : 0,\n \"deleted\" : 0\n },\n \"store\" : {\n \"size_in_bytes\" : 0,\n \"throttle_time_in_millis\" : 0\n },\n \"indexing\" : {\n \"index_total\" : 0,\n \"index_time_in_millis\" : 0,\n \"index_current\" : 0,\n 
\"delete_total\" : 0,\n \"delete_time_in_millis\" : 0,\n \"delete_current\" : 0\n },\n \"get\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"exists_total\" : 0,\n \"exists_time_in_millis\" : 0,\n \"missing_total\" : 0,\n \"missing_time_in_millis\" : 0,\n \"current\" : 0\n },\n \"search\" : {\n \"open_contexts\" : 0,\n \"query_total\" : 0,\n \"query_time_in_millis\" : 0,\n \"query_current\" : 0,\n \"fetch_total\" : 0,\n \"fetch_time_in_millis\" : 0,\n \"fetch_current\" : 0\n },\n \"merges\" : {\n \"current\" : 0,\n \"current_docs\" : 0,\n \"current_size_in_bytes\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0,\n \"total_docs\" : 0,\n \"total_size_in_bytes\" : 0\n },\n \"refresh\" : {\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"flush\" : {\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"warmer\" : {\n \"current\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"filter_cache\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"id_cache\" : {\n \"memory_size_in_bytes\" : 0\n },\n \"fielddata\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"percolate\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"current\" : 0,\n \"memory_size_in_bytes\" : -1,\n \"memory_size\" : \"-1b\",\n \"queries\" : 0\n },\n \"completion\" : {\n \"size_in_bytes\" : 0\n },\n \"segments\" : {\n \"count\" : 0,\n \"memory_in_bytes\" : 0,\n \"index_writer_memory_in_bytes\" : 0,\n \"version_map_memory_in_bytes\" : 0\n },\n \"translog\" : {\n \"operations\" : 0,\n \"size_in_bytes\" : 0\n },\n \"suggest\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"current\" : 0\n }\n },\n \"os\" : {\n \"timestamp\" : 1406277844539,\n \"uptime_in_millis\" : 2050688,\n \"load_average\" : [ 0.0, 0.0, 0.0 ],\n \"cpu\" : {\n \"sys\" : 0,\n \"user\" : 0,\n \"idle\" : 99,\n \"usage\" : 0,\n \"stolen\" : 0\n },\n \"mem\" : {\n \"free_in_bytes\" : 13301526528,\n \"used_in_bytes\" : 11883552768,\n \"free_percent\" : 94,\n \"used_percent\" : 5,\n \"actual_free_in_bytes\" : 23725084672,\n \"actual_used_in_bytes\" : 1459994624\n },\n \"swap\" : {\n \"used_in_bytes\" : 26796032,\n \"free_in_bytes\" : 252854272\n }\n },\n \"process\" : {\n \"timestamp\" : 1406277844540,\n \"open_file_descriptors\" : 464,\n \"cpu\" : {\n \"percent\" : 0,\n \"sys_in_millis\" : 17990,\n \"user_in_millis\" : 33500,\n \"total_in_millis\" : 51490\n },\n \"mem\" : {\n \"resident_in_bytes\" : 616067072,\n \"share_in_bytes\" : 13942784,\n \"total_virtual_in_bytes\" : 26445426688\n }\n },\n \"jvm\" : {\n \"timestamp\" : 1406277844540,\n \"uptime_in_millis\" : 9502740,\n \"mem\" : {\n \"heap_used_in_bytes\" : 575853064,\n \"heap_used_percent\" : 3,\n \"heap_committed_in_bytes\" : 17066491904,\n \"heap_max_in_bytes\" : 17066491904,\n \"non_heap_used_in_bytes\" : 31055048,\n \"non_heap_committed_in_bytes\" : 32374784,\n \"pools\" : {\n \"young\" : {\n \"used_in_bytes\" : 546352176,\n \"max_in_bytes\" : 907345920,\n \"peak_used_in_bytes\" : 907345920,\n \"peak_max_in_bytes\" : 907345920\n },\n \"survivor\" : {\n \"used_in_bytes\" : 29500888,\n \"max_in_bytes\" : 113377280,\n \"peak_used_in_bytes\" : 29500888,\n \"peak_max_in_bytes\" : 113377280\n },\n \"old\" : {\n \"used_in_bytes\" : 0,\n \"max_in_bytes\" : 16045768704,\n \"peak_used_in_bytes\" : 0,\n \"peak_max_in_bytes\" : 16045768704\n }\n }\n },\n \"threads\" : {\n \"count\" : 112,\n \"peak_count\" : 115\n },\n \"gc\" : {\n \"collectors\" : {\n \"young\" : {\n \"collection_count\" : 1,\n \"collection_time_in_millis\" : 58\n },\n \"old\" : {\n 
\"collection_count\" : 0,\n \"collection_time_in_millis\" : 0\n }\n }\n },\n \"buffer_pools\" : {\n \"direct\" : {\n \"count\" : 179,\n \"used_in_bytes\" : 49807360,\n \"total_capacity_in_bytes\" : 49807360\n },\n \"mapped\" : {\n \"count\" : 0,\n \"used_in_bytes\" : 0,\n \"total_capacity_in_bytes\" : 0\n }\n }\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 4,\n \"completed\" : 972\n },\n \"index\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"snapshot_data\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"bench\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"get\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"snapshot\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"merge\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"suggest\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"bulk\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"optimize\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"warmer\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"flush\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"search\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"percolate\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"management\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 1,\n \"rejected\" : 0,\n \"largest\" : 1,\n \"completed\" : 653\n },\n \"refresh\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n }\n },\n \"network\" : {\n \"tcp\" : {\n \"active_opens\" : 421209,\n \"passive_opens\" : 138326,\n \"curr_estab\" : 132,\n \"in_segs\" : 110881241,\n \"out_segs\" : 114912697,\n \"retrans_segs\" : 105,\n \"estab_resets\" : 253951,\n \"attempt_fails\" : 4321,\n \"in_errs\" : 0,\n \"out_rsts\" : 68724493\n }\n },\n \"fs\" : {\n \"timestamp\" : 1406277844541,\n \"total\" : {\n \"total_in_bytes\" : 1648462135296,\n \"free_in_bytes\" : 1632006266880,\n \"available_in_bytes\" : 1632006266880,\n \"disk_reads\" : 11399,\n \"disk_writes\" : 3156978,\n \"disk_io_op\" : 3168377,\n \"disk_read_size_in_bytes\" : 515517952,\n \"disk_write_size_in_bytes\" : 302869040640,\n \"disk_io_size_in_bytes\" : 303384558592,\n \"disk_queue\" : \"1.5E-4\",\n \"disk_service_time\" : \"0.1\"\n },\n \"data\" : [ {\n \"path\" : \"/var/lib/elasticsearch/telecom/telecom/nodes/0\",\n \"mount\" : \"/var/lib/elasticsearch\",\n \"dev\" : \"/dev/mapper/rootvg-elasticsearch\",\n \"total_in_bytes\" : 1648462135296,\n \"free_in_bytes\" : 1632006266880,\n 
\"available_in_bytes\" : 1632006266880,\n \"disk_reads\" : 11399,\n \"disk_writes\" : 3156978,\n \"disk_io_op\" : 3168377,\n \"disk_read_size_in_bytes\" : 515517952,\n \"disk_write_size_in_bytes\" : 302869040640,\n \"disk_io_size_in_bytes\" : 303384558592,\n \"disk_queue\" : \"1.5E-4\",\n \"disk_service_time\" : \"0.1\"\n } ]\n },\n \"transport\" : {\n \"server_open\" : 65,\n \"rx_count\" : 19014,\n \"rx_size_in_bytes\" : 934110,\n \"tx_count\" : 19013,\n \"tx_size_in_bytes\" : 1151504\n },\n \"http\" : {\n \"current_open\" : 0,\n \"total_opened\" : 643\n },\n \"fielddata_breaker\" : {\n \"maximum_size_in_bytes\" : 10239895142,\n \"maximum_size\" : \"9.5gb\",\n \"estimated_size_in_bytes\" : 0,\n \"estimated_size\" : \"0b\",\n \"overhead\" : 1.03,\n \"tripped\" : 0\n }\n },\n \"JL38-jS9Sn67hQk2XxNZxw\" : {\n \"timestamp\" : 1406277844542,\n \"name\" : \"logstash-netflow\",\n \"transport_address\" : \"inet[/10.0.108.171:9303]\",\n \"host\" : \"node38\",\n \"ip\" : [ \"inet[/10.0.108.171:9303]\", \"NONE\" ],\n \"attributes\" : {\n \"client\" : \"true\",\n \"data\" : \"false\"\n },\n \"indices\" : {\n \"docs\" : {\n \"count\" : 0,\n \"deleted\" : 0\n },\n \"store\" : {\n \"size_in_bytes\" : 0,\n \"throttle_time_in_millis\" : 0\n },\n \"indexing\" : {\n \"index_total\" : 0,\n \"index_time_in_millis\" : 0,\n \"index_current\" : 0,\n \"delete_total\" : 0,\n \"delete_time_in_millis\" : 0,\n \"delete_current\" : 0\n },\n \"get\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"exists_total\" : 0,\n \"exists_time_in_millis\" : 0,\n \"missing_total\" : 0,\n \"missing_time_in_millis\" : 0,\n \"current\" : 0\n },\n \"search\" : {\n \"open_contexts\" : 0,\n \"query_total\" : 0,\n \"query_time_in_millis\" : 0,\n \"query_current\" : 0,\n \"fetch_total\" : 0,\n \"fetch_time_in_millis\" : 0,\n \"fetch_current\" : 0\n },\n \"merges\" : {\n \"current\" : 0,\n \"current_docs\" : 0,\n \"current_size_in_bytes\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0,\n \"total_docs\" : 0,\n \"total_size_in_bytes\" : 0\n },\n \"refresh\" : {\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"flush\" : {\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"warmer\" : {\n \"current\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"filter_cache\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"id_cache\" : {\n \"memory_size_in_bytes\" : 0\n },\n \"fielddata\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"percolate\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"current\" : 0,\n \"memory_size_in_bytes\" : -1,\n \"memory_size\" : \"-1b\",\n \"queries\" : 0\n },\n \"completion\" : {\n \"size_in_bytes\" : 0\n },\n \"segments\" : {\n \"count\" : 0,\n \"memory_in_bytes\" : 0,\n \"index_writer_memory_in_bytes\" : 0,\n \"version_map_memory_in_bytes\" : 0\n },\n \"translog\" : {\n \"operations\" : 0,\n \"size_in_bytes\" : 0\n }\n },\n \"os\" : {\n \"timestamp\" : 1406277844542\n },\n \"process\" : {\n \"timestamp\" : 1406277844542,\n \"open_file_descriptors\" : 1377\n },\n \"jvm\" : {\n \"timestamp\" : 1406277844544,\n \"uptime_in_millis\" : 4208980926,\n \"mem\" : {\n \"heap_used_in_bytes\" : 226119440,\n \"heap_used_percent\" : 44,\n \"heap_committed_in_bytes\" : 506855424,\n \"heap_max_in_bytes\" : 506855424,\n \"non_heap_used_in_bytes\" : 68490160,\n \"non_heap_committed_in_bytes\" : 107167744,\n \"pools\" : {\n \"young\" : {\n \"used_in_bytes\" : 46953832,\n \"max_in_bytes\" : 139853824,\n \"peak_used_in_bytes\" : 139853824,\n \"peak_max_in_bytes\" : 
139853824\n },\n \"survivor\" : {\n \"used_in_bytes\" : 110032,\n \"max_in_bytes\" : 17432576,\n \"peak_used_in_bytes\" : 17432576,\n \"peak_max_in_bytes\" : 17432576\n },\n \"old\" : {\n \"used_in_bytes\" : 179055576,\n \"max_in_bytes\" : 349569024,\n \"peak_used_in_bytes\" : 339622256,\n \"peak_max_in_bytes\" : 349569024\n }\n }\n },\n \"threads\" : {\n \"count\" : 428,\n \"peak_count\" : 442\n },\n \"gc\" : {\n \"collectors\" : {\n \"young\" : {\n \"collection_count\" : 11840,\n \"collection_time_in_millis\" : 67781\n },\n \"old\" : {\n \"collection_count\" : 14,\n \"collection_time_in_millis\" : 203\n }\n }\n },\n \"buffer_pools\" : {\n \"direct\" : {\n \"count\" : 723,\n \"used_in_bytes\" : 199419308,\n \"total_capacity_in_bytes\" : 199419308\n },\n \"mapped\" : {\n \"count\" : 0,\n \"used_in_bytes\" : 0,\n \"total_capacity_in_bytes\" : 0\n }\n }\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 4,\n \"completed\" : 420988\n },\n \"index\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"get\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"snapshot\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"merge\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"suggest\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"bulk\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"optimize\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"warmer\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"flush\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"search\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"percolate\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"management\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 1,\n \"rejected\" : 0,\n \"largest\" : 2,\n \"completed\" : 766\n },\n \"refresh\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n }\n },\n \"network\" : { },\n \"fs\" : {\n \"timestamp\" : 1406277844544,\n \"total\" : { },\n \"data\" : [ ]\n },\n \"transport\" : {\n \"server_open\" : 26,\n \"rx_count\" : 9160629,\n \"rx_size_in_bytes\" : 2570548643,\n \"tx_count\" : 9160591,\n \"tx_size_in_bytes\" : 1643358255\n },\n \"fielddata_breaker\" : {\n \"maximum_size_in_bytes\" : 405484339,\n \"maximum_size\" : \"386.6mb\",\n \"estimated_size_in_bytes\" : 0,\n \"estimated_size\" : \"0b\",\n \"overhead\" : 1.03,\n \"tripped\" : -1\n }\n },\n \"rPZ9EsahRl-9Cs_AOucOJQ\" : {\n \"timestamp\" : 1406277844542,\n \"name\" : \"logstash-netflow\",\n \"transport_address\" : \"inet[/10.0.108.171:9302]\",\n \"host\" : \"node38\",\n \"ip\" : [ \"inet[/10.0.108.171:9302]\", \"NONE\" ],\n \"attributes\" 
: {\n \"client\" : \"true\",\n \"data\" : \"false\"\n },\n \"indices\" : {\n \"docs\" : {\n \"count\" : 0,\n \"deleted\" : 0\n },\n \"store\" : {\n \"size_in_bytes\" : 0,\n \"throttle_time_in_millis\" : 0\n },\n \"indexing\" : {\n \"index_total\" : 0,\n \"index_time_in_millis\" : 0,\n \"index_current\" : 0,\n \"delete_total\" : 0,\n \"delete_time_in_millis\" : 0,\n \"delete_current\" : 0\n },\n \"get\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"exists_total\" : 0,\n \"exists_time_in_millis\" : 0,\n \"missing_total\" : 0,\n \"missing_time_in_millis\" : 0,\n \"current\" : 0\n },\n \"search\" : {\n \"open_contexts\" : 0,\n \"query_total\" : 0,\n \"query_time_in_millis\" : 0,\n \"query_current\" : 0,\n \"fetch_total\" : 0,\n \"fetch_time_in_millis\" : 0,\n \"fetch_current\" : 0\n },\n \"merges\" : {\n \"current\" : 0,\n \"current_docs\" : 0,\n \"current_size_in_bytes\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0,\n \"total_docs\" : 0,\n \"total_size_in_bytes\" : 0\n },\n \"refresh\" : {\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"flush\" : {\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"warmer\" : {\n \"current\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"filter_cache\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"id_cache\" : {\n \"memory_size_in_bytes\" : 0\n },\n \"fielddata\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"percolate\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"current\" : 0,\n \"memory_size_in_bytes\" : -1,\n \"memory_size\" : \"-1b\",\n \"queries\" : 0\n },\n \"completion\" : {\n \"size_in_bytes\" : 0\n },\n \"segments\" : {\n \"count\" : 0,\n \"memory_in_bytes\" : 0,\n \"index_writer_memory_in_bytes\" : 0,\n \"version_map_memory_in_bytes\" : 0\n },\n \"translog\" : {\n \"operations\" : 0,\n \"size_in_bytes\" : 0\n }\n },\n \"os\" : {\n \"timestamp\" : 1406277844543\n },\n \"process\" : {\n \"timestamp\" : 1406277844543,\n \"open_file_descriptors\" : 1375\n },\n \"jvm\" : {\n \"timestamp\" : 1406277844544,\n \"uptime_in_millis\" : 4208980926,\n \"mem\" : {\n \"heap_used_in_bytes\" : 226123632,\n \"heap_used_percent\" : 44,\n \"heap_committed_in_bytes\" : 506855424,\n \"heap_max_in_bytes\" : 506855424,\n \"non_heap_used_in_bytes\" : 68490160,\n \"non_heap_committed_in_bytes\" : 107167744,\n \"pools\" : {\n \"young\" : {\n \"used_in_bytes\" : 46960112,\n \"max_in_bytes\" : 139853824,\n \"peak_used_in_bytes\" : 139853824,\n \"peak_max_in_bytes\" : 139853824\n },\n \"survivor\" : {\n \"used_in_bytes\" : 110032,\n \"max_in_bytes\" : 17432576,\n \"peak_used_in_bytes\" : 17432576,\n \"peak_max_in_bytes\" : 17432576\n },\n \"old\" : {\n \"used_in_bytes\" : 179055576,\n \"max_in_bytes\" : 349569024,\n \"peak_used_in_bytes\" : 339622256,\n \"peak_max_in_bytes\" : 349569024\n }\n }\n },\n \"threads\" : {\n \"count\" : 428,\n \"peak_count\" : 442\n },\n \"gc\" : {\n \"collectors\" : {\n \"young\" : {\n \"collection_count\" : 11840,\n \"collection_time_in_millis\" : 67781\n },\n \"old\" : {\n \"collection_count\" : 14,\n \"collection_time_in_millis\" : 203\n }\n }\n },\n \"buffer_pools\" : {\n \"direct\" : {\n \"count\" : 723,\n \"used_in_bytes\" : 199419308,\n \"total_capacity_in_bytes\" : 199419308\n },\n \"mapped\" : {\n \"count\" : 0,\n \"used_in_bytes\" : 0,\n \"total_capacity_in_bytes\" : 0\n }\n }\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 3,\n \"completed\" : 420986\n },\n \"index\" 
: {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"get\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"snapshot\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"merge\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"suggest\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"bulk\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"optimize\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"warmer\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"flush\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"search\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"percolate\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"management\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 1,\n \"rejected\" : 0,\n \"largest\" : 2,\n \"completed\" : 766\n },\n \"refresh\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n }\n },\n \"network\" : { },\n \"fs\" : {\n \"timestamp\" : 1406277844544,\n \"total\" : { },\n \"data\" : [ ]\n },\n \"transport\" : {\n \"server_open\" : 26,\n \"rx_count\" : 9160652,\n \"rx_size_in_bytes\" : 2567605017,\n \"tx_count\" : 9160605,\n \"tx_size_in_bytes\" : 1635812229\n },\n \"fielddata_breaker\" : {\n \"maximum_size_in_bytes\" : 405484339,\n \"maximum_size\" : \"386.6mb\",\n \"estimated_size_in_bytes\" : 0,\n \"estimated_size\" : \"0b\",\n \"overhead\" : 1.03,\n \"tripped\" : -1\n }\n },\n \"myfhcxbYRCaqpJ_yAapmHw\" : {\n \"timestamp\" : 1406277844542,\n \"name\" : \"node38-telecom\",\n \"transport_address\" : \"inet[/10.0.108.171:9300]\",\n \"host\" : \"node38\",\n \"ip\" : [ \"inet[/10.0.108.171:9300]\", \"NONE\" ],\n \"indices\" : {\n \"docs\" : {\n \"count\" : 33434392,\n \"deleted\" : 0\n },\n \"store\" : {\n \"size_in_bytes\" : 16439595838,\n \"throttle_time_in_millis\" : 0\n },\n \"indexing\" : {\n \"index_total\" : 0,\n \"index_time_in_millis\" : 0,\n \"index_current\" : 0,\n \"delete_total\" : 0,\n \"delete_time_in_millis\" : 0,\n \"delete_current\" : 0\n },\n \"get\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"exists_total\" : 0,\n \"exists_time_in_millis\" : 0,\n \"missing_total\" : 0,\n \"missing_time_in_millis\" : 0,\n \"current\" : 0\n },\n \"search\" : {\n \"open_contexts\" : 0,\n \"query_total\" : 0,\n \"query_time_in_millis\" : 0,\n \"query_current\" : 0,\n \"fetch_total\" : 0,\n \"fetch_time_in_millis\" : 0,\n \"fetch_current\" : 0\n },\n \"merges\" : {\n \"current\" : 0,\n \"current_docs\" : 0,\n \"current_size_in_bytes\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0,\n \"total_docs\" : 0,\n \"total_size_in_bytes\" : 0\n },\n \"refresh\" : {\n \"total\" : 26,\n \"total_time_in_millis\" : 0\n },\n \"flush\" : {\n \"total\" : 0,\n 
\"total_time_in_millis\" : 0\n },\n \"warmer\" : {\n \"current\" : 0,\n \"total\" : 52,\n \"total_time_in_millis\" : 2\n },\n \"filter_cache\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"id_cache\" : {\n \"memory_size_in_bytes\" : 0\n },\n \"fielddata\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"percolate\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"current\" : 0,\n \"memory_size_in_bytes\" : -1,\n \"memory_size\" : \"-1b\",\n \"queries\" : 0\n },\n \"completion\" : {\n \"size_in_bytes\" : 0\n },\n \"segments\" : {\n \"count\" : 331,\n \"memory_in_bytes\" : 380822712,\n \"index_writer_memory_in_bytes\" : 0,\n \"version_map_memory_in_bytes\" : 0\n },\n \"translog\" : {\n \"operations\" : 0,\n \"size_in_bytes\" : 0\n },\n \"suggest\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"current\" : 0\n }\n },\n \"os\" : {\n \"timestamp\" : 1406277844605,\n \"uptime_in_millis\" : 11125608,\n \"load_average\" : [ 0.02, 0.02, 0.0 ],\n \"cpu\" : {\n \"sys\" : 0,\n \"user\" : 0,\n \"idle\" : 99,\n \"usage\" : 0,\n \"stolen\" : 0\n },\n \"mem\" : {\n \"free_in_bytes\" : 9554894848,\n \"used_in_bytes\" : 24062205952,\n \"free_percent\" : 85,\n \"used_percent\" : 14,\n \"actual_free_in_bytes\" : 28848246784,\n \"actual_used_in_bytes\" : 4768854016\n },\n \"swap\" : {\n \"used_in_bytes\" : 306176000,\n \"free_in_bytes\" : 17153335296\n }\n },\n \"process\" : {\n \"timestamp\" : 1406277844605,\n \"open_file_descriptors\" : 1796,\n \"cpu\" : {\n \"percent\" : 1,\n \"sys_in_millis\" : 64390,\n \"user_in_millis\" : 113440,\n \"total_in_millis\" : 177830\n },\n \"mem\" : {\n \"resident_in_bytes\" : 2292084736,\n \"share_in_bytes\" : 16846848,\n \"total_virtual_in_bytes\" : 38485655552\n }\n },\n \"jvm\" : {\n \"timestamp\" : 1406277844607,\n \"uptime_in_millis\" : 9575635,\n \"mem\" : {\n \"heap_used_in_bytes\" : 1142720912,\n \"heap_used_percent\" : 6,\n \"heap_committed_in_bytes\" : 16979263488,\n \"heap_max_in_bytes\" : 16979263488,\n \"non_heap_used_in_bytes\" : 39112624,\n \"non_heap_committed_in_bytes\" : 39387136,\n \"pools\" : {\n \"young\" : {\n \"used_in_bytes\" : 978794664,\n \"max_in_bytes\" : 1605304320,\n \"peak_used_in_bytes\" : 1605304320,\n \"peak_max_in_bytes\" : 1605304320\n },\n \"survivor\" : {\n \"used_in_bytes\" : 3106256,\n \"max_in_bytes\" : 200605696,\n \"peak_used_in_bytes\" : 166923624,\n \"peak_max_in_bytes\" : 200605696\n },\n \"old\" : {\n \"used_in_bytes\" : 160819992,\n \"max_in_bytes\" : 15173353472,\n \"peak_used_in_bytes\" : 160819992,\n \"peak_max_in_bytes\" : 15173353472\n }\n }\n },\n \"threads\" : {\n \"count\" : 214,\n \"peak_count\" : 229\n },\n \"gc\" : {\n \"collectors\" : {\n \"young\" : {\n \"collection_count\" : 7,\n \"collection_time_in_millis\" : 403\n },\n \"old\" : {\n \"collection_count\" : 0,\n \"collection_time_in_millis\" : 0\n }\n }\n },\n \"buffer_pools\" : {\n \"direct\" : {\n \"count\" : 336,\n \"used_in_bytes\" : 87049216,\n \"total_capacity_in_bytes\" : 87049216\n },\n \"mapped\" : {\n \"count\" : 440,\n \"used_in_bytes\" : 3388615670,\n \"total_capacity_in_bytes\" : 3388615670\n }\n }\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 19,\n \"completed\" : 1054\n },\n \"index\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"snapshot_data\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n 
\"completed\" : 0\n },\n \"bench\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"get\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"snapshot\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"merge\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"suggest\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"bulk\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"optimize\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"warmer\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 1,\n \"completed\" : 26\n },\n \"flush\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"search\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"percolate\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"management\" : {\n \"threads\" : 5,\n \"queue\" : 0,\n \"active\" : 1,\n \"rejected\" : 0,\n \"largest\" : 5,\n \"completed\" : 1354\n },\n \"refresh\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n }\n },\n \"network\" : {\n \"tcp\" : {\n \"active_opens\" : 140097716,\n \"passive_opens\" : 1880178,\n \"curr_estab\" : 290,\n \"in_segs\" : 5172144515,\n \"out_segs\" : 5814393262,\n \"retrans_segs\" : 212947,\n \"estab_resets\" : 1056639,\n \"attempt_fails\" : 27282,\n \"in_errs\" : 50905,\n \"out_rsts\" : 1112514\n }\n },\n \"fs\" : {\n \"timestamp\" : 1406277844608,\n \"total\" : {\n \"total_in_bytes\" : 2197949513728,\n \"free_in_bytes\" : 2181463736320,\n \"available_in_bytes\" : 2181463736320,\n \"disk_reads\" : 38753157,\n \"disk_writes\" : 243000636,\n \"disk_io_op\" : 281753793,\n \"disk_read_size_in_bytes\" : 4230850646528,\n \"disk_write_size_in_bytes\" : 13743848959488,\n \"disk_io_size_in_bytes\" : 17974699606016\n },\n \"data\" : [ {\n \"path\" : \"/var/lib/elasticsearch/telecom/telecom/nodes/0\",\n \"mount\" : \"/var/lib/elasticsearch\",\n \"dev\" : \"/dev/mapper/raidvg-elasticsearch\",\n \"total_in_bytes\" : 2197949513728,\n \"free_in_bytes\" : 2181463736320,\n \"available_in_bytes\" : 2181463736320,\n \"disk_reads\" : 38753157,\n \"disk_writes\" : 243000636,\n \"disk_io_op\" : 281753793,\n \"disk_read_size_in_bytes\" : 4230850646528,\n \"disk_write_size_in_bytes\" : 13743848959488,\n \"disk_io_size_in_bytes\" : 17974699606016\n } ]\n },\n \"transport\" : {\n \"server_open\" : 65,\n \"rx_count\" : 76220,\n \"rx_size_in_bytes\" : 4624394,\n \"tx_count\" : 76219,\n \"tx_size_in_bytes\" : 3602292\n },\n \"http\" : {\n \"current_open\" : 1,\n \"total_opened\" : 655\n },\n \"fielddata_breaker\" : {\n \"maximum_size_in_bytes\" : 10187558092,\n \"maximum_size\" : \"9.4gb\",\n \"estimated_size_in_bytes\" : 0,\n \"estimated_size\" : \"0b\",\n \"overhead\" : 1.03,\n \"tripped\" : 0\n }\n },\n \"9FwiLpriR12926UVB-YuVw\" : {\n 
\"timestamp\" : 1406277844542,\n \"name\" : \"logstash-netflow\",\n \"transport_address\" : \"inet[/10.0.108.171:9301]\",\n \"host\" : \"node38\",\n \"ip\" : [ \"inet[/10.0.108.171:9301]\", \"NONE\" ],\n \"attributes\" : {\n \"client\" : \"true\",\n \"data\" : \"false\"\n },\n \"indices\" : {\n \"docs\" : {\n \"count\" : 0,\n \"deleted\" : 0\n },\n \"store\" : {\n \"size_in_bytes\" : 0,\n \"throttle_time_in_millis\" : 0\n },\n \"indexing\" : {\n \"index_total\" : 0,\n \"index_time_in_millis\" : 0,\n \"index_current\" : 0,\n \"delete_total\" : 0,\n \"delete_time_in_millis\" : 0,\n \"delete_current\" : 0\n },\n \"get\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"exists_total\" : 0,\n \"exists_time_in_millis\" : 0,\n \"missing_total\" : 0,\n \"missing_time_in_millis\" : 0,\n \"current\" : 0\n },\n \"search\" : {\n \"open_contexts\" : 0,\n \"query_total\" : 0,\n \"query_time_in_millis\" : 0,\n \"query_current\" : 0,\n \"fetch_total\" : 0,\n \"fetch_time_in_millis\" : 0,\n \"fetch_current\" : 0\n },\n \"merges\" : {\n \"current\" : 0,\n \"current_docs\" : 0,\n \"current_size_in_bytes\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0,\n \"total_docs\" : 0,\n \"total_size_in_bytes\" : 0\n },\n \"refresh\" : {\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"flush\" : {\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"warmer\" : {\n \"current\" : 0,\n \"total\" : 0,\n \"total_time_in_millis\" : 0\n },\n \"filter_cache\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"id_cache\" : {\n \"memory_size_in_bytes\" : 0\n },\n \"fielddata\" : {\n \"memory_size_in_bytes\" : 0,\n \"evictions\" : 0\n },\n \"percolate\" : {\n \"total\" : 0,\n \"time_in_millis\" : 0,\n \"current\" : 0,\n \"memory_size_in_bytes\" : -1,\n \"memory_size\" : \"-1b\",\n \"queries\" : 0\n },\n \"completion\" : {\n \"size_in_bytes\" : 0\n },\n \"segments\" : {\n \"count\" : 0,\n \"memory_in_bytes\" : 0,\n \"index_writer_memory_in_bytes\" : 0,\n \"version_map_memory_in_bytes\" : 0\n },\n \"translog\" : {\n \"operations\" : 0,\n \"size_in_bytes\" : 0\n }\n },\n \"os\" : {\n \"timestamp\" : 1406277844543\n },\n \"process\" : {\n \"timestamp\" : 1406277844543,\n \"open_file_descriptors\" : 1376\n },\n \"jvm\" : {\n \"timestamp\" : 1406277844544,\n \"uptime_in_millis\" : 4208980926,\n \"mem\" : {\n \"heap_used_in_bytes\" : 226119440,\n \"heap_used_percent\" : 44,\n \"heap_committed_in_bytes\" : 506855424,\n \"heap_max_in_bytes\" : 506855424,\n \"non_heap_used_in_bytes\" : 68490160,\n \"non_heap_committed_in_bytes\" : 107167744,\n \"pools\" : {\n \"young\" : {\n \"used_in_bytes\" : 46955928,\n \"max_in_bytes\" : 139853824,\n \"peak_used_in_bytes\" : 139853824,\n \"peak_max_in_bytes\" : 139853824\n },\n \"survivor\" : {\n \"used_in_bytes\" : 110032,\n \"max_in_bytes\" : 17432576,\n \"peak_used_in_bytes\" : 17432576,\n \"peak_max_in_bytes\" : 17432576\n },\n \"old\" : {\n \"used_in_bytes\" : 179055576,\n \"max_in_bytes\" : 349569024,\n \"peak_used_in_bytes\" : 339622256,\n \"peak_max_in_bytes\" : 349569024\n }\n }\n },\n \"threads\" : {\n \"count\" : 428,\n \"peak_count\" : 442\n },\n \"gc\" : {\n \"collectors\" : {\n \"young\" : {\n \"collection_count\" : 11840,\n \"collection_time_in_millis\" : 67781\n },\n \"old\" : {\n \"collection_count\" : 14,\n \"collection_time_in_millis\" : 203\n }\n }\n },\n \"buffer_pools\" : {\n \"direct\" : {\n \"count\" : 723,\n \"used_in_bytes\" : 199419308,\n \"total_capacity_in_bytes\" : 199419308\n },\n \"mapped\" : {\n \"count\" : 0,\n \"used_in_bytes\" : 0,\n 
\"total_capacity_in_bytes\" : 0\n }\n }\n },\n \"thread_pool\" : {\n \"generic\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 4,\n \"completed\" : 420979\n },\n \"index\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"get\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"snapshot\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"merge\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"suggest\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"bulk\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"optimize\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"warmer\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"flush\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"search\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"percolate\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n },\n \"management\" : {\n \"threads\" : 1,\n \"queue\" : 0,\n \"active\" : 1,\n \"rejected\" : 0,\n \"largest\" : 2,\n \"completed\" : 766\n },\n \"refresh\" : {\n \"threads\" : 0,\n \"queue\" : 0,\n \"active\" : 0,\n \"rejected\" : 0,\n \"largest\" : 0,\n \"completed\" : 0\n }\n },\n \"network\" : { },\n \"fs\" : {\n \"timestamp\" : 1406277844544,\n \"total\" : { },\n \"data\" : [ ]\n },\n \"transport\" : {\n \"server_open\" : 26,\n \"rx_count\" : 8688419,\n \"rx_size_in_bytes\" : 2179836154,\n \"tx_count\" : 8688418,\n \"tx_size_in_bytes\" : 486036255\n },\n \"fielddata_breaker\" : {\n \"maximum_size_in_bytes\" : 405484339,\n \"maximum_size\" : \"386.6mb\",\n \"estimated_size_in_bytes\" : 0,\n \"estimated_size\" : \"0b\",\n \"overhead\" : 1.03,\n \"tripped\" : -1\n }\n }\n }\n}\n```\n", "created_at": "2014-07-25T08:49:22Z" }, { "body": "thanks @faxm0dem \n\n@drewr could you take another look at this with the new info that has been provided please?\n", "created_at": "2014-07-25T10:21:57Z" }, { "body": "Sorry, just now seeing this... @faxm0dem did anything else anomalous happen in the cluster before this? Like, nodes not communicating properly, or running out of mem, anything like that?\n", "created_at": "2014-08-19T16:22:30Z" }, { "body": "Not really. I just upgraded my prod cluster to 1.3.2, and have the same issue.\n", "created_at": "2014-08-20T06:29:11Z" }, { "body": "Getting same error after connecting with logstash. Its working fine as single node and in cluster mode without logstash. Tried with ES 1.3.2 and 1.24 with logstash 1.4.2. is there any workaround for same? 
But data indexing and query is working fine.\n", "created_at": "2014-09-11T13:20:41Z" }, { "body": "I don't have logstash nodes in my devel cluster, and I still get the error\n", "created_at": "2014-09-12T07:17:25Z" }, { "body": "Fixed by #7815.\n", "created_at": "2014-09-27T05:44:40Z" }, { "body": "Just to confirm - was still seeing this in 1.3.4, but upgraded to 1.4.0 today and it is now working correctly.\nThanks!\nMatthew\n", "created_at": "2014-11-15T00:57:06Z" }, { "body": "I confirm this is finally working!\n", "created_at": "2014-11-16T19:03:09Z" }, { "body": "I upgraded to 1.4.0 today and now I see this error. \n\nhttp://localhost:9200/_aliases?pretty=1\n{\n \"error\" : \"RemoteTransportException[[Raza][inet[/123.456.17.4:9300]][indices:admin/get]]; nested: ActionNotFoundTransportException[No handler for action [indices:admin/get]]; \",\n \"status\" : 500\n}\n\nhttp://0:9200/_cat/indices\n{\"error\":\"NullPointerException[null]\",\"status\":500}\n\nHowever I am able to search indices directly \n\nhttp://localhost:9200/testindex/_search?\n{\"took\":2,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":0,\"max_score\":null,\"hits\":[]}}\n", "created_at": "2014-11-19T14:18:48Z" }, { "body": "@Grauen are all your nodes 1.4.0?\n", "created_at": "2014-11-19T15:18:56Z" }, { "body": "@Grauen This sounds like a bug that was fixed in https://github.com/elasticsearch/elasticsearch/pull/8387. if you are still seeing this problems when all of your nodes are running 1.4.0, then please open a new issue.\n", "created_at": "2014-11-24T11:29:58Z" } ], "number": 6297, "title": "_cat/nodes causes NullPointerException in 1.2.0" }
{ "body": "Query Cache and Suggest stats were introduced in v1.4.0_beta1 and 1.2.0 respectively, therefore they can be null if stats are received from older nodes in the cluster.\n\nFixes #6297\n\nThis problem can occur only 1.x since current master isn't backward compatible with previous versions of elasticsearch anyway. So, I am opening this PR against 1.x not master. Not sure if this is right thing to do.\n", "number": 7815, "review_comments": [], "title": "Stats: Fix NPE in /_cat/nodes" }
{ "commits": [ { "message": "Fix NPE in /_cat/nodes\n\nQuery Cache and Suggest stats were introduced in v1.4.0_beta1 and 1.2.0 respectively, therefore they can be null while stats are received from older nodes in the cluster.\n\nFixes #6297" } ], "files": [ { "diff": "@@ -230,10 +230,10 @@ private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoR\n table.addCell(stats == null ? null : stats.getIndices().getFilterCache().getMemorySize());\n table.addCell(stats == null ? null : stats.getIndices().getFilterCache().getEvictions());\n \n- table.addCell(stats == null ? null : stats.getIndices().getQueryCache().getMemorySize());\n- table.addCell(stats == null ? null : stats.getIndices().getQueryCache().getEvictions());\n- table.addCell(stats == null ? null : stats.getIndices().getQueryCache().getHitCount());\n- table.addCell(stats == null ? null : stats.getIndices().getQueryCache().getMissCount());\n+ table.addCell(stats == null ? null : stats.getIndices().getQueryCache() == null ? null : stats.getIndices().getQueryCache().getMemorySize());\n+ table.addCell(stats == null ? null : stats.getIndices().getQueryCache() == null ? null : stats.getIndices().getQueryCache().getEvictions());\n+ table.addCell(stats == null ? null : stats.getIndices().getQueryCache() == null ? null : stats.getIndices().getQueryCache().getHitCount());\n+ table.addCell(stats == null ? null : stats.getIndices().getQueryCache() == null ? null : stats.getIndices().getQueryCache().getMissCount());\n \n table.addCell(stats == null ? null : stats.getIndices().getFlush().getTotal());\n table.addCell(stats == null ? null : stats.getIndices().getFlush().getTotalTime());\n@@ -287,9 +287,9 @@ private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoR\n table.addCell(stats == null ? null : stats.getIndices().getSegments().getVersionMapMemory());\n table.addCell(stats == null ? null : stats.getIndices().getSegments().getFixedBitSetMemory());\n \n- table.addCell(stats == null ? null : stats.getIndices().getSuggest().getCurrent());\n- table.addCell(stats == null ? null : stats.getIndices().getSuggest().getTime());\n- table.addCell(stats == null ? null : stats.getIndices().getSuggest().getCount());\n+ table.addCell(stats == null ? null : stats.getIndices().getSuggest() == null ? null : stats.getIndices().getSuggest().getCurrent());\n+ table.addCell(stats == null ? null : stats.getIndices().getSuggest() == null ? null : stats.getIndices().getSuggest().getTime());\n+ table.addCell(stats == null ? null : stats.getIndices().getSuggest() == null ? null : stats.getIndices().getSuggest().getCount());\n \n table.endRow();\n }", "filename": "src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java", "status": "modified" } ] }
{ "body": "The get indexed script supports `preference`, `realtime` and `refresh` parameters. Those parameters can only be set through java api though, never parsed on the REST layer.\n\nThe set preference is also never used internally as we force the preference to `_local` all the time. Also, I wonder if `realtime` and `refresh` make sense since we always refresh internally after each write (put/delete indexed script).\n\nMy vote is for removing all three parameters from the java API.\n", "comments": [ { "body": "how can you set preference in Java api in the end ?\n", "created_at": "2016-09-05T17:46:38Z" } ], "number": 7567, "title": "Java api: get indexed script support for preference, realtime and refresh is incomplete" }
{ "body": "Some cleanup to close issues around the indexed scripts API\n\nThis closes : \n#7560\n#7568\n#7559\n#7647\n#7567\n", "number": 7787, "review_comments": [ { "body": "can we just use constants for this?\n", "created_at": "2014-09-18T14:14:42Z" }, { "body": "I think we should pass UTF-8 here instead of default charset?\n", "created_at": "2014-09-18T14:15:32Z" } ], "title": "Cleaned up various issues" }
{ "commits": [ { "message": "Indexed Scripts/Templates : Cleanup\n\nThis contains several cleanups to the indexed scripts.\nRemove the unused FetchSourceContext from the Get request..\nAdd lang,_version,_id to the REST GET API.\nRemoves the routing from GetIndexedScriptRequest since the script index is a single shard that is replicated across all nodes.\nFix backward compatible template file reference\nBefore 1.3.0 on disk scripts could be referenced by requesting\n````\n_search/template\n\n{\n \"template\" : \"ondiskscript\"\n}\n````\nThis was broken in 1.3.0 by requiring\n````\n{\n \"template\" :\n {\n \"file\" : \"ondiskscript\"\n }\n}\n````\nThis commit restores the previous behavior.\nRemove support for preference, realtime and refresh\nThese parameters don't make sense anymore for indexed scripts as we always force the preference to _local and\nalways refresh after a Put to the indexed scripts index.\n\nCloses #7568\nCloses #7559\nCloses #7647\nCloses #7567" } ], "files": [ { "diff": "@@ -36,3 +36,8 @@\n body: { \"id\" : \"1\", \"params\" : { \"my_value\" : \"value1_foo\", \"my_size\" : 1 } }\n - match: { hits.total: 1 }\n \n+ - do:\n+ catch: /ElasticsearchIllegalArgumentException.Unable.to.find.on.disk.script.simple1/\n+ search_template:\n+ body: { \"template\" : \"simple1\" }\n+", "filename": "rest-api-spec/test/template/20_search.yaml", "status": "modified" }, { "diff": "@@ -45,13 +45,6 @@ public class GetIndexedScriptRequest extends ActionRequest<GetIndexedScriptReque\n \n protected String scriptLang;\n protected String id;\n- protected String preference;\n- protected String routing;\n- private FetchSourceContext fetchSourceContext;\n-\n- private boolean refresh = false;\n-\n- Boolean realtime;\n \n private VersionType versionType = VersionType.INTERNAL;\n private long version = Versions.MATCH_ANY;\n@@ -117,24 +110,6 @@ public GetIndexedScriptRequest id(String id) {\n return this;\n }\n \n- /**\n- * Controls the shard routing of the request. Using this value to hash the shard\n- * and not the id.\n- */\n- public GetIndexedScriptRequest routing(String routing) {\n- this.routing = routing;\n- return this;\n- }\n-\n- /**\n- * Sets the preference to execute the get. Defaults to randomize across shards. Can be set to\n- * <tt>_local</tt> to prefer local shards, <tt>_primary</tt> to execute only on primary shards, or\n- * a custom value, which guarantees that the same order will be used across different requests.\n- */\n- public GetIndexedScriptRequest preference(String preference) {\n- this.preference = preference;\n- return this;\n- }\n \n public String scriptLang() {\n return scriptLang;\n@@ -144,37 +119,6 @@ public String id() {\n return id;\n }\n \n- public String routing() {\n- return routing;\n- }\n-\n- public String preference() {\n- return this.preference;\n- }\n-\n- /**\n- * Should a refresh be executed before this get operation causing the operation to\n- * return the latest value. Note, heavy get should not set this to <tt>true</tt>. Defaults\n- * to <tt>false</tt>.\n- */\n- public GetIndexedScriptRequest refresh(boolean refresh) {\n- this.refresh = refresh;\n- return this;\n- }\n-\n- public boolean refresh() {\n- return this.refresh;\n- }\n-\n- public boolean realtime() {\n- return this.realtime == null ? 
true : this.realtime;\n- }\n-\n- public GetIndexedScriptRequest realtime(Boolean realtime) {\n- this.realtime = realtime;\n- return this;\n- }\n-\n /**\n * Sets the version, which will cause the get operation to only be performed if a matching\n * version exists and no changes happened on the doc since then.\n@@ -209,19 +153,17 @@ public void readFrom(StreamInput in) throws IOException {\n }\n scriptLang = in.readString();\n id = in.readString();\n- preference = in.readOptionalString();\n- refresh = in.readBoolean();\n- byte realtime = in.readByte();\n- if (realtime == 0) {\n- this.realtime = false;\n- } else if (realtime == 1) {\n- this.realtime = true;\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ in.readOptionalString(); //Preference\n+ in.readBoolean(); //Refresh\n+ in.readByte(); //Realtime\n }\n-\n this.versionType = VersionType.fromValue(in.readByte());\n this.version = Versions.readVersionWithVLongForBW(in);\n \n- fetchSourceContext = FetchSourceContext.optionalReadFromStream(in);\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ FetchSourceContext.optionalReadFromStream(in);\n+ }\n }\n \n @Override\n@@ -233,24 +175,23 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n out.writeString(scriptLang);\n out.writeString(id);\n- out.writeOptionalString(preference);\n- out.writeBoolean(refresh);\n- if (realtime == null) {\n- out.writeByte((byte) -1);\n- } else if (!realtime) {\n- out.writeByte((byte) 0);\n- } else {\n- out.writeByte((byte) 1);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ out.writeOptionalString(\"_local\"); //Preference\n+ out.writeBoolean(true); //Refresh\n+ out.writeByte((byte) -1); //Realtime\n }\n \n out.writeByte(versionType.getValue());\n Versions.writeVersionWithVLongForBW(version, out);\n \n- FetchSourceContext.optionalWriteToStream(fetchSourceContext, out);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ FetchSourceContext.optionalWriteToStream(null, out);\n+ }\n+\n }\n \n @Override\n public String toString() {\n- return \"[\" + ScriptService.SCRIPT_INDEX + \"][\" + scriptLang + \"][\" + id + \"]: routing [\" + routing + \"]\";\n+ return \"[\" + ScriptService.SCRIPT_INDEX + \"][\" + scriptLang + \"][\" + id + \"]\";\n }\n }", "filename": "src/main/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequest.java", "status": "modified" }, { "diff": "@@ -52,31 +52,6 @@ public GetIndexedScriptRequestBuilder setId(String id) {\n return this;\n }\n \n- /**\n- * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to\n- * <tt>_local</tt> to prefer local shards, <tt>_primary</tt> to execute only on primary shards, or\n- * a custom value, which guarantees that the same order will be used across different requests.\n- */\n- public GetIndexedScriptRequestBuilder setPreference(String preference) {\n- request.preference(preference);\n- return this;\n- }\n-\n- /**\n- * Should a refresh be executed before this get operation causing the operation to\n- * return the latest value. Note, heavy get should not set this to <tt>true</tt>. 
Defaults\n- * to <tt>false</tt>.\n- */\n- public GetIndexedScriptRequestBuilder setRefresh(boolean refresh) {\n- request.refresh(refresh);\n- return this;\n- }\n-\n- public GetIndexedScriptRequestBuilder setRealtime(Boolean realtime) {\n- request.realtime(realtime);\n- return this;\n- }\n-\n /**\n * Sets the version, which will cause the get operation to only be performed if a matching\n * version exists and no changes happened on the doc since then.", "filename": "src/main/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequestBuilder.java", "status": "modified" }, { "diff": "@@ -115,9 +115,6 @@ public static TemplateContext parse(XContentParser parser, String paramsFieldnam\n currentFieldName = parser.currentName();\n } else if (parameterMap.containsKey(currentFieldName)) {\n type = parameterMap.get(currentFieldName);\n-\n-\n-\n if (token == XContentParser.Token.START_OBJECT && !parser.hasTextCharacters()) {\n XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent());\n builder.copyCurrentStructure(parser);", "filename": "src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java", "status": "modified" }, { "diff": "@@ -42,6 +42,11 @@\n */\n public class RestGetIndexedScriptAction extends BaseRestHandler {\n \n+ private final static String LANG_FIELD = \"lang\";\n+ private final static String ID_FIELD = \"_id\";\n+ private final static String VERSION_FIELD = \"_version\";\n+ private final static String SCRIPT_FIELD = \"script\";\n+\n @Inject\n public RestGetIndexedScriptAction(Settings settings, RestController controller, Client client) {\n this(settings, controller, true, client);\n@@ -54,14 +59,15 @@ protected RestGetIndexedScriptAction(Settings settings, RestController controlle\n }\n }\n \n- protected String getScriptLang(RestRequest request) {\n- return request.param(\"lang\");\n+ protected String getScriptFieldName() {\n+ return SCRIPT_FIELD;\n }\n \n- protected String getScriptFieldName() {\n- return \"script\";\n+ protected String getScriptLang(RestRequest request) {\n+ return request.param(LANG_FIELD);\n }\n \n+\n @Override\n public void handleRequest(final RestRequest request, final RestChannel channel, Client client) {\n final GetIndexedScriptRequest getRequest = new GetIndexedScriptRequest(getScriptLang(request), request.param(\"id\"));\n@@ -78,6 +84,9 @@ public RestResponse buildResponse(GetIndexedScriptResponse response) throws Exce\n String script = response.getScript();\n builder.startObject();\n builder.field(getScriptFieldName(), script);\n+ builder.field(VERSION_FIELD, response.getVersion());\n+ builder.field(LANG_FIELD, response.getScriptLang());\n+ builder.field(ID_FIELD, response.getId());\n builder.endObject();\n return new BytesRestResponse(OK, builder);\n } catch( IOException|ClassCastException e ){", "filename": "src/main/java/org/elasticsearch/rest/action/script/RestGetIndexedScriptAction.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.hppc.ObjectOpenHashSet;\n import com.carrotsearch.hppc.ObjectSet;\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n+import com.google.common.base.Charsets;\n import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n@@ -77,7 +78,6 @@\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n-import java.nio.charset.Charset;\n import java.util.HashMap;\n import java.util.Iterator;\n import java.util.Map;\n@@ -614,11 
+614,21 @@ private void parseTemplate(ShardSearchRequest request) {\n \n if (templateContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n //Try to double parse for nested template id/file\n- parser = XContentFactory.xContent(templateContext.template().getBytes(Charset.defaultCharset())).createParser(templateContext.template().getBytes(Charset.defaultCharset()));\n- TemplateQueryParser.TemplateContext innerContext = TemplateQueryParser.parse(parser, \"params\");\n- if (hasLength(innerContext.template()) && !innerContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n- //An inner template referring to a filename or id\n- templateContext = new TemplateQueryParser.TemplateContext(innerContext.scriptType(), innerContext.template(), templateContext.params());\n+ parser = null;\n+ try {\n+ byte[] templateBytes = templateContext.template().getBytes(Charsets.UTF_8);\n+ parser = XContentFactory.xContent(templateBytes).createParser(templateBytes);\n+ } catch (ElasticsearchParseException epe) {\n+ //This was an non-nested template, the parse failure was due to this, it is safe to assume this refers to a file\n+ //for backwards compatibility and keep going\n+ templateContext = new TemplateQueryParser.TemplateContext(ScriptService.ScriptType.FILE, templateContext.template(), templateContext.params());\n+ }\n+ if (parser != null) {\n+ TemplateQueryParser.TemplateContext innerContext = TemplateQueryParser.parse(parser, \"params\");\n+ if (hasLength(innerContext.template()) && !innerContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n+ //An inner template referring to a filename or id\n+ templateContext = new TemplateQueryParser.TemplateContext(innerContext.scriptType(), innerContext.template(), templateContext.params());\n+ }\n }\n }\n } catch (IOException e) {", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -34,19 +34,10 @@ public class GetIndexedScriptRequestTests extends ElasticsearchTestCase {\n @Test\n public void testGetIndexedScriptRequestSerialization() throws IOException {\n GetIndexedScriptRequest request = new GetIndexedScriptRequest(\"lang\", \"id\");\n- if (randomBoolean()) {\n- request.realtime(false);\n- }\n- if (randomBoolean()) {\n- request.refresh(true);\n- }\n if (randomBoolean()) {\n request.version(randomIntBetween(1, Integer.MAX_VALUE));\n request.versionType(randomFrom(VersionType.values()));\n }\n- if (randomBoolean()) {\n- request.routing(randomAsciiOfLength(randomIntBetween(1, 10)));\n- }\n \n BytesStreamOutput out = new BytesStreamOutput();\n out.setVersion(randomVersion());\n@@ -59,8 +50,6 @@ public void testGetIndexedScriptRequestSerialization() throws IOException {\n \n assertThat(request2.id(), equalTo(request.id()));\n assertThat(request2.scriptLang(), equalTo(request.scriptLang()));\n- assertThat(request2.realtime(), equalTo(request.realtime()));\n- assertThat(request2.refresh(), equalTo(request.refresh()));\n assertThat(request2.version(), equalTo(request.version()));\n assertThat(request2.versionType(), equalTo(request.versionType()));\n }", "filename": "src/test/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequestTests.java", "status": "modified" } ] }
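The interesting part of the request change above is how the removal stays wire-compatible: the fields are gone from the class, but when talking to a node older than 1.5.0 the stream still consumes and emits placeholder values in the old slots. A reduced sketch of that pattern (the wrapper class and method names are hypothetical; the stream calls and the `Version.V_1_5_0` gate mirror the diff):

```java
import java.io.IOException;

import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

// Reduced sketch of the wire-compatibility pattern from the diff: the removed request
// fields are still consumed from (and emitted to) the stream whenever the other node
// is older than the version that dropped them, so mixed-version clusters keep working.
final class LegacyGetScriptWireFormat {

    static void readLegacyTail(StreamInput in) throws IOException {
        if (in.getVersion().before(Version.V_1_5_0)) {
            in.readOptionalString();   // old "preference" slot, discarded
            in.readBoolean();          // old "refresh" slot, discarded
            in.readByte();             // old "realtime" slot, discarded
        }
    }

    static void writeLegacyTail(StreamOutput out) throws IOException {
        if (out.getVersion().before(Version.V_1_5_0)) {
            out.writeOptionalString("_local"); // the preference an old node is forced to anyway
            out.writeBoolean(true);            // "refresh": harmless default for old readers
            out.writeByte((byte) -1);          // "realtime": the unset marker the old code understood
        }
    }
}
```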
{ "body": "`GetIndexedScriptRequest` holds a fetch source context and supports serializing it over the transport, although the field has no setter nor getter.\n\nQuestion is: do we want to remove it or does it make sense to properly support fetch source context?\n", "comments": [ { "body": "moving out to 1.5\n", "created_at": "2014-09-10T07:52:27Z" }, { "body": "Fixed by 8e742c2096dad1e87e88719902b961e2b7100fa6\n", "created_at": "2014-09-24T09:27:30Z" } ], "number": 7560, "title": "Internal: get indexed script holds an always null fetch source context" }
{ "body": "Some cleanup to close issues around the indexed scripts API\n\nThis closes : \n#7560\n#7568\n#7559\n#7647\n#7567\n", "number": 7787, "review_comments": [ { "body": "can we just use constants for this?\n", "created_at": "2014-09-18T14:14:42Z" }, { "body": "I think we should pass UTF-8 here instead of default charset?\n", "created_at": "2014-09-18T14:15:32Z" } ], "title": "Cleaned up various issues" }
{ "commits": [ { "message": "Indexed Scripts/Templates : Cleanup\n\nThis contains several cleanups to the indexed scripts.\nRemove the unused FetchSourceContext from the Get request..\nAdd lang,_version,_id to the REST GET API.\nRemoves the routing from GetIndexedScriptRequest since the script index is a single shard that is replicated across all nodes.\nFix backward compatible template file reference\nBefore 1.3.0 on disk scripts could be referenced by requesting\n````\n_search/template\n\n{\n \"template\" : \"ondiskscript\"\n}\n````\nThis was broken in 1.3.0 by requiring\n````\n{\n \"template\" :\n {\n \"file\" : \"ondiskscript\"\n }\n}\n````\nThis commit restores the previous behavior.\nRemove support for preference, realtime and refresh\nThese parameters don't make sense anymore for indexed scripts as we always force the preference to _local and\nalways refresh after a Put to the indexed scripts index.\n\nCloses #7568\nCloses #7559\nCloses #7647\nCloses #7567" } ], "files": [ { "diff": "@@ -36,3 +36,8 @@\n body: { \"id\" : \"1\", \"params\" : { \"my_value\" : \"value1_foo\", \"my_size\" : 1 } }\n - match: { hits.total: 1 }\n \n+ - do:\n+ catch: /ElasticsearchIllegalArgumentException.Unable.to.find.on.disk.script.simple1/\n+ search_template:\n+ body: { \"template\" : \"simple1\" }\n+", "filename": "rest-api-spec/test/template/20_search.yaml", "status": "modified" }, { "diff": "@@ -45,13 +45,6 @@ public class GetIndexedScriptRequest extends ActionRequest<GetIndexedScriptReque\n \n protected String scriptLang;\n protected String id;\n- protected String preference;\n- protected String routing;\n- private FetchSourceContext fetchSourceContext;\n-\n- private boolean refresh = false;\n-\n- Boolean realtime;\n \n private VersionType versionType = VersionType.INTERNAL;\n private long version = Versions.MATCH_ANY;\n@@ -117,24 +110,6 @@ public GetIndexedScriptRequest id(String id) {\n return this;\n }\n \n- /**\n- * Controls the shard routing of the request. Using this value to hash the shard\n- * and not the id.\n- */\n- public GetIndexedScriptRequest routing(String routing) {\n- this.routing = routing;\n- return this;\n- }\n-\n- /**\n- * Sets the preference to execute the get. Defaults to randomize across shards. Can be set to\n- * <tt>_local</tt> to prefer local shards, <tt>_primary</tt> to execute only on primary shards, or\n- * a custom value, which guarantees that the same order will be used across different requests.\n- */\n- public GetIndexedScriptRequest preference(String preference) {\n- this.preference = preference;\n- return this;\n- }\n \n public String scriptLang() {\n return scriptLang;\n@@ -144,37 +119,6 @@ public String id() {\n return id;\n }\n \n- public String routing() {\n- return routing;\n- }\n-\n- public String preference() {\n- return this.preference;\n- }\n-\n- /**\n- * Should a refresh be executed before this get operation causing the operation to\n- * return the latest value. Note, heavy get should not set this to <tt>true</tt>. Defaults\n- * to <tt>false</tt>.\n- */\n- public GetIndexedScriptRequest refresh(boolean refresh) {\n- this.refresh = refresh;\n- return this;\n- }\n-\n- public boolean refresh() {\n- return this.refresh;\n- }\n-\n- public boolean realtime() {\n- return this.realtime == null ? 
true : this.realtime;\n- }\n-\n- public GetIndexedScriptRequest realtime(Boolean realtime) {\n- this.realtime = realtime;\n- return this;\n- }\n-\n /**\n * Sets the version, which will cause the get operation to only be performed if a matching\n * version exists and no changes happened on the doc since then.\n@@ -209,19 +153,17 @@ public void readFrom(StreamInput in) throws IOException {\n }\n scriptLang = in.readString();\n id = in.readString();\n- preference = in.readOptionalString();\n- refresh = in.readBoolean();\n- byte realtime = in.readByte();\n- if (realtime == 0) {\n- this.realtime = false;\n- } else if (realtime == 1) {\n- this.realtime = true;\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ in.readOptionalString(); //Preference\n+ in.readBoolean(); //Refresh\n+ in.readByte(); //Realtime\n }\n-\n this.versionType = VersionType.fromValue(in.readByte());\n this.version = Versions.readVersionWithVLongForBW(in);\n \n- fetchSourceContext = FetchSourceContext.optionalReadFromStream(in);\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ FetchSourceContext.optionalReadFromStream(in);\n+ }\n }\n \n @Override\n@@ -233,24 +175,23 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n out.writeString(scriptLang);\n out.writeString(id);\n- out.writeOptionalString(preference);\n- out.writeBoolean(refresh);\n- if (realtime == null) {\n- out.writeByte((byte) -1);\n- } else if (!realtime) {\n- out.writeByte((byte) 0);\n- } else {\n- out.writeByte((byte) 1);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ out.writeOptionalString(\"_local\"); //Preference\n+ out.writeBoolean(true); //Refresh\n+ out.writeByte((byte) -1); //Realtime\n }\n \n out.writeByte(versionType.getValue());\n Versions.writeVersionWithVLongForBW(version, out);\n \n- FetchSourceContext.optionalWriteToStream(fetchSourceContext, out);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ FetchSourceContext.optionalWriteToStream(null, out);\n+ }\n+\n }\n \n @Override\n public String toString() {\n- return \"[\" + ScriptService.SCRIPT_INDEX + \"][\" + scriptLang + \"][\" + id + \"]: routing [\" + routing + \"]\";\n+ return \"[\" + ScriptService.SCRIPT_INDEX + \"][\" + scriptLang + \"][\" + id + \"]\";\n }\n }", "filename": "src/main/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequest.java", "status": "modified" }, { "diff": "@@ -52,31 +52,6 @@ public GetIndexedScriptRequestBuilder setId(String id) {\n return this;\n }\n \n- /**\n- * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to\n- * <tt>_local</tt> to prefer local shards, <tt>_primary</tt> to execute only on primary shards, or\n- * a custom value, which guarantees that the same order will be used across different requests.\n- */\n- public GetIndexedScriptRequestBuilder setPreference(String preference) {\n- request.preference(preference);\n- return this;\n- }\n-\n- /**\n- * Should a refresh be executed before this get operation causing the operation to\n- * return the latest value. Note, heavy get should not set this to <tt>true</tt>. 
Defaults\n- * to <tt>false</tt>.\n- */\n- public GetIndexedScriptRequestBuilder setRefresh(boolean refresh) {\n- request.refresh(refresh);\n- return this;\n- }\n-\n- public GetIndexedScriptRequestBuilder setRealtime(Boolean realtime) {\n- request.realtime(realtime);\n- return this;\n- }\n-\n /**\n * Sets the version, which will cause the get operation to only be performed if a matching\n * version exists and no changes happened on the doc since then.", "filename": "src/main/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequestBuilder.java", "status": "modified" }, { "diff": "@@ -115,9 +115,6 @@ public static TemplateContext parse(XContentParser parser, String paramsFieldnam\n currentFieldName = parser.currentName();\n } else if (parameterMap.containsKey(currentFieldName)) {\n type = parameterMap.get(currentFieldName);\n-\n-\n-\n if (token == XContentParser.Token.START_OBJECT && !parser.hasTextCharacters()) {\n XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent());\n builder.copyCurrentStructure(parser);", "filename": "src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java", "status": "modified" }, { "diff": "@@ -42,6 +42,11 @@\n */\n public class RestGetIndexedScriptAction extends BaseRestHandler {\n \n+ private final static String LANG_FIELD = \"lang\";\n+ private final static String ID_FIELD = \"_id\";\n+ private final static String VERSION_FIELD = \"_version\";\n+ private final static String SCRIPT_FIELD = \"script\";\n+\n @Inject\n public RestGetIndexedScriptAction(Settings settings, RestController controller, Client client) {\n this(settings, controller, true, client);\n@@ -54,14 +59,15 @@ protected RestGetIndexedScriptAction(Settings settings, RestController controlle\n }\n }\n \n- protected String getScriptLang(RestRequest request) {\n- return request.param(\"lang\");\n+ protected String getScriptFieldName() {\n+ return SCRIPT_FIELD;\n }\n \n- protected String getScriptFieldName() {\n- return \"script\";\n+ protected String getScriptLang(RestRequest request) {\n+ return request.param(LANG_FIELD);\n }\n \n+\n @Override\n public void handleRequest(final RestRequest request, final RestChannel channel, Client client) {\n final GetIndexedScriptRequest getRequest = new GetIndexedScriptRequest(getScriptLang(request), request.param(\"id\"));\n@@ -78,6 +84,9 @@ public RestResponse buildResponse(GetIndexedScriptResponse response) throws Exce\n String script = response.getScript();\n builder.startObject();\n builder.field(getScriptFieldName(), script);\n+ builder.field(VERSION_FIELD, response.getVersion());\n+ builder.field(LANG_FIELD, response.getScriptLang());\n+ builder.field(ID_FIELD, response.getId());\n builder.endObject();\n return new BytesRestResponse(OK, builder);\n } catch( IOException|ClassCastException e ){", "filename": "src/main/java/org/elasticsearch/rest/action/script/RestGetIndexedScriptAction.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.hppc.ObjectOpenHashSet;\n import com.carrotsearch.hppc.ObjectSet;\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n+import com.google.common.base.Charsets;\n import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n@@ -77,7 +78,6 @@\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n-import java.nio.charset.Charset;\n import java.util.HashMap;\n import java.util.Iterator;\n import java.util.Map;\n@@ -614,11 
+614,21 @@ private void parseTemplate(ShardSearchRequest request) {\n \n if (templateContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n //Try to double parse for nested template id/file\n- parser = XContentFactory.xContent(templateContext.template().getBytes(Charset.defaultCharset())).createParser(templateContext.template().getBytes(Charset.defaultCharset()));\n- TemplateQueryParser.TemplateContext innerContext = TemplateQueryParser.parse(parser, \"params\");\n- if (hasLength(innerContext.template()) && !innerContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n- //An inner template referring to a filename or id\n- templateContext = new TemplateQueryParser.TemplateContext(innerContext.scriptType(), innerContext.template(), templateContext.params());\n+ parser = null;\n+ try {\n+ byte[] templateBytes = templateContext.template().getBytes(Charsets.UTF_8);\n+ parser = XContentFactory.xContent(templateBytes).createParser(templateBytes);\n+ } catch (ElasticsearchParseException epe) {\n+ //This was an non-nested template, the parse failure was due to this, it is safe to assume this refers to a file\n+ //for backwards compatibility and keep going\n+ templateContext = new TemplateQueryParser.TemplateContext(ScriptService.ScriptType.FILE, templateContext.template(), templateContext.params());\n+ }\n+ if (parser != null) {\n+ TemplateQueryParser.TemplateContext innerContext = TemplateQueryParser.parse(parser, \"params\");\n+ if (hasLength(innerContext.template()) && !innerContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n+ //An inner template referring to a filename or id\n+ templateContext = new TemplateQueryParser.TemplateContext(innerContext.scriptType(), innerContext.template(), templateContext.params());\n+ }\n }\n }\n } catch (IOException e) {", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -34,19 +34,10 @@ public class GetIndexedScriptRequestTests extends ElasticsearchTestCase {\n @Test\n public void testGetIndexedScriptRequestSerialization() throws IOException {\n GetIndexedScriptRequest request = new GetIndexedScriptRequest(\"lang\", \"id\");\n- if (randomBoolean()) {\n- request.realtime(false);\n- }\n- if (randomBoolean()) {\n- request.refresh(true);\n- }\n if (randomBoolean()) {\n request.version(randomIntBetween(1, Integer.MAX_VALUE));\n request.versionType(randomFrom(VersionType.values()));\n }\n- if (randomBoolean()) {\n- request.routing(randomAsciiOfLength(randomIntBetween(1, 10)));\n- }\n \n BytesStreamOutput out = new BytesStreamOutput();\n out.setVersion(randomVersion());\n@@ -59,8 +50,6 @@ public void testGetIndexedScriptRequestSerialization() throws IOException {\n \n assertThat(request2.id(), equalTo(request.id()));\n assertThat(request2.scriptLang(), equalTo(request.scriptLang()));\n- assertThat(request2.realtime(), equalTo(request.realtime()));\n- assertThat(request2.refresh(), equalTo(request.refresh()));\n assertThat(request2.version(), equalTo(request.version()));\n assertThat(request2.versionType(), equalTo(request.versionType()));\n }", "filename": "src/test/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequestTests.java", "status": "modified" } ] }
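The `SearchService` change in the diff above restores the pre-1.3.0 shorthand `{ "template" : "ondiskscript" }` by first trying to parse the inline template as structured content and, if that parse fails, treating the bare string as an on-disk script name. A simplified sketch of that decision (the wrapper class and method are illustrative; the parse-then-fall-back logic follows the diff):

```java
import java.io.IOException;

import com.google.common.base.Charsets;
import org.elasticsearch.ElasticsearchParseException;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.script.ScriptService;

// Simplified restatement of the fallback added in SearchService#parseTemplate: an
// "inline" template string that cannot even be recognised as structured content is
// assumed to be the name of an on-disk script, restoring the pre-1.3.0 shorthand.
final class TemplateFallbackSketch {

    static ScriptService.ScriptType resolveType(String template) {
        byte[] bytes = template.getBytes(Charsets.UTF_8);
        try {
            // Throws ElasticsearchParseException for a bare string such as "ondiskscript".
            XContentFactory.xContent(bytes).createParser(bytes);
            return ScriptService.ScriptType.INLINE; // parseable: a genuine (possibly nested) template
        } catch (ElasticsearchParseException | IOException e) {
            return ScriptService.ScriptType.FILE;   // not structured content: treat as a file reference
        }
    }
}
```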
{ "body": "`GetIndexedScriptRequest` supports setting a routing value, which never gets serialized over the transport though nor read when converting the request to the internal get one, also the correspoinding write operations `PutIndexedScriptRequest` and `DeleteIndexedScriptRequest` don't support it, thus it makes no sense to support it when reading.\n\nQuestion is: does it make sense to support routing here or shall we remove the support for it as it never worked?\n", "comments": [ { "body": "Given that the index is a single shard as of #7500, it seems that routing isn't required. (Unless users can change these settings?)\n", "created_at": "2014-09-06T16:07:11Z" }, { "body": "Right I would just remove any mention of routing around get indexed script api then.\n", "created_at": "2014-09-09T12:15:43Z" } ], "number": 7559, "title": "Java api: get indexed script support for routing is incomplete" }
{ "body": "Some cleanup to close issues around the indexed scripts API\n\nThis closes : \n#7560\n#7568\n#7559\n#7647\n#7567\n", "number": 7787, "review_comments": [ { "body": "can we just use constants for this?\n", "created_at": "2014-09-18T14:14:42Z" }, { "body": "I think we should pass UTF-8 here instead of default charset?\n", "created_at": "2014-09-18T14:15:32Z" } ], "title": "Cleaned up various issues" }
{ "commits": [ { "message": "Indexed Scripts/Templates : Cleanup\n\nThis contains several cleanups to the indexed scripts.\nRemove the unused FetchSourceContext from the Get request..\nAdd lang,_version,_id to the REST GET API.\nRemoves the routing from GetIndexedScriptRequest since the script index is a single shard that is replicated across all nodes.\nFix backward compatible template file reference\nBefore 1.3.0 on disk scripts could be referenced by requesting\n````\n_search/template\n\n{\n \"template\" : \"ondiskscript\"\n}\n````\nThis was broken in 1.3.0 by requiring\n````\n{\n \"template\" :\n {\n \"file\" : \"ondiskscript\"\n }\n}\n````\nThis commit restores the previous behavior.\nRemove support for preference, realtime and refresh\nThese parameters don't make sense anymore for indexed scripts as we always force the preference to _local and\nalways refresh after a Put to the indexed scripts index.\n\nCloses #7568\nCloses #7559\nCloses #7647\nCloses #7567" } ], "files": [ { "diff": "@@ -36,3 +36,8 @@\n body: { \"id\" : \"1\", \"params\" : { \"my_value\" : \"value1_foo\", \"my_size\" : 1 } }\n - match: { hits.total: 1 }\n \n+ - do:\n+ catch: /ElasticsearchIllegalArgumentException.Unable.to.find.on.disk.script.simple1/\n+ search_template:\n+ body: { \"template\" : \"simple1\" }\n+", "filename": "rest-api-spec/test/template/20_search.yaml", "status": "modified" }, { "diff": "@@ -45,13 +45,6 @@ public class GetIndexedScriptRequest extends ActionRequest<GetIndexedScriptReque\n \n protected String scriptLang;\n protected String id;\n- protected String preference;\n- protected String routing;\n- private FetchSourceContext fetchSourceContext;\n-\n- private boolean refresh = false;\n-\n- Boolean realtime;\n \n private VersionType versionType = VersionType.INTERNAL;\n private long version = Versions.MATCH_ANY;\n@@ -117,24 +110,6 @@ public GetIndexedScriptRequest id(String id) {\n return this;\n }\n \n- /**\n- * Controls the shard routing of the request. Using this value to hash the shard\n- * and not the id.\n- */\n- public GetIndexedScriptRequest routing(String routing) {\n- this.routing = routing;\n- return this;\n- }\n-\n- /**\n- * Sets the preference to execute the get. Defaults to randomize across shards. Can be set to\n- * <tt>_local</tt> to prefer local shards, <tt>_primary</tt> to execute only on primary shards, or\n- * a custom value, which guarantees that the same order will be used across different requests.\n- */\n- public GetIndexedScriptRequest preference(String preference) {\n- this.preference = preference;\n- return this;\n- }\n \n public String scriptLang() {\n return scriptLang;\n@@ -144,37 +119,6 @@ public String id() {\n return id;\n }\n \n- public String routing() {\n- return routing;\n- }\n-\n- public String preference() {\n- return this.preference;\n- }\n-\n- /**\n- * Should a refresh be executed before this get operation causing the operation to\n- * return the latest value. Note, heavy get should not set this to <tt>true</tt>. Defaults\n- * to <tt>false</tt>.\n- */\n- public GetIndexedScriptRequest refresh(boolean refresh) {\n- this.refresh = refresh;\n- return this;\n- }\n-\n- public boolean refresh() {\n- return this.refresh;\n- }\n-\n- public boolean realtime() {\n- return this.realtime == null ? 
true : this.realtime;\n- }\n-\n- public GetIndexedScriptRequest realtime(Boolean realtime) {\n- this.realtime = realtime;\n- return this;\n- }\n-\n /**\n * Sets the version, which will cause the get operation to only be performed if a matching\n * version exists and no changes happened on the doc since then.\n@@ -209,19 +153,17 @@ public void readFrom(StreamInput in) throws IOException {\n }\n scriptLang = in.readString();\n id = in.readString();\n- preference = in.readOptionalString();\n- refresh = in.readBoolean();\n- byte realtime = in.readByte();\n- if (realtime == 0) {\n- this.realtime = false;\n- } else if (realtime == 1) {\n- this.realtime = true;\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ in.readOptionalString(); //Preference\n+ in.readBoolean(); //Refresh\n+ in.readByte(); //Realtime\n }\n-\n this.versionType = VersionType.fromValue(in.readByte());\n this.version = Versions.readVersionWithVLongForBW(in);\n \n- fetchSourceContext = FetchSourceContext.optionalReadFromStream(in);\n+ if (in.getVersion().before(Version.V_1_5_0)) {\n+ FetchSourceContext.optionalReadFromStream(in);\n+ }\n }\n \n @Override\n@@ -233,24 +175,23 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n out.writeString(scriptLang);\n out.writeString(id);\n- out.writeOptionalString(preference);\n- out.writeBoolean(refresh);\n- if (realtime == null) {\n- out.writeByte((byte) -1);\n- } else if (!realtime) {\n- out.writeByte((byte) 0);\n- } else {\n- out.writeByte((byte) 1);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ out.writeOptionalString(\"_local\"); //Preference\n+ out.writeBoolean(true); //Refresh\n+ out.writeByte((byte) -1); //Realtime\n }\n \n out.writeByte(versionType.getValue());\n Versions.writeVersionWithVLongForBW(version, out);\n \n- FetchSourceContext.optionalWriteToStream(fetchSourceContext, out);\n+ if (out.getVersion().before(Version.V_1_5_0)) {\n+ FetchSourceContext.optionalWriteToStream(null, out);\n+ }\n+\n }\n \n @Override\n public String toString() {\n- return \"[\" + ScriptService.SCRIPT_INDEX + \"][\" + scriptLang + \"][\" + id + \"]: routing [\" + routing + \"]\";\n+ return \"[\" + ScriptService.SCRIPT_INDEX + \"][\" + scriptLang + \"][\" + id + \"]\";\n }\n }", "filename": "src/main/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequest.java", "status": "modified" }, { "diff": "@@ -52,31 +52,6 @@ public GetIndexedScriptRequestBuilder setId(String id) {\n return this;\n }\n \n- /**\n- * Sets the preference to execute the search. Defaults to randomize across shards. Can be set to\n- * <tt>_local</tt> to prefer local shards, <tt>_primary</tt> to execute only on primary shards, or\n- * a custom value, which guarantees that the same order will be used across different requests.\n- */\n- public GetIndexedScriptRequestBuilder setPreference(String preference) {\n- request.preference(preference);\n- return this;\n- }\n-\n- /**\n- * Should a refresh be executed before this get operation causing the operation to\n- * return the latest value. Note, heavy get should not set this to <tt>true</tt>. 
Defaults\n- * to <tt>false</tt>.\n- */\n- public GetIndexedScriptRequestBuilder setRefresh(boolean refresh) {\n- request.refresh(refresh);\n- return this;\n- }\n-\n- public GetIndexedScriptRequestBuilder setRealtime(Boolean realtime) {\n- request.realtime(realtime);\n- return this;\n- }\n-\n /**\n * Sets the version, which will cause the get operation to only be performed if a matching\n * version exists and no changes happened on the doc since then.", "filename": "src/main/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequestBuilder.java", "status": "modified" }, { "diff": "@@ -115,9 +115,6 @@ public static TemplateContext parse(XContentParser parser, String paramsFieldnam\n currentFieldName = parser.currentName();\n } else if (parameterMap.containsKey(currentFieldName)) {\n type = parameterMap.get(currentFieldName);\n-\n-\n-\n if (token == XContentParser.Token.START_OBJECT && !parser.hasTextCharacters()) {\n XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent());\n builder.copyCurrentStructure(parser);", "filename": "src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java", "status": "modified" }, { "diff": "@@ -42,6 +42,11 @@\n */\n public class RestGetIndexedScriptAction extends BaseRestHandler {\n \n+ private final static String LANG_FIELD = \"lang\";\n+ private final static String ID_FIELD = \"_id\";\n+ private final static String VERSION_FIELD = \"_version\";\n+ private final static String SCRIPT_FIELD = \"script\";\n+\n @Inject\n public RestGetIndexedScriptAction(Settings settings, RestController controller, Client client) {\n this(settings, controller, true, client);\n@@ -54,14 +59,15 @@ protected RestGetIndexedScriptAction(Settings settings, RestController controlle\n }\n }\n \n- protected String getScriptLang(RestRequest request) {\n- return request.param(\"lang\");\n+ protected String getScriptFieldName() {\n+ return SCRIPT_FIELD;\n }\n \n- protected String getScriptFieldName() {\n- return \"script\";\n+ protected String getScriptLang(RestRequest request) {\n+ return request.param(LANG_FIELD);\n }\n \n+\n @Override\n public void handleRequest(final RestRequest request, final RestChannel channel, Client client) {\n final GetIndexedScriptRequest getRequest = new GetIndexedScriptRequest(getScriptLang(request), request.param(\"id\"));\n@@ -78,6 +84,9 @@ public RestResponse buildResponse(GetIndexedScriptResponse response) throws Exce\n String script = response.getScript();\n builder.startObject();\n builder.field(getScriptFieldName(), script);\n+ builder.field(VERSION_FIELD, response.getVersion());\n+ builder.field(LANG_FIELD, response.getScriptLang());\n+ builder.field(ID_FIELD, response.getId());\n builder.endObject();\n return new BytesRestResponse(OK, builder);\n } catch( IOException|ClassCastException e ){", "filename": "src/main/java/org/elasticsearch/rest/action/script/RestGetIndexedScriptAction.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.hppc.ObjectOpenHashSet;\n import com.carrotsearch.hppc.ObjectSet;\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n+import com.google.common.base.Charsets;\n import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n@@ -77,7 +78,6 @@\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n-import java.nio.charset.Charset;\n import java.util.HashMap;\n import java.util.Iterator;\n import java.util.Map;\n@@ -614,11 
+614,21 @@ private void parseTemplate(ShardSearchRequest request) {\n \n if (templateContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n //Try to double parse for nested template id/file\n- parser = XContentFactory.xContent(templateContext.template().getBytes(Charset.defaultCharset())).createParser(templateContext.template().getBytes(Charset.defaultCharset()));\n- TemplateQueryParser.TemplateContext innerContext = TemplateQueryParser.parse(parser, \"params\");\n- if (hasLength(innerContext.template()) && !innerContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n- //An inner template referring to a filename or id\n- templateContext = new TemplateQueryParser.TemplateContext(innerContext.scriptType(), innerContext.template(), templateContext.params());\n+ parser = null;\n+ try {\n+ byte[] templateBytes = templateContext.template().getBytes(Charsets.UTF_8);\n+ parser = XContentFactory.xContent(templateBytes).createParser(templateBytes);\n+ } catch (ElasticsearchParseException epe) {\n+ //This was an non-nested template, the parse failure was due to this, it is safe to assume this refers to a file\n+ //for backwards compatibility and keep going\n+ templateContext = new TemplateQueryParser.TemplateContext(ScriptService.ScriptType.FILE, templateContext.template(), templateContext.params());\n+ }\n+ if (parser != null) {\n+ TemplateQueryParser.TemplateContext innerContext = TemplateQueryParser.parse(parser, \"params\");\n+ if (hasLength(innerContext.template()) && !innerContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n+ //An inner template referring to a filename or id\n+ templateContext = new TemplateQueryParser.TemplateContext(innerContext.scriptType(), innerContext.template(), templateContext.params());\n+ }\n }\n }\n } catch (IOException e) {", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -34,19 +34,10 @@ public class GetIndexedScriptRequestTests extends ElasticsearchTestCase {\n @Test\n public void testGetIndexedScriptRequestSerialization() throws IOException {\n GetIndexedScriptRequest request = new GetIndexedScriptRequest(\"lang\", \"id\");\n- if (randomBoolean()) {\n- request.realtime(false);\n- }\n- if (randomBoolean()) {\n- request.refresh(true);\n- }\n if (randomBoolean()) {\n request.version(randomIntBetween(1, Integer.MAX_VALUE));\n request.versionType(randomFrom(VersionType.values()));\n }\n- if (randomBoolean()) {\n- request.routing(randomAsciiOfLength(randomIntBetween(1, 10)));\n- }\n \n BytesStreamOutput out = new BytesStreamOutput();\n out.setVersion(randomVersion());\n@@ -59,8 +50,6 @@ public void testGetIndexedScriptRequestSerialization() throws IOException {\n \n assertThat(request2.id(), equalTo(request.id()));\n assertThat(request2.scriptLang(), equalTo(request.scriptLang()));\n- assertThat(request2.realtime(), equalTo(request.realtime()));\n- assertThat(request2.refresh(), equalTo(request.refresh()));\n assertThat(request2.version(), equalTo(request.version()));\n assertThat(request2.versionType(), equalTo(request.versionType()));\n }", "filename": "src/test/java/org/elasticsearch/action/indexedscripts/get/GetIndexedScriptRequestTests.java", "status": "modified" } ] }
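The backward-compatibility fix in the commit above boils down to: try to parse the inline template as a nested object, and fall back to treating the value as an on-disk script name when that parse fails. Below is a minimal standalone sketch of that fallback, assuming invented names (`TemplateRef`, `resolve`) and a naive leading-brace check in place of the real XContent parsing.

```java
import java.util.Locale;

// Hypothetical, simplified stand-in for the double-parse fallback described above.
// Real Elasticsearch code goes through XContentFactory/TemplateQueryParser; here a
// naive check of the leading character decides whether the value is a nested object.
public class TemplateFallbackSketch {

    enum ScriptType { INLINE, FILE, INDEXED }

    static final class TemplateRef {
        final ScriptType type;
        final String value;
        TemplateRef(ScriptType type, String value) { this.type = type; this.value = value; }
        @Override public String toString() {
            return String.format(Locale.ROOT, "%s:%s", type, value);
        }
    }

    // If the inline template does not look like a JSON object, treat it as a file
    // reference for backwards compatibility instead of failing the request.
    static TemplateRef resolve(String inlineTemplate) {
        String trimmed = inlineTemplate.trim();
        if (!trimmed.startsWith("{")) {
            return new TemplateRef(ScriptType.FILE, trimmed);
        }
        // A nested object such as {"file": "ondiskscript"} would be parsed here;
        // the parsing details are omitted in this sketch.
        return new TemplateRef(ScriptType.INLINE, trimmed);
    }

    public static void main(String[] args) {
        System.out.println(resolve("ondiskscript"));                // FILE:ondiskscript
        System.out.println(resolve("{\"file\":\"ondiskscript\"}")); // INLINE:{...}
    }
}
```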
{ "body": "When executing a bulk request, with create index operation and auto generate id, if while the primary is relocating the bulk is executed, and the relocation is done while N items from the bulk have executed, the full shard bulk request will be retried on the new primary. This can create duplicates because the request is not makred as potentially holding conflicts.\n\nThis change carries over the response for each item on the request level, and if a conflict is detected on the primary shard, and the response is there (indicating that the request was executed once already), use the mentioned response as the actual response for that bulk shard item.\n\nOn top of that, when a primary fails and is retried, the change now marks the request as potentially causing duplicates, so the actual impl will do the extra lookup needed.\n\nThis change also fixes a bug in our exception handling on the replica, where if a specific item failed, and its not an exception we can ignore, we should actually cause the shard to fail.\n", "comments": [ { "body": "Main change looks good to me. There is a typo (s/false/true/) in the BWC commit. I also think we need a BWC test that indexes while relocating . testRecoverFromPreviousVersion looks like a good candidate for a change.\n", "created_at": "2014-09-15T19:07:19Z" }, { "body": "@bleskes I pushed removing the responses array, I think its ready\n", "created_at": "2014-09-15T20:18:34Z" }, { "body": "LGTM. The last change makes it cleaner imho. I'll add the BWC test tomorrow.\n", "created_at": "2014-09-15T20:41:45Z" }, { "body": "LGTM\n", "created_at": "2014-09-16T08:58:39Z" }, { "body": "LGTM\n", "created_at": "2014-09-16T10:02:06Z" }, { "body": "pushed to 1.3.3 as well\n", "created_at": "2014-09-22T13:11:54Z" } ], "number": 7729, "title": "Bulk operation can create duplicates on primary relocation" }
{ "body": "This PR extends a BWC test to make sure we index during relocation on a cross version cluster.\n\nRelates to #7729\n", "number": 7768, "review_comments": [], "title": "Tests: extend testRecoverFromPreviousVersion to sometimes index during relocation" }
{ "commits": [ { "message": "Tests: extend testRecoverFromPreviousVersion to sometimes index during relocation\n\nRelates to #7729" } ], "files": [ { "diff": "@@ -148,19 +148,43 @@ public void testIndexAndSearch() throws Exception {\n \n @Test\n public void testRecoverFromPreviousVersion() throws ExecutionException, InterruptedException {\n+\n+ if (backwardsCluster().numNewDataNodes() == 0) {\n+ backwardsCluster().startNewNode();\n+ }\n assertAcked(prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(\"index.routing.allocation.exclude._name\", backwardsCluster().newNodePattern()).put(indexSettings())));\n ensureYellow();\n assertAllShardsOnNodes(\"test\", backwardsCluster().backwardsNodePattern());\n int numDocs = randomIntBetween(100, 150);\n+ logger.info(\" --> indexing [{}] docs\", numDocs);\n IndexRequestBuilder[] docs = new IndexRequestBuilder[numDocs];\n for (int i = 0; i < numDocs; i++) {\n docs[i] = client().prepareIndex(\"test\", \"type1\", randomRealisticUnicodeOfLength(10) + String.valueOf(i)).setSource(\"field1\", English.intToEnglish(i));\n }\n indexRandom(true, docs);\n CountResponse countResponse = client().prepareCount().get();\n assertHitCount(countResponse, numDocs);\n- backwardsCluster().allowOnlyNewNodes(\"test\");\n- ensureYellow(\"test\");// move all shards to the new node\n+\n+ if (randomBoolean()) {\n+ logger.info(\" --> moving index to new nodes\");\n+ backwardsCluster().allowOnlyNewNodes(\"test\");\n+ } else {\n+ logger.info(\" --> allow index to on all nodes\");\n+ backwardsCluster().allowOnAllNodes(\"test\");\n+ }\n+\n+ logger.info(\" --> indexing [{}] more docs\", numDocs);\n+ // sometimes index while relocating\n+ if (randomBoolean()) {\n+ for (int i = 0; i < numDocs; i++) {\n+ docs[i] = client().prepareIndex(\"test\", \"type1\", randomRealisticUnicodeOfLength(10) + String.valueOf(numDocs + i)).setSource(\"field1\", English.intToEnglish(numDocs + i));\n+ }\n+ indexRandom(true, docs);\n+ numDocs *= 2;\n+ }\n+\n+ logger.info(\" --> waiting for relocation to complete\", numDocs);\n+ ensureYellow(\"test\");// move all shards to the new node (it waits on relocation)\n final int numIters = randomIntBetween(10, 20);\n for (int i = 0; i < numIters; i++) {\n countResponse = client().prepareCount().get();\n@@ -339,6 +363,7 @@ public void assertVersionCreated(Version version, String... indices) {\n }\n }\n \n+\n @Test\n public void testUnsupportedFeatures() throws IOException {\n if (compatibilityVersion().before(Version.V_1_3_0)) {", "filename": "src/test/java/org/elasticsearch/bwcompat/BasicBackwardsCompatibilityTest.java", "status": "modified" } ] }
{ "body": "`TransportMasterNodeOperationAction#checkBlock` should be implemented by any subclasses of ``TransportMasterNodeOperationAction` but it's returning `null` by default. We should make that abstract to force implementations for it.\n", "comments": [], "number": 7740, "title": "Internal: Make `TransportMasterNodeOperationAction#checkBlock` abstract" }
{ "body": "Master node related operations were missing proper handling of cluster blocks, allowing for example to perform cluster level update settings even before the state was fully restored on initial cluster startup\n\nNote, the change allows to change read only related settings without checking for blocks on update settings, as without it, it means one can't re-enable metadata/write. Also, it doesn't check for blocks on cluster state and health API, as those are allowed to be used even when blocked to figure out what causes the block.\n\nCloses #7740 \n", "number": 7763, "review_comments": [], "title": "Add missing cluster blocks handling for master operations" }
{ "commits": [ { "message": "Add missing cluster blocks handling for master operations\nMaster node related operations were missing proper handling of cluster blocks, allowing for example to perform cluster level update settings even before the state was fully restored on initial cluster startup\n\nNote, the change allows to change read only related settings without checking for blocks on update settings, as without it, it means one can't re-enable metadata/write. Also, it doesn't check for blocks on cluster state and health API, as those are allowed to be used even when blocked to figure out what causes the block.\ncloses #7763\ncloses #7740" } ], "files": [ { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ProcessedClusterStateUpdateTask;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -59,6 +60,11 @@ protected String executor() {\n return ThreadPool.Names.GENERIC;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(ClusterHealthRequest request, ClusterState state) {\n+ return null; // we want users to be able to call this even when there are global blocks, just to check the health (are there blocks?)\n+ }\n+\n @Override\n protected ClusterHealthRequest newRequest() {\n return new ClusterHealthRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java", "status": "modified" }, { "diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -71,6 +73,11 @@ protected String executor() {\n return ThreadPool.Names.GENERIC;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(NodesShutdownRequest request, ClusterState state) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n @Override\n protected NodesShutdownRequest newRequest() {\n return new NodesShutdownRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/node/shutdown/TransportNodesShutdownAction.java", "status": "modified" }, { "diff": "@@ -26,6 +26,8 @@\n import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.routing.allocation.RoutingExplanations;\n@@ -54,6 +56,11 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(ClusterRerouteRequest request, ClusterState state) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n @Override\n protected ClusterRerouteRequest newRequest() {\n return new 
ClusterRerouteRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java", "status": "modified" }, { "diff": "@@ -26,6 +26,8 @@\n import org.elasticsearch.cluster.AckedClusterStateUpdateTask;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -67,6 +69,17 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request, ClusterState state) {\n+ // allow for dedicated changes to the metadata blocks, so we don't block those to allow to \"re-enable\" it\n+ if ((request.transientSettings().getAsMap().isEmpty() && request.persistentSettings().getAsMap().size() == 1 && request.persistentSettings().get(MetaData.SETTING_READ_ONLY) != null) ||\n+ request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && request.transientSettings().get(MetaData.SETTING_READ_ONLY) != null) {\n+ return null;\n+ }\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n+\n @Override\n protected ClusterUpdateSettingsRequest newRequest() {\n return new ClusterUpdateSettingsRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -25,6 +25,8 @@\n import org.elasticsearch.action.support.master.TransportMasterNodeReadOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\n import org.elasticsearch.cluster.routing.ShardIterator;\n@@ -54,6 +56,11 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(ClusterSearchShardsRequest request, ClusterState state) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ }\n+\n @Override\n protected ClusterSearchShardsRequest newRequest() {\n return new ClusterSearchShardsRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java", "status": "modified" }, { "diff": "@@ -28,6 +28,8 @@\n import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.SnapshotId;\n import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n import org.elasticsearch.common.Strings;\n@@ -66,6 +68,11 @@ protected String executor() {\n return ThreadPool.Names.GENERIC;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(SnapshotsStatusRequest request, ClusterState state) {\n+ return 
state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n @Override\n protected SnapshotsStatusRequest newRequest() {\n return new SnapshotsStatusRequest();\n@@ -105,22 +112,22 @@ protected void masterOperation(final SnapshotsStatusRequest request,\n \n transportNodesSnapshotsStatus.status(nodesIds.toArray(new String[nodesIds.size()]),\n snapshotIds, request.masterNodeTimeout(), new ActionListener<TransportNodesSnapshotsStatus.NodesSnapshotStatus>() {\n- @Override\n- public void onResponse(TransportNodesSnapshotsStatus.NodesSnapshotStatus nodeSnapshotStatuses) {\n- try {\n- ImmutableList<SnapshotMetaData.Entry> currentSnapshots =\n- snapshotsService.currentSnapshots(request.repository(), request.snapshots());\n- listener.onResponse(buildResponse(request, currentSnapshots, nodeSnapshotStatuses));\n- } catch (Throwable e) {\n- listener.onFailure(e);\n- }\n- }\n+ @Override\n+ public void onResponse(TransportNodesSnapshotsStatus.NodesSnapshotStatus nodeSnapshotStatuses) {\n+ try {\n+ ImmutableList<SnapshotMetaData.Entry> currentSnapshots =\n+ snapshotsService.currentSnapshots(request.repository(), request.snapshots());\n+ listener.onResponse(buildResponse(request, currentSnapshots, nodeSnapshotStatuses));\n+ } catch (Throwable e) {\n+ listener.onFailure(e);\n+ }\n+ }\n \n- @Override\n- public void onFailure(Throwable e) {\n- listener.onFailure(e);\n- }\n- });\n+ @Override\n+ public void onFailure(Throwable e) {\n+ listener.onFailure(e);\n+ }\n+ });\n } else {\n // We don't have any in-progress shards, just return current stats\n listener.onResponse(buildResponse(request, currentSnapshots, null));", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java", "status": "modified" }, { "diff": "@@ -26,6 +26,8 @@\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.RoutingTable;\n@@ -54,6 +56,15 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(ClusterStateRequest request, ClusterState state) {\n+ // cluster state calls are done also on a fully blocked cluster to figure out what is going\n+ // on in the cluster. 
For example, which nodes have joined yet the recovery has not yet kicked\n+ // in, we need to make sure we allow those calls\n+ // return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return null;\n+ }\n+\n @Override\n protected ClusterStateRequest newRequest() {\n return new ClusterStateRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/state/TransportClusterStateAction.java", "status": "modified" }, { "diff": "@@ -21,10 +21,13 @@\n \n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.master.TransportMasterNodeReadOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -48,6 +51,11 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(PendingClusterTasksRequest request, ClusterState state) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n @Override\n protected PendingClusterTasksRequest newRequest() {\n return new PendingClusterTasksRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java", "status": "modified" }, { "diff": "@@ -25,6 +25,8 @@\n import org.elasticsearch.action.support.master.TransportMasterNodeReadOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -45,6 +47,11 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ }\n+\n @Override\n protected GetAliasesRequest newRequest() {\n return new GetAliasesRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/exists/TransportAliasesExistAction.java", "status": "modified" }, { "diff": "@@ -24,6 +24,8 @@\n import org.elasticsearch.action.support.master.TransportMasterNodeReadOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.inject.Inject;\n@@ -48,6 +50,11 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n+ return 
state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ }\n+\n @Override\n protected GetAliasesRequest newRequest() {\n return new GetAliasesRequest();\n@@ -62,7 +69,7 @@ protected GetAliasesResponse newResponse() {\n protected void masterOperation(GetAliasesRequest request, ClusterState state, ActionListener<GetAliasesResponse> listener) throws ElasticsearchException {\n String[] concreteIndices = state.metaData().concreteIndices(request.indicesOptions(), request.indices());\n @SuppressWarnings(\"unchecked\") // ImmutableList to List results incompatible type\n- ImmutableOpenMap<String, List<AliasMetaData>> result = (ImmutableOpenMap) state.metaData().findAliases(request.aliases(), concreteIndices);\n+ ImmutableOpenMap<String, List<AliasMetaData>> result = (ImmutableOpenMap) state.metaData().findAliases(request.aliases(), concreteIndices);\n listener.onResponse(new GetAliasesResponse(result));\n }\n ", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -27,6 +27,8 @@\n import org.elasticsearch.action.support.master.info.TransportClusterInfoAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.Strings;\n@@ -44,10 +46,21 @@ public class TransportGetIndexAction extends TransportClusterInfoAction<GetIndex\n \n @Inject\n public TransportGetIndexAction(Settings settings, TransportService transportService, ClusterService clusterService,\n- ThreadPool threadPool, ActionFilters actionFilters) {\n+ ThreadPool threadPool, ActionFilters actionFilters) {\n super(settings, GetIndexAction.NAME, transportService, clusterService, threadPool, actionFilters);\n }\n \n+ @Override\n+ protected String executor() {\n+ // very lightweight operation, no need to fork\n+ return ThreadPool.Names.SAME;\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkBlock(GetIndexRequest request, ClusterState state) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ }\n+\n @Override\n protected GetIndexRequest newRequest() {\n return new GetIndexRequest();\n@@ -60,7 +73,7 @@ protected GetIndexResponse newResponse() {\n \n @Override\n protected void doMasterOperation(final GetIndexRequest request, String[] concreteIndices, final ClusterState state,\n- final ActionListener<GetIndexResponse> listener) throws ElasticsearchException {\n+ final ActionListener<GetIndexResponse> listener) throws ElasticsearchException {\n ImmutableOpenMap<String, ImmutableList<Entry>> warmersResult = ImmutableOpenMap.of();\n ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> mappingsResult = ImmutableOpenMap.of();\n ImmutableOpenMap<String, ImmutableList<AliasMetaData>> aliasesResult = ImmutableOpenMap.of();\n@@ -72,40 +85,40 @@ protected void doMasterOperation(final GetIndexRequest request, String[] concret\n boolean doneWarmers = false;\n for (String feature : features) {\n switch (feature) {\n- case \"_warmer\":\n- case \"_warmers\":\n- if (!doneWarmers) {\n- warmersResult = 
state.metaData().findWarmers(concreteIndices, request.types(), Strings.EMPTY_ARRAY);\n- doneWarmers = true;\n- }\n- break;\n- case \"_mapping\":\n- case \"_mappings\":\n- if (!doneMappings) {\n- mappingsResult = state.metaData().findMappings(concreteIndices, request.types());\n- doneMappings = true;\n- }\n- break;\n- case \"_alias\":\n- case \"_aliases\":\n- if (!doneAliases) {\n- aliasesResult = state.metaData().findAliases(Strings.EMPTY_ARRAY, concreteIndices);\n- doneAliases = true;\n- }\n- break;\n- case \"_settings\":\n- if (!doneSettings) {\n- ImmutableOpenMap.Builder<String, Settings> settingsMapBuilder = ImmutableOpenMap.builder();\n- for (String index : concreteIndices) {\n- settingsMapBuilder.put(index, state.metaData().index(index).getSettings());\n+ case \"_warmer\":\n+ case \"_warmers\":\n+ if (!doneWarmers) {\n+ warmersResult = state.metaData().findWarmers(concreteIndices, request.types(), Strings.EMPTY_ARRAY);\n+ doneWarmers = true;\n+ }\n+ break;\n+ case \"_mapping\":\n+ case \"_mappings\":\n+ if (!doneMappings) {\n+ mappingsResult = state.metaData().findMappings(concreteIndices, request.types());\n+ doneMappings = true;\n+ }\n+ break;\n+ case \"_alias\":\n+ case \"_aliases\":\n+ if (!doneAliases) {\n+ aliasesResult = state.metaData().findAliases(Strings.EMPTY_ARRAY, concreteIndices);\n+ doneAliases = true;\n+ }\n+ break;\n+ case \"_settings\":\n+ if (!doneSettings) {\n+ ImmutableOpenMap.Builder<String, Settings> settingsMapBuilder = ImmutableOpenMap.builder();\n+ for (String index : concreteIndices) {\n+ settingsMapBuilder.put(index, state.metaData().index(index).getSettings());\n+ }\n+ settings = settingsMapBuilder.build();\n+ doneSettings = true;\n }\n- settings = settingsMapBuilder.build();\n- doneSettings = true;\n- }\n- break;\n+ break;\n \n- default:\n- throw new ElasticsearchIllegalStateException(\"feature [\" + feature + \"] is not valid\");\n+ default:\n+ throw new ElasticsearchIllegalStateException(\"feature [\" + feature + \"] is not valid\");\n }\n }\n listener.onResponse(new GetIndexResponse(concreteIndices, warmersResult, mappingsResult, aliasesResult, settings));", "filename": "src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java", "status": "modified" }, { "diff": "@@ -25,6 +25,8 @@\n import org.elasticsearch.action.support.master.info.TransportClusterInfoAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.inject.Inject;\n@@ -41,6 +43,17 @@ public TransportGetMappingsAction(Settings settings, TransportService transportS\n super(settings, GetMappingsAction.NAME, transportService, clusterService, threadPool, actionFilters);\n }\n \n+ @Override\n+ protected String executor() {\n+ // very lightweight operation, no need to fork\n+ return ThreadPool.Names.SAME;\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkBlock(GetMappingsRequest request, ClusterState state) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ }\n+\n @Override\n protected GetMappingsRequest newRequest() {\n return new GetMappingsRequest();", "filename": 
"src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java", "status": "modified" }, { "diff": "@@ -21,10 +21,13 @@\n \n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.master.TransportMasterNodeReadOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.inject.Inject;\n@@ -57,6 +60,12 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(GetSettingsRequest request, ClusterState state) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ }\n+\n+\n @Override\n protected GetSettingsRequest newRequest() {\n return new GetSettingsRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/indices/settings/get/TransportGetSettingsAction.java", "status": "modified" }, { "diff": "@@ -26,6 +26,9 @@\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaDataUpdateSettingsService;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -52,6 +55,19 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(UpdateSettingsRequest request, ClusterState state) {\n+ // allow for dedicated changes to the metadata blocks, so we don't block those to allow to \"re-enable\" it\n+ ClusterBlockException globalBlock = state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ if (globalBlock != null) {\n+ return globalBlock;\n+ }\n+ if (request.settings().getAsMap().size() == 1 && (request.settings().get(IndexMetaData.SETTING_BLOCKS_METADATA) != null || request.settings().get(IndexMetaData.SETTING_READ_ONLY) != null )) {\n+ return null;\n+ }\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ }\n+\n @Override\n protected UpdateSettingsRequest newRequest() {\n return new UpdateSettingsRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -26,6 +26,8 @@\n import org.elasticsearch.action.support.master.TransportMasterNodeReadOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.metadata.IndexTemplateMetaData;\n import 
org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.regex.Regex;\n@@ -50,6 +52,11 @@ protected String executor() {\n return ThreadPool.Names.SAME;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(GetIndexTemplatesRequest request, ClusterState state) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n @Override\n protected GetIndexTemplatesRequest newRequest() {\n return new GetIndexTemplatesRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/indices/template/get/TransportGetIndexTemplatesAction.java", "status": "modified" }, { "diff": "@@ -22,10 +22,13 @@\n import com.google.common.collect.ImmutableList;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.master.info.TransportClusterInfoAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n@@ -45,6 +48,17 @@ public TransportGetWarmersAction(Settings settings, TransportService transportSe\n super(settings, GetWarmersAction.NAME, transportService, clusterService, threadPool, actionFilters);\n }\n \n+ @Override\n+ protected String executor() {\n+ // very lightweight operation, no need to fork\n+ return ThreadPool.Names.SAME;\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkBlock(GetWarmersRequest request, ClusterState state) {\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ }\n+\n @Override\n protected GetWarmersRequest newRequest() {\n return new GetWarmersRequest();", "filename": "src/main/java/org/elasticsearch/action/admin/indices/warmer/get/TransportGetWarmersAction.java", "status": "modified" }, { "diff": "@@ -24,6 +24,8 @@\n import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -48,6 +50,11 @@ protected String executor() {\n return ThreadPool.Names.GENERIC;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(AbortBenchmarkRequest request, ClusterState state) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n @Override\n protected AbortBenchmarkRequest newRequest() {\n return new AbortBenchmarkRequest();", "filename": "src/main/java/org/elasticsearch/action/bench/TransportAbortBenchmarkAction.java", "status": "modified" }, { "diff": "@@ -24,10 +24,12 @@\n import org.elasticsearch.action.support.master.TransportMasterNodeOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import 
org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.threadpool.ThreadPool;\n-import org.elasticsearch.transport.*;\n+import org.elasticsearch.transport.TransportService;\n \n \n /**\n@@ -49,6 +51,11 @@ protected String executor() {\n return ThreadPool.Names.GENERIC;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(BenchmarkRequest request, ClusterState state) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n @Override\n protected BenchmarkRequest newRequest() {\n return new BenchmarkRequest();", "filename": "src/main/java/org/elasticsearch/action/bench/TransportBenchmarkAction.java", "status": "modified" }, { "diff": "@@ -21,9 +21,12 @@\n \n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -49,6 +52,11 @@ protected String executor() {\n return ThreadPool.Names.GENERIC;\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(BenchmarkStatusRequest request, ClusterState state) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ }\n+\n @Override\n protected BenchmarkStatusRequest newRequest() {\n return new BenchmarkStatusRequest();", "filename": "src/main/java/org/elasticsearch/action/bench/TransportBenchmarkStatusAction.java", "status": "modified" }, { "diff": "@@ -70,9 +70,7 @@ protected boolean localExecute(Request request) {\n return false;\n }\n \n- protected ClusterBlockException checkBlock(Request request, ClusterState state) {\n- return null;\n- }\n+ protected abstract ClusterBlockException checkBlock(Request request, ClusterState state);\n \n protected void processBeforeDelegationToMaster(Request request, ClusterState state) {\n ", "filename": "src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java", "status": "modified" }, { "diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaDataMappingService;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -117,6 +118,12 @@ public void updateMappingOnMaster(String index, DocumentMapper documentMapper, S\n masterMappingUpdater.add(new MappingChange(documentMapper, index, indexUUID, listener));\n }\n \n+ @Override\n+ protected ClusterBlockException checkBlock(MappingUpdatedRequest request, ClusterState state) {\n+ // internal call by other nodes, no need to check for blocks\n+ return null;\n+ }\n+\n @Override\n protected String executor() {\n // we go async right away", "filename": "src/main/java/org/elasticsearch/cluster/action/index/MappingUpdatedAction.java", "status": "modified" }, { "diff": "@@ -69,7 +69,7 @@ public 
void handleRequest(final RestRequest request, final RestChannel channel,\n }\n }\n for (Map.Entry<String, String> entry : request.params().entrySet()) {\n- if (entry.getKey().equals(\"pretty\") || entry.getKey().equals(\"timeout\") || entry.getKey().equals(\"master_timeout\")) {\n+ if (entry.getKey().equals(\"pretty\") || entry.getKey().equals(\"timeout\") || entry.getKey().equals(\"master_timeout\") || entry.getKey().equals(\"index\")) {\n continue;\n }\n updateSettings.put(entry.getKey(), entry.getValue());", "filename": "src/main/java/org/elasticsearch/rest/action/admin/indices/settings/RestUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.transport.TransportClient;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n@@ -225,75 +226,70 @@ public void testBulkProcessorWaitOnClose() throws Exception {\n @Test\n public void testBulkProcessorConcurrentRequestsReadOnlyIndex() throws Exception {\n createIndex(\"test-ro\");\n- try {\n- assertAcked(client().admin().indices().prepareUpdateSettings(\"test-ro\")\n- .setSettings(ImmutableSettings.builder().put(\"index.blocks.read_only\", true)));\n- ensureGreen();\n-\n- int bulkActions = randomIntBetween(10, 100);\n- int numDocs = randomIntBetween(bulkActions, bulkActions + 100);\n- int concurrentRequests = randomIntBetween(0, 10);\n-\n- int expectedBulkActions = numDocs / bulkActions;\n-\n- final CountDownLatch latch = new CountDownLatch(expectedBulkActions);\n- int totalExpectedBulkActions = numDocs % bulkActions == 0 ? 
expectedBulkActions : expectedBulkActions + 1;\n- final CountDownLatch closeLatch = new CountDownLatch(totalExpectedBulkActions);\n-\n- int testDocs = 0;\n- int testReadOnlyDocs = 0;\n- MultiGetRequestBuilder multiGetRequestBuilder = client().prepareMultiGet();\n- BulkProcessorTestListener listener = new BulkProcessorTestListener(latch, closeLatch);\n-\n- try (BulkProcessor processor = BulkProcessor.builder(client(), listener)\n- .setConcurrentRequests(concurrentRequests).setBulkActions(bulkActions)\n- //set interval and size to high values\n- .setFlushInterval(TimeValue.timeValueHours(24)).setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB)).build()) {\n-\n- for (int i = 1; i <= numDocs; i++) {\n- if (randomBoolean()) {\n- testDocs++;\n- processor.add(new IndexRequest(\"test\", \"test\", Integer.toString(testDocs)).source(\"field\", \"value\"));\n- multiGetRequestBuilder.add(\"test\", \"test\", Integer.toString(testDocs));\n- } else {\n- testReadOnlyDocs++;\n- processor.add(new IndexRequest(\"test-ro\", \"test\", Integer.toString(testReadOnlyDocs)).source(\"field\", \"value\"));\n- }\n- }\n- }\n+ assertAcked(client().admin().indices().prepareUpdateSettings(\"test-ro\")\n+ .setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_BLOCKS_WRITE, true)));\n+ ensureGreen();\n+\n+ int bulkActions = randomIntBetween(10, 100);\n+ int numDocs = randomIntBetween(bulkActions, bulkActions + 100);\n+ int concurrentRequests = randomIntBetween(0, 10);\n \n- closeLatch.await();\n+ int expectedBulkActions = numDocs / bulkActions;\n \n- assertThat(listener.beforeCounts.get(), equalTo(totalExpectedBulkActions));\n- assertThat(listener.afterCounts.get(), equalTo(totalExpectedBulkActions));\n- assertThat(listener.bulkFailures.size(), equalTo(0));\n- assertThat(listener.bulkItems.size(), equalTo(testDocs + testReadOnlyDocs));\n-\n- Set<String> ids = new HashSet<>();\n- Set<String> readOnlyIds = new HashSet<>();\n- for (BulkItemResponse bulkItemResponse : listener.bulkItems) {\n- assertThat(bulkItemResponse.getIndex(), either(equalTo(\"test\")).or(equalTo(\"test-ro\")));\n- assertThat(bulkItemResponse.getType(), equalTo(\"test\"));\n- if (bulkItemResponse.getIndex().equals(\"test\")) {\n- assertThat(bulkItemResponse.isFailed(), equalTo(false));\n- //with concurrent requests > 1 we can't rely on the order of the bulk requests\n- assertThat(Integer.valueOf(bulkItemResponse.getId()), both(greaterThan(0)).and(lessThanOrEqualTo(testDocs)));\n- //we do want to check that we don't get duplicate ids back\n- assertThat(ids.add(bulkItemResponse.getId()), equalTo(true));\n+ final CountDownLatch latch = new CountDownLatch(expectedBulkActions);\n+ int totalExpectedBulkActions = numDocs % bulkActions == 0 ? 
expectedBulkActions : expectedBulkActions + 1;\n+ final CountDownLatch closeLatch = new CountDownLatch(totalExpectedBulkActions);\n+\n+ int testDocs = 0;\n+ int testReadOnlyDocs = 0;\n+ MultiGetRequestBuilder multiGetRequestBuilder = client().prepareMultiGet();\n+ BulkProcessorTestListener listener = new BulkProcessorTestListener(latch, closeLatch);\n+\n+ try (BulkProcessor processor = BulkProcessor.builder(client(), listener)\n+ .setConcurrentRequests(concurrentRequests).setBulkActions(bulkActions)\n+ //set interval and size to high values\n+ .setFlushInterval(TimeValue.timeValueHours(24)).setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB)).build()) {\n+\n+ for (int i = 1; i <= numDocs; i++) {\n+ if (randomBoolean()) {\n+ testDocs++;\n+ processor.add(new IndexRequest(\"test\", \"test\", Integer.toString(testDocs)).source(\"field\", \"value\"));\n+ multiGetRequestBuilder.add(\"test\", \"test\", Integer.toString(testDocs));\n } else {\n- assertThat(bulkItemResponse.isFailed(), equalTo(true));\n- //with concurrent requests > 1 we can't rely on the order of the bulk requests\n- assertThat(Integer.valueOf(bulkItemResponse.getId()), both(greaterThan(0)).and(lessThanOrEqualTo(testReadOnlyDocs)));\n- //we do want to check that we don't get duplicate ids back\n- assertThat(readOnlyIds.add(bulkItemResponse.getId()), equalTo(true));\n+ testReadOnlyDocs++;\n+ processor.add(new IndexRequest(\"test-ro\", \"test\", Integer.toString(testReadOnlyDocs)).source(\"field\", \"value\"));\n }\n }\n+ }\n+\n+ closeLatch.await();\n \n- assertMultiGetResponse(multiGetRequestBuilder.get(), testDocs);\n- } finally {\n- assertAcked(client().admin().indices().prepareUpdateSettings(\"test-ro\")\n- .setSettings(ImmutableSettings.builder().put(\"index.blocks.read_only\", false)));\n+ assertThat(listener.beforeCounts.get(), equalTo(totalExpectedBulkActions));\n+ assertThat(listener.afterCounts.get(), equalTo(totalExpectedBulkActions));\n+ assertThat(listener.bulkFailures.size(), equalTo(0));\n+ assertThat(listener.bulkItems.size(), equalTo(testDocs + testReadOnlyDocs));\n+\n+ Set<String> ids = new HashSet<>();\n+ Set<String> readOnlyIds = new HashSet<>();\n+ for (BulkItemResponse bulkItemResponse : listener.bulkItems) {\n+ assertThat(bulkItemResponse.getIndex(), either(equalTo(\"test\")).or(equalTo(\"test-ro\")));\n+ assertThat(bulkItemResponse.getType(), equalTo(\"test\"));\n+ if (bulkItemResponse.getIndex().equals(\"test\")) {\n+ assertThat(bulkItemResponse.isFailed(), equalTo(false));\n+ //with concurrent requests > 1 we can't rely on the order of the bulk requests\n+ assertThat(Integer.valueOf(bulkItemResponse.getId()), both(greaterThan(0)).and(lessThanOrEqualTo(testDocs)));\n+ //we do want to check that we don't get duplicate ids back\n+ assertThat(ids.add(bulkItemResponse.getId()), equalTo(true));\n+ } else {\n+ assertThat(bulkItemResponse.isFailed(), equalTo(true));\n+ //with concurrent requests > 1 we can't rely on the order of the bulk requests\n+ assertThat(Integer.valueOf(bulkItemResponse.getId()), both(greaterThan(0)).and(lessThanOrEqualTo(testReadOnlyDocs)));\n+ //we do want to check that we don't get duplicate ids back\n+ assertThat(readOnlyIds.add(bulkItemResponse.getId()), equalTo(true));\n+ }\n }\n+\n+ assertMultiGetResponse(multiGetRequestBuilder.get(), testDocs);\n }\n \n private static MultiGetRequestBuilder indexDocs(Client client, BulkProcessor processor, int numDocs) {", "filename": "src/test/java/org/elasticsearch/action/bulk/BulkProcessorTests.java", "status": "modified" }, { "diff": "@@ -43,13 
+43,13 @@ public class BlockClusterStatsTests extends ElasticsearchIntegrationTest {\n public void testBlocks() throws Exception {\n assertAcked(prepareCreate(\"foo\").addAlias(new Alias(\"foo-alias\")));\n try {\n+ assertAcked(client().admin().indices().prepareUpdateSettings(\"foo\").setSettings(\n+ ImmutableSettings.settingsBuilder().put(\"index.blocks.read_only\", true)));\n ClusterUpdateSettingsResponse updateSettingsResponse = client().admin().cluster().prepareUpdateSettings().setTransientSettings(\n ImmutableSettings.settingsBuilder().put(\"cluster.blocks.read_only\", true).build()).get();\n assertThat(updateSettingsResponse.isAcknowledged(), is(true));\n- assertAcked(client().admin().indices().prepareUpdateSettings(\"foo\").setSettings(\n- ImmutableSettings.settingsBuilder().put(\"index.blocks.read_only\", true)));\n \n- ClusterStateResponse clusterStateResponseUnfiltered = client().admin().cluster().prepareState().clear().setBlocks(true).get();\n+ ClusterStateResponse clusterStateResponseUnfiltered = client().admin().cluster().prepareState().setLocal(true).clear().setBlocks(true).get();\n assertThat(clusterStateResponseUnfiltered.getState().blocks().global(), hasSize(1));\n assertThat(clusterStateResponseUnfiltered.getState().blocks().indices().size(), is(1));\n ClusterStateResponse clusterStateResponse = client().admin().cluster().prepareState().clear().get();", "filename": "src/test/java/org/elasticsearch/cluster/BlockClusterStatsTests.java", "status": "modified" }, { "diff": "@@ -63,7 +63,6 @@\n public class DedicatedClusterSnapshotRestoreTests extends AbstractSnapshotTests {\n \n @Test\n- @LuceneTestCase.AwaitsFix(bugUrl = \"Shay is working on this\")\n public void restorePersistentSettingsTest() throws Exception {\n logger.info(\"--> start node\");\n internalCluster().startNode(settingsBuilder().put(\"gateway.type\", \"local\"));", "filename": "src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java", "status": "modified" } ] }
{ "body": "The Unicast Zen Ping mechanism is configured to ping certain host:port combinations in order to discover other node. Since this is only a ping, we do not setup a full connection but rather do a light connect with one channel. This light connection is closed at the end of the pinging.\n\nDuring pinging, we may discover disco nodes which are not yet connected (via temporalResponses). UnicastZenPing will setup the same light connection for those node. However, during pinging a cluster state may arrive with those nodes in it. In that case , we will mistakenly believe those nodes are connected and at the end of pinging we will mistakenly disconnect those valid node.\n\nThis commit makes sure that all nodes UnicastZenPing connects to have a unique id and can be safely disconnected.\n", "comments": [], "number": 7719, "title": "UnicastZenPing - use temporary node ids if can't resolve node by it's address" }
{ "body": "#7719 introduced temporary node ids for nodes that can't be resolved via their address. The change is overly aggressive and creates temporary nodes also for the configure target hosts.\n", "number": 7747, "review_comments": [], "title": "UnicastZenPing don't rename configure host name" }
{ "commits": [ { "message": "Discovery: UnicastZenPing don't rename configure host name\n\n#7719 introduced temporary node ids for nodes that can't be resolved via their address. The change is overly aggressive and creates temporary nodes also for the configure target hosts." } ], "files": [ { "diff": "@@ -305,12 +305,14 @@ void sendPings(final TimeValue timeout, @Nullable TimeValue waitTime, final Send\n // to make sure we don't disconnect a true node which was temporarily removed from the DiscoveryNodes\n // but will be added again during the pinging. We therefore create a new temporary node\n if (!nodeFoundByAddress) {\n- DiscoveryNode tempNode = new DiscoveryNode(\"\",\n- UNICAST_NODE_PREFIX + unicastNodeIdGenerator.incrementAndGet() + \"_\" + nodeToSend.id(),\n- nodeToSend.getHostName(), nodeToSend.getHostAddress(), nodeToSend.address(), nodeToSend.attributes(), nodeToSend.version()\n- );\n- logger.trace(\"replacing {} with temp node {}\", nodeToSend, tempNode);\n- nodeToSend = tempNode;\n+ if (!nodeToSend.id().startsWith(UNICAST_NODE_PREFIX)) {\n+ DiscoveryNode tempNode = new DiscoveryNode(\"\",\n+ UNICAST_NODE_PREFIX + unicastNodeIdGenerator.incrementAndGet() + \"_\" + nodeToSend.id() + \"#\",\n+ nodeToSend.getHostName(), nodeToSend.getHostAddress(), nodeToSend.address(), nodeToSend.attributes(), nodeToSend.version()\n+ );\n+ logger.trace(\"replacing {} with temp node {}\", nodeToSend, tempNode);\n+ nodeToSend = tempNode;\n+ }\n sendPingsHandler.nodeToDisconnect.add(nodeToSend);\n }\n // fork the connection to another thread", "filename": "src/main/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastZenPing.java", "status": "modified" } ] }
{ "body": "The bulk API request was marked as completely failed,\nin case a request with a closed index was referred in\nany of the requests inside of a bulk one.\n\nImplementation Note: Currently the implementation is a bit more verbose in order to prevent an `instanceof` check and another cast - if that is fast enough, we could execute that logic only once at the beginning of the loop (thinking this might be a bit overoptimization here).\n\nCloses #6410\n", "comments": [ { "body": "@spinscale would this fix also handle the case mentioned in https://github.com/elasticsearch/elasticsearch/issues/6410#issuecomment-48298353 when indexing into a non-existent index with `action.auto_create_index` set to `false`?\n", "created_at": "2014-07-09T08:49:14Z" }, { "body": "added some comments\n", "created_at": "2014-07-09T19:38:54Z" }, { "body": "@clintongormley added another test when `action.auto_create_index` is set to false\n@s1monw added an interface and thus refactored to the code...\n", "created_at": "2014-07-11T09:22:37Z" }, { "body": "I added some comments \n", "created_at": "2014-07-15T13:05:42Z" } ], "number": 6790, "title": "Do not fail whole request on closed index" }
{ "body": "```\nBulk API: Do not fail whole request on closed index\n\nChanges based on review comments on @spinscale's PR.\n\"\"\"\nThe bulk API request was marked as completely failed,\nin case a request with a closed index was referred in\nany of the requests inside of a bulk one.\n\nImplementation Note: Currently the implementation is a bit more verbose in order to prevent an instanceof check and another cast - if that is fast enough, we could execute that logic only once at the beginning of the loop (thinking this might be a bit overoptimization here).\n\"\"\"\nSee #6790\n```\n", "number": 7741, "review_comments": [ { "body": "can we have documentation for all of those?\n", "created_at": "2014-09-16T14:08:56Z" }, { "body": "I think as a followup issue we can change the Bulk API to only accept this interface, no? Maybe we can think of a better name like `IndexModificationAction`, although I think it's as good as `DocumentRequest` :)\n", "created_at": "2014-09-16T14:11:18Z" } ], "title": "Issue 6410 bulk indexing missing index update" }
{ "commits": [ { "message": "Bulk API: Do not fail whole request on closed index\n\nThe bulk API request was marked as completely failed,\nin case a request with a closed index was referred in\nany of the requests inside of a bulk one.\n\nImplementation Note: Currently the implementation is a bit more verbose in order to prevent an instanceof check and another cast - if that is fast enough, we could execute that logic only once at the beginning of the loop (thinking this might be a bit overoptimization here).\n\nCloses #6410" } ], "files": [ { "diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionRequest;\n+import org.elasticsearch.action.DocumentRequest;\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;\n import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction;\n@@ -40,15 +41,18 @@\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndexAlreadyExistsException;\n+import org.elasticsearch.indices.IndexClosedException;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.TransportService;\n@@ -96,26 +100,15 @@ protected void doExecute(final BulkRequest bulkRequest, final ActionListener<Bul\n if (autoCreateIndex.needToCheck()) {\n final Set<String> indices = Sets.newHashSet();\n for (ActionRequest request : bulkRequest.requests) {\n- if (request instanceof IndexRequest) {\n- IndexRequest indexRequest = (IndexRequest) request;\n- if (!indices.contains(indexRequest.index())) {\n- indices.add(indexRequest.index());\n- }\n- } else if (request instanceof DeleteRequest) {\n- DeleteRequest deleteRequest = (DeleteRequest) request;\n- if (!indices.contains(deleteRequest.index())) {\n- indices.add(deleteRequest.index());\n- }\n- } else if (request instanceof UpdateRequest) {\n- UpdateRequest updateRequest = (UpdateRequest) request;\n- if (!indices.contains(updateRequest.index())) {\n- indices.add(updateRequest.index());\n+ if (request instanceof DocumentRequest) {\n+ DocumentRequest req = (DocumentRequest) request;\n+ if (!indices.contains(req.index())) {\n+ indices.add(req.index());\n }\n } else {\n throw new ElasticsearchException(\"Parsed unknown request in bulk actions: \" + request.getClass().getSimpleName());\n }\n }\n-\n final AtomicInteger counter = new AtomicInteger(indices.size());\n ClusterState state = clusterService.state();\n for (final String index : indices) {\n@@ -204,30 +197,33 @@ private void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n MetaData metaData = clusterState.metaData();\n for (int i = 0; i < bulkRequest.requests.size(); i++) {\n ActionRequest request = 
bulkRequest.requests.get(i);\n- if (request instanceof IndexRequest) {\n- IndexRequest indexRequest = (IndexRequest) request;\n- String concreteIndex = concreteIndices.resolveIfAbsent(indexRequest.index(), indexRequest.indicesOptions());\n- MappingMetaData mappingMd = null;\n- if (metaData.hasIndex(concreteIndex)) {\n- mappingMd = metaData.index(concreteIndex).mappingOrDefault(indexRequest.type());\n+ if (request instanceof DocumentRequest) {\n+ DocumentRequest req = (DocumentRequest) request;\n+\n+ if (addFailureIfIndexIsClosed(req, bulkRequest, responses, i, concreteIndices, metaData)) {\n+ continue;\n }\n- try {\n- indexRequest.process(metaData, mappingMd, allowIdGeneration, concreteIndex);\n- } catch (ElasticsearchParseException e) {\n- BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex, indexRequest.type(), indexRequest.id(), e);\n- BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"index\", failure);\n- responses.set(i, bulkItemResponse);\n- // make sure the request gets never processed again\n- bulkRequest.requests.set(i, null);\n+\n+ String concreteIndex = concreteIndices.resolveIfAbsent(req.index(), req.indicesOptions());\n+ if (request instanceof IndexRequest) {\n+ IndexRequest indexRequest = (IndexRequest) request;\n+ MappingMetaData mappingMd = null;\n+ if (metaData.hasIndex(concreteIndex)) {\n+ mappingMd = metaData.index(concreteIndex).mappingOrDefault(indexRequest.type());\n+ }\n+ try {\n+ indexRequest.process(metaData, mappingMd, allowIdGeneration, concreteIndex);\n+ } catch (ElasticsearchParseException e) {\n+ BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex, indexRequest.type(), indexRequest.id(), e);\n+ BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"index\", failure);\n+ responses.set(i, bulkItemResponse);\n+ // make sure the request gets never processed again\n+ bulkRequest.requests.set(i, null);\n+ }\n+ } else {\n+ concreteIndices.resolveIfAbsent(req.index(), req.indicesOptions());\n+ req.routing(clusterState.metaData().resolveIndexRouting(req.routing(), req.index()));\n }\n- } else if (request instanceof DeleteRequest) {\n- DeleteRequest deleteRequest = (DeleteRequest) request;\n- concreteIndices.resolveIfAbsent(deleteRequest.index(), deleteRequest.indicesOptions());\n- deleteRequest.routing(clusterState.metaData().resolveIndexRouting(deleteRequest.routing(), deleteRequest.index()));\n- } else if (request instanceof UpdateRequest) {\n- UpdateRequest updateRequest = (UpdateRequest) request;\n- concreteIndices.resolveIfAbsent(updateRequest.index(), updateRequest.indicesOptions());\n- updateRequest.routing(clusterState.metaData().resolveIndexRouting(updateRequest.routing(), updateRequest.index()));\n }\n }\n \n@@ -343,8 +339,35 @@ private void finishHim() {\n }\n }\n \n- private static class ConcreteIndices {\n+ private boolean addFailureIfIndexIsClosed(DocumentRequest request, BulkRequest bulkRequest, AtomicArray<BulkItemResponse> responses, int idx,\n+ final ConcreteIndices concreteIndices,\n+ final MetaData metaData) {\n+ String concreteIndex = concreteIndices.getConcreteIndex(request.index());\n+ boolean isClosed = false;\n+ if (concreteIndex == null) {\n+ try {\n+ concreteIndex = concreteIndices.resolveIfAbsent(request.index(), request.indicesOptions());\n+ } catch (IndexClosedException ice) {\n+ isClosed = true;\n+ }\n+ }\n+ if (!isClosed) {\n+ IndexMetaData indexMetaData = metaData.index(concreteIndex);\n+ isClosed = indexMetaData.getState() == IndexMetaData.State.CLOSE;\n+ }\n+ if 
(isClosed) {\n+ BulkItemResponse.Failure failure = new BulkItemResponse.Failure(request.index(), request.type(), request.id(),\n+ new IndexClosedException(new Index(metaData.index(request.index()).getIndex())));\n+ BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, \"index\", failure);\n+ responses.set(idx, bulkItemResponse);\n+ // make sure the request gets never processed again\n+ bulkRequest.requests.set(idx, null);\n+ }\n+ return isClosed;\n+ }\n+\n \n+ private static class ConcreteIndices {\n private final Map<String, String> indices = new HashMap<>();\n private final MetaData metaData;\n ", "filename": "src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.action.DocumentRequest;\n import org.elasticsearch.action.support.replication.ShardReplicationOperationRequest;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -43,7 +44,7 @@\n * @see org.elasticsearch.client.Client#delete(DeleteRequest)\n * @see org.elasticsearch.client.Requests#deleteRequest(String)\n */\n-public class DeleteRequest extends ShardReplicationOperationRequest<DeleteRequest> {\n+public class DeleteRequest extends ShardReplicationOperationRequest<DeleteRequest> implements DocumentRequest<DeleteRequest> {\n \n private String type;\n private String id;", "filename": "src/main/java/org/elasticsearch/action/delete/DeleteRequest.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.RoutingMissingException;\n import org.elasticsearch.action.TimestampParsingException;\n+import org.elasticsearch.action.DocumentRequest;\n import org.elasticsearch.action.support.replication.ShardReplicationOperationRequest;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n@@ -64,7 +65,7 @@\n * @see org.elasticsearch.client.Requests#indexRequest(String)\n * @see org.elasticsearch.client.Client#index(IndexRequest)\n */\n-public class IndexRequest extends ShardReplicationOperationRequest<IndexRequest> {\n+public class IndexRequest extends ShardReplicationOperationRequest<IndexRequest> implements DocumentRequest<IndexRequest> {\n \n /**\n * Operation type controls if the type of the index operation.", "filename": "src/main/java/org/elasticsearch/action/index/IndexRequest.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.google.common.collect.Maps;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.action.DocumentRequest;\n import org.elasticsearch.action.WriteConsistencyLevel;\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.action.support.replication.ReplicationType;\n@@ -47,7 +48,7 @@\n \n /**\n */\n-public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest> {\n+public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest> implements DocumentRequest<UpdateRequest> {\n \n private String type;\n private String id;", "filename": "src/main/java/org/elasticsearch/action/update/UpdateRequest.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.google.common.base.Charsets;\n import 
org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.bulk.BulkItemResponse;\n+import org.elasticsearch.action.bulk.BulkRequest;\n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.count.CountResponse;\n@@ -651,8 +652,31 @@ public void testThatFailedUpdateRequestReturnsCorrectType() throws Exception {\n assertThat(bulkItemResponse.getItems()[5].getOpType(), is(\"delete\"));\n }\n \n+\n private static String indexOrAlias() {\n return randomBoolean() ? \"test\" : \"alias\";\n }\n+\n+ @Test // issue 6410\n+ public void testThatMissingIndexDoesNotAbortFullBulkRequest() throws Exception{\n+ createIndex(\"bulkindex1\", \"bulkindex2\");\n+ BulkRequest bulkRequest = new BulkRequest();\n+ bulkRequest.add(new IndexRequest(\"bulkindex1\", \"index1_type\", \"1\").source(\"text\", \"hallo1\"))\n+ .add(new IndexRequest(\"bulkindex2\", \"index2_type\", \"1\").source(\"text\", \"hallo2\"))\n+ .add(new IndexRequest(\"bulkindex2\", \"index2_type\").source(\"text\", \"hallo2\"))\n+ .add(new UpdateRequest(\"bulkindex2\", \"index2_type\", \"2\").doc(\"foo\", \"bar\"))\n+ .add(new DeleteRequest(\"bulkindex2\", \"index2_type\", \"3\"))\n+ .refresh(true);\n+\n+ client().bulk(bulkRequest).get();\n+ SearchResponse searchResponse = client().prepareSearch(\"bulkindex*\").get();\n+ assertHitCount(searchResponse, 3);\n+\n+ assertAcked(client().admin().indices().prepareClose(\"bulkindex2\"));\n+\n+ BulkResponse bulkResponse = client().bulk(bulkRequest).get();\n+ assertThat(bulkResponse.hasFailures(), is(true));\n+ assertThat(bulkResponse.getItems().length, is(5));\n+ }\n }\n ", "filename": "src/test/java/org/elasticsearch/document/BulkTests.java", "status": "modified" } ] }
{ "body": "The correct argument order is: Regex.simpleMatch(setting.getKey(), dynamicSetting)", "comments": [ { "body": "Thanks for opening this, we will take care of it.", "created_at": "2014-09-09T12:16:53Z" } ], "number": 7651, "title": "Parameter position error: Regex.simpleMatch(dynamicSetting, setting.getKey()) in DynamicSettings.validateDynamicSetting()" }
{ "body": "Previously we passed the arguments in the wrong order, which could cause\nvalidators not to be run for dynamic settings that had been registered\nunder a particular wildcard pattern.\n\nAlso adds a small unit test that makes sure the behavior is fixed.\n\nFixes #7651\n", "number": 7661, "review_comments": [], "title": "Fix ordering of Regex.simpleMatch() parameters" }
{ "commits": [ { "message": "Fix ordering of Regex.simpleMatch() parameters\n\nPreviously we incorrectly sent them in the wrong order, which can cause\nvalidators not to be run for dynamic settings that have been added\nmatching a particular wildcard.\n\nFixes #7651" } ], "files": [ { "diff": "@@ -42,7 +42,7 @@ public boolean hasDynamicSetting(String key) {\n \n public String validateDynamicSetting(String dynamicSetting, String value) {\n for (Map.Entry<String, Validator> setting : dynamicSettings.entrySet()) {\n- if (Regex.simpleMatch(dynamicSetting, setting.getKey())) {\n+ if (Regex.simpleMatch(setting.getKey(), dynamicSetting)) {\n return setting.getValue().validate(dynamicSetting, value);\n }\n }", "filename": "src/main/java/org/elasticsearch/cluster/settings/DynamicSettings.java", "status": "modified" }, { "diff": "@@ -22,7 +22,7 @@\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n-import static org.hamcrest.MatcherAssert.assertThat;\n+import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.notNullValue;\n import static org.hamcrest.Matchers.nullValue;\n \n@@ -84,4 +84,12 @@ public void testValidators() throws Exception {\n assertThat(Validator.POSITIVE_INTEGER.validate(\"\", \"-1\"), notNullValue());\n assertThat(Validator.POSITIVE_INTEGER.validate(\"\", \"10.2\"), notNullValue());\n }\n+\n+ @Test\n+ public void testDynamicValidators() throws Exception {\n+ DynamicSettings ds = new DynamicSettings();\n+ ds.addDynamicSetting(\"my.test.*\", Validator.POSITIVE_INTEGER);\n+ String valid = ds.validateDynamicSetting(\"my.test.setting\", \"-1\");\n+ assertThat(valid, equalTo(\"the value of the setting my.test.setting must be a positive integer\"));\n+ }\n }", "filename": "src/test/java/org/elasticsearch/cluster/settings/SettingsValidatorTests.java", "status": "modified" } ] }
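For readers skimming the diff, the following hedged example spells out why the argument order matters, assuming the usual `Regex.simpleMatch(pattern, value)` contract where the first argument is the wildcard pattern. With the arguments swapped, a validator registered under a pattern such as `my.test.*` never matches a concrete setting name and is silently skipped.

```
import org.elasticsearch.common.regex.Regex;

public class SimpleMatchOrder {
    public static void main(String[] args) {
        String registeredPattern = "my.test.*";       // key the validator was registered under
        String concreteSetting   = "my.test.setting"; // setting name supplied by the user

        // Correct order: pattern first, value second -> the validator is found and run.
        System.out.println(Regex.simpleMatch(registeredPattern, concreteSetting)); // true

        // Swapped order: "my.test.setting" is treated as a (wildcard-free) pattern
        // and compared against the literal string "my.test.*" -> no match.
        System.out.println(Regex.simpleMatch(concreteSetting, registeredPattern)); // false
    }
}
```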
{ "body": "In the case of a long GC, searchers might be released twice: once by the reaper and once by the actual releasing thread. It's a cosmetic problem since we protect against double releasing, but we should fix it.\n\nThis has been seen in the field:\n\n```\norg.elasticsearch.ElasticsearchIllegalStateException: Double release \nat org.elasticsearch.index.engine.internal.InternalEngine$EngineSearcher.close(InternalEngine.java:1512) \nat org.elasticsearch.common.lease.Releasables.close(Releasables.java:45) \nat org.elasticsearch.common.lease.Releasables.close(Releasables.java:60) \nat org.elasticsearch.common.lease.Releasables.close(Releasables.java:65) \nat org.elasticsearch.search.internal.DefaultSearchContext.doClose(DefaultSearchContext.java:212) \nat org.elasticsearch.search.internal.SearchContext.close(SearchContext.java:96) \nat org.elasticsearch.search.SearchService.freeContext(SearchService.java:560) \nat org.elasticsearch.search.SearchService.access$100(SearchService.java:97) \nat org.elasticsearch.search.SearchService$Reaper.run(SearchService.java:957) \n```\n", "comments": [], "number": 7625, "title": "Internal: Searcher might be released twice in the case of a LONG GC" }
{ "body": "Today there are two different ways to clean up search contexts, which can\npotentially lead to double releasing of a context. This commit unifies\nthe methods and prevents double closing.\n\nCloses #7625\n", "number": 7643, "review_comments": [], "title": "Unify search context cleanup" }
{ "commits": [ { "message": "[CORE] Unify search context cleanup\n\nToday there are two different ways to cleanup search contexts which can\npotentially lead to double releasing of a context. This commit unifies\nthe methods and prevents double closing.\n\nCloses #7625" } ], "files": [ { "diff": "@@ -175,8 +175,8 @@ protected void doStart() throws ElasticsearchException {\n \n @Override\n protected void doStop() throws ElasticsearchException {\n- for (SearchContext context : activeContexts.values()) {\n- freeContext(context);\n+ for (final SearchContext context : activeContexts.values()) {\n+ freeContext(context.id());\n }\n activeContexts.clear();\n }\n@@ -187,23 +187,23 @@ protected void doClose() throws ElasticsearchException {\n }\n \n public DfsSearchResult executeDfsPhase(ShardSearchRequest request) throws ElasticsearchException {\n- SearchContext context = createAndPutContext(request);\n+ final SearchContext context = createAndPutContext(request);\n try {\n contextProcessing(context);\n dfsPhase.execute(context);\n contextProcessedSuccessfully(context);\n return context.dfsResult();\n } catch (Throwable e) {\n logger.trace(\"Dfs phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public QuerySearchResult executeScan(ShardSearchRequest request) throws ElasticsearchException {\n- SearchContext context = createAndPutContext(request);\n+ final SearchContext context = createAndPutContext(request);\n try {\n if (context.aggregations() != null) {\n throw new ElasticsearchIllegalArgumentException(\"aggregations are not supported with search_type=scan\");\n@@ -221,15 +221,15 @@ public QuerySearchResult executeScan(ShardSearchRequest request) throws Elastics\n return context.queryResult();\n } catch (Throwable e) {\n logger.trace(\"Scan phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public ScrollQueryFetchSearchResult executeScan(InternalScrollSearchRequest request) throws ElasticsearchException {\n- SearchContext context = findContext(request.id());\n+ final SearchContext context = findContext(request.id());\n contextProcessing(context);\n try {\n processScroll(request, context);\n@@ -249,15 +249,15 @@ public ScrollQueryFetchSearchResult executeScan(InternalScrollSearchRequest requ\n return new ScrollQueryFetchSearchResult(new QueryFetchSearchResult(context.queryResult(), context.fetchResult()), context.shardTarget());\n } catch (Throwable e) {\n logger.trace(\"Scan phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public QuerySearchResultProvider executeQueryPhase(ShardSearchRequest request) throws ElasticsearchException {\n- SearchContext context = createAndPutContext(request);\n+ final SearchContext context = createAndPutContext(request);\n try {\n context.indexShard().searchService().onPreQueryPhase(context);\n long time = System.nanoTime();\n@@ -287,15 +287,15 @@ public QuerySearchResultProvider executeQueryPhase(ShardSearchRequest request) t\n }\n context.indexShard().searchService().onFailedQueryPhase(context);\n logger.trace(\"Query phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public 
ScrollQuerySearchResult executeQueryPhase(InternalScrollSearchRequest request) throws ElasticsearchException {\n- SearchContext context = findContext(request.id());\n+ final SearchContext context = findContext(request.id());\n try {\n context.indexShard().searchService().onPreQueryPhase(context);\n long time = System.nanoTime();\n@@ -308,20 +308,20 @@ public ScrollQuerySearchResult executeQueryPhase(InternalScrollSearchRequest req\n } catch (Throwable e) {\n context.indexShard().searchService().onFailedQueryPhase(context);\n logger.trace(\"Query phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public QuerySearchResult executeQueryPhase(QuerySearchRequest request) throws ElasticsearchException {\n- SearchContext context = findContext(request.id());\n+ final SearchContext context = findContext(request.id());\n contextProcessing(context);\n try {\n context.searcher().dfSource(new CachedDfSource(context.searcher().getIndexReader(), request.dfs(), context.similarityService().similarity()));\n } catch (Throwable e) {\n- freeContext(context);\n+ freeContext(context.id());\n cleanContext(context);\n throw new QueryPhaseExecutionException(context, \"Failed to set aggregated df\", e);\n }\n@@ -335,15 +335,15 @@ public QuerySearchResult executeQueryPhase(QuerySearchRequest request) throws El\n } catch (Throwable e) {\n context.indexShard().searchService().onFailedQueryPhase(context);\n logger.trace(\"Query phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public QueryFetchSearchResult executeFetchPhase(ShardSearchRequest request) throws ElasticsearchException {\n- SearchContext context = createAndPutContext(request);\n+ final SearchContext context = createAndPutContext(request);\n contextProcessing(context);\n try {\n context.indexShard().searchService().onPreQueryPhase(context);\n@@ -373,20 +373,20 @@ public QueryFetchSearchResult executeFetchPhase(ShardSearchRequest request) thro\n return new QueryFetchSearchResult(context.queryResult(), context.fetchResult());\n } catch (Throwable e) {\n logger.trace(\"Fetch phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public QueryFetchSearchResult executeFetchPhase(QuerySearchRequest request) throws ElasticsearchException {\n- SearchContext context = findContext(request.id());\n+ final SearchContext context = findContext(request.id());\n contextProcessing(context);\n try {\n context.searcher().dfSource(new CachedDfSource(context.searcher().getIndexReader(), request.dfs(), context.similarityService().similarity()));\n } catch (Throwable e) {\n- freeContext(context);\n+ freeContext(context.id());\n cleanContext(context);\n throw new QueryPhaseExecutionException(context, \"Failed to set aggregated df\", e);\n }\n@@ -418,15 +418,15 @@ public QueryFetchSearchResult executeFetchPhase(QuerySearchRequest request) thro\n return new QueryFetchSearchResult(context.queryResult(), context.fetchResult());\n } catch (Throwable e) {\n logger.trace(\"Fetch phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public ScrollQueryFetchSearchResult executeFetchPhase(InternalScrollSearchRequest 
request) throws ElasticsearchException {\n- SearchContext context = findContext(request.id());\n+ final SearchContext context = findContext(request.id());\n contextProcessing(context);\n try {\n processScroll(request, context);\n@@ -457,15 +457,15 @@ public ScrollQueryFetchSearchResult executeFetchPhase(InternalScrollSearchReques\n return new ScrollQueryFetchSearchResult(new QueryFetchSearchResult(context.queryResult(), context.fetchResult()), context.shardTarget());\n } catch (Throwable e) {\n logger.trace(\"Fetch phase failed\", e);\n- freeContext(context);\n+ freeContext(context.id());\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n }\n }\n \n public FetchSearchResult executeFetchPhase(FetchSearchRequest request) throws ElasticsearchException {\n- SearchContext context = findContext(request.id());\n+ final SearchContext context = findContext(request.id());\n contextProcessing(context);\n try {\n if (request.lastEmittedDoc() != null) {\n@@ -485,7 +485,7 @@ public FetchSearchResult executeFetchPhase(FetchSearchRequest request) throws El\n } catch (Throwable e) {\n context.indexShard().searchService().onFailedFetchPhase(context);\n logger.trace(\"Fetch phase failed\", e);\n- freeContext(context); // we just try to make sure this is freed - rethrow orig exception.\n+ freeContext(context.id()); // we just try to make sure this is freed - rethrow orig exception.\n throw ExceptionsHelper.convertToRuntime(e);\n } finally {\n cleanContext(context);\n@@ -511,7 +511,7 @@ final SearchContext createAndPutContext(ShardSearchRequest request) throws Elast\n return context;\n } finally {\n if (!success) {\n- freeContext(context);\n+ freeContext(context.id());\n }\n }\n }\n@@ -561,27 +561,22 @@ final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.S\n }\n \n public boolean freeContext(long id) {\n- SearchContext context = activeContexts.remove(id);\n- if (context == null) {\n- return false;\n- }\n- context.indexShard().searchService().onFreeContext(context);\n- context.close();\n- return true;\n- }\n-\n- private void freeContext(SearchContext context) {\n- SearchContext removed = activeContexts.remove(context.id());\n- if (removed != null) {\n- removed.indexShard().searchService().onFreeContext(removed);\n+ final SearchContext context = activeContexts.remove(id);\n+ if (context != null) {\n+ try {\n+ context.indexShard().searchService().onFreeContext(context);\n+ } finally {\n+ context.close();\n+ }\n+ return true;\n }\n- context.close();\n+ return false;\n }\n \n public void freeAllScrollContexts() {\n for (SearchContext searchContext : activeContexts.values()) {\n if (searchContext.scroll() != null) {\n- freeContext(searchContext);\n+ freeContext(searchContext.id());\n }\n }\n }\n@@ -969,7 +964,7 @@ public void run() {\n } finally {\n try {\n if (context != null) {\n- freeContext(context);\n+ freeContext(context.id());\n cleanContext(context);\n }\n } finally {\n@@ -1002,7 +997,7 @@ public void run() {\n }\n if ((time - lastAccessTime > context.keepAlive())) {\n logger.debug(\"freeing search context [{}], time [{}], lastAccessTime [{}], keepAlive [{}]\", context.id(), time, lastAccessTime, context.keepAlive());\n- freeContext(context);\n+ freeContext(context.id());\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -145,7 +145,7 @@ public void sendFreeContext(DiscoveryNode node, final long contextId, SearchRequ\n \n public void sendFreeContext(DiscoveryNode node, 
long contextId, ClearScrollRequest request, final ActionListener<Boolean> actionListener) {\n if (clusterService.state().nodes().localNodeId().equals(node.id())) {\n- boolean freed = searchService.freeContext(contextId);\n+ final boolean freed = searchService.freeContext(contextId);\n actionListener.onResponse(freed);\n } else {\n transportService.sendRequest(node, FREE_CONTEXT_ACTION_NAME, new SearchFreeContextRequest(request, contextId), new FreeContextResponseHandler(actionListener));", "filename": "src/main/java/org/elasticsearch/search/action/SearchServiceTransportAction.java", "status": "modified" }, { "diff": "@@ -64,6 +64,7 @@\n import java.util.ArrayList;\n import java.util.Collection;\n import java.util.List;\n+import java.util.concurrent.atomic.AtomicBoolean;\n \n /**\n */\n@@ -87,12 +88,15 @@ public static SearchContext current() {\n }\n \n private Multimap<Lifetime, Releasable> clearables = null;\n+ private final AtomicBoolean closed = new AtomicBoolean(false);\n \n public final void close() {\n- try {\n- clearReleasables(Lifetime.CONTEXT);\n- } finally {\n- doClose();\n+ if (closed.compareAndSet(false, true)) { // prevent double release\n+ try {\n+ clearReleasables(Lifetime.CONTEXT);\n+ } finally {\n+ doClose();\n+ }\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/search/internal/SearchContext.java", "status": "modified" } ] }
{ "body": "Indexed scripts might need to get fetched via a GET call which is very cheap since those shards are local since they expand `[0-all]` but sometimes in the case of a node client holding no data we need to do a get call on the first get. Yet this get call seems to be executed on the transport thread and might deadlock since it needs that thread to process the get response. See stacktrace below... The problem here is that some of the actions in `SearchServiceTransportAction` don't use the `search` threadpool but use `SAME` instead which can cause this issue. We should use `SEARCH` instead for the most of the operations except of free context I guess.\n\n```\n2> \"elasticsearch[node_s2][local_transport][T#1]\" ID=1421 WAITING on org.elasticsearch.common.util.concurrent.BaseFuture$Sync@2b1fdd72\n 2> at sun.misc.Unsafe.park(Native Method)\n 2> - waiting on org.elasticsearch.common.util.concurrent.BaseFuture$Sync@2b1fdd72\n 2> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)\n 2> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)\n 2> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)\n 2> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)\n 2> at org.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:274)\n 2> at org.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:113)\n 2> at org.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:45)\n 2> at org.elasticsearch.script.ScriptService.getScriptFromIndex(ScriptService.java:377)\n 2> at org.elasticsearch.script.ScriptService.compile(ScriptService.java:295)\n 2> at org.elasticsearch.script.ScriptService.executable(ScriptService.java:457)\n 2> at org.elasticsearch.search.aggregations.metrics.scripted.InternalScriptedMetric.reduce(InternalScriptedMetric.java:99)\n 2> at org.elasticsearch.search.aggregations.InternalAggregations.reduce(InternalAggregations.java:140)\n 2> at org.elasticsearch.search.controller.SearchPhaseController.merge(SearchPhaseController.java:374)\n 2> at org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction.innerFinishHim(TransportSearchDfsQueryThenFetchAction.java:209)\n 2> at org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction.finishHim(TransportSearchDfsQueryThenFetchAction.java:196)\n 2> at org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction$2.onResult(TransportSearchDfsQueryThenFetchAction.java:172)\n 2> at org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction$2.onResult(TransportSearchDfsQueryThenFetchAction.java:166)\n 2> at org.elasticsearch.search.action.SearchServiceTransportAction$18.handleResponse(SearchServiceTransportAction.java:440)\n 2> at org.elasticsearch.search.action.SearchServiceTransportAction$18.handleResponse(SearchServiceTransportAction.java:431)\n 2> at org.elasticsearch.transport.local.LocalTransport$3.run(LocalTransport.java:322)\n 2> at com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:299)\n 2> at org.elasticsearch.transport.local.LocalTransport.handleParsedResponse(LocalTransport.java:317)\n 2> at org.elasticsearch.test.transport.AssertingLocalTransport.handleParsedResponse(AssertingLocalTransport.java:59)\n 2> 
at org.elasticsearch.transport.local.LocalTransport.handleResponse(LocalTransport.java:313)\n 2> at org.elasticsearch.transport.local.LocalTransport.messageReceived(LocalTransport.java:238)\n 2> at org.elasticsearch.transport.local.LocalTransportChannel$1.run(LocalTransportChannel.java:78)\n 2> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n 2> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n 2> at java.lang.Thread.run(Thread.java:745)\n 2> Locked synchronizers:\n 2> - java.util.concurrent.ThreadPoolExecutor$Worker@2339bcc9\n 2> \n```\n", "comments": [], "number": 7623, "title": "Indexed Scripts/Templates: Indexed Scripts used during reduce phase sometimes hang" }
{ "body": "In `SearchServiceTransportAction` we use the `SAME` threadpool for\npotentially blocking / longer-running operations; this can cause\nslowdowns in request/response processing or even deadlocks. This\ncommit uses the `SEARCH` threadpool for all operations except freeing\ncontexts.\n\nCloses #7623\n", "number": 7624, "review_comments": [], "title": "Use `SEARCH` threadpool for potentially blocking operations" }
{ "commits": [ { "message": "[SEARCH] Execute search reduce phase on the search threadpool\n\nReduce Phases can be expensive and some of them like the aggregations\nreduce phase might even execute a one-off call via an internal client\nthat might cause a deadlock due to execution on the network thread\nthat is needed to handle the one-off call. This commit dispatches\nthe reduce phase to the search threadpool to ensure we don't wait\nfor the current thread to be available.\n\nCloses #7623" } ], "files": [ { "diff": "@@ -29,6 +29,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.search.action.SearchServiceListener;\n import org.elasticsearch.search.action.SearchServiceTransportAction;\n import org.elasticsearch.search.controller.SearchPhaseController;\n@@ -119,29 +120,33 @@ void onSecondPhaseFailure(Throwable t, QuerySearchRequest querySearchRequest, in\n }\n }\n \n- void finishHim() {\n+ private void finishHim() {\n try {\n- innerFinishHim();\n- } catch (Throwable e) {\n- ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"query_fetch\", \"\", e, buildShardFailures());\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"failed to reduce search\", failure);\n- }\n- listener.onFailure(failure);\n- } finally {\n- //\n+ threadPool.executor(ThreadPool.Names.SEARCH).execute(new Runnable() {\n+ @Override\n+ public void run() {\n+ try {\n+ boolean useScroll = !useSlowScroll && request.scroll() != null;\n+ sortedShardList = searchPhaseController.sortDocs(useScroll, queryFetchResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryFetchResults, queryFetchResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n+ } catch (Throwable e) {\n+ ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"query_fetch\", \"\", e, buildShardFailures());\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to reduce search\", failure);\n+ }\n+ listener.onFailure(failure);\n+ }\n+ }\n+ });\n+ } catch (EsRejectedExecutionException ex) {\n+ listener.onFailure(ex);\n }\n- }\n \n- void innerFinishHim() throws Exception {\n- boolean useScroll = !useSlowScroll && request.scroll() != null;\n- sortedShardList = searchPhaseController.sortDocs(useScroll, queryFetchResults);\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryFetchResults, queryFetchResults);\n- String scrollId = null;\n- if (request.scroll() != null) {\n- scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n- }\n- listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchDfsQueryAndFetchAction.java", "status": "modified" }, { "diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import 
org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.search.action.SearchServiceListener;\n import org.elasticsearch.search.action.SearchServiceTransportAction;\n@@ -191,27 +192,37 @@ void onFetchFailure(Throwable t, FetchSearchRequest fetchSearchRequest, int shar\n }\n }\n \n- void finishHim() {\n+ private void finishHim() {\n try {\n- innerFinishHim();\n- } catch (Throwable e) {\n- ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"merge\", \"\", e, buildShardFailures());\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"failed to reduce search\", failure);\n+ threadPool.executor(ThreadPool.Names.SEARCH).execute(new Runnable() {\n+ @Override\n+ public void run() {\n+ try {\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryResults, fetchResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n+ } catch (Throwable e) {\n+ ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"merge\", \"\", e, buildShardFailures());\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to reduce search\", failure);\n+ }\n+ listener.onFailure(failure);\n+ } finally {\n+ releaseIrrelevantSearchContexts(queryResults, docIdsToLoad);\n+ }\n+ }\n+ });\n+ } catch (EsRejectedExecutionException ex) {\n+ try {\n+ releaseIrrelevantSearchContexts(queryResults, docIdsToLoad);\n+ } finally {\n+ listener.onFailure(ex);\n }\n- listener.onFailure(failure);\n- } finally {\n- releaseIrrelevantSearchContexts(queryResults, docIdsToLoad);\n }\n- }\n \n- void innerFinishHim() throws Exception {\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, queryResults, fetchResults);\n- String scrollId = null;\n- if (request.scroll() != null) {\n- scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n- }\n- listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchDfsQueryThenFetchAction.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.search.action.SearchServiceListener;\n import org.elasticsearch.search.action.SearchServiceTransportAction;\n import org.elasticsearch.search.controller.SearchPhaseController;\n@@ -36,8 +37,6 @@\n import org.elasticsearch.search.internal.ShardSearchRequest;\n import org.elasticsearch.threadpool.ThreadPool;\n \n-import java.io.IOException;\n-\n import static org.elasticsearch.action.search.type.TransportSearchHelper.buildScrollId;\n \n /**\n@@ -75,25 +74,30 @@ protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchRequest requ\n @Override\n protected void moveToSecondPhase() throws Exception {\n try {\n- innerFinishHim();\n- } catch (Throwable 
e) {\n- ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"merge\", \"\", e, buildShardFailures());\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"failed to reduce search\", failure);\n- }\n- listener.onFailure(failure);\n- }\n- }\n-\n- private void innerFinishHim() throws IOException {\n- boolean useScroll = !useSlowScroll && request.scroll() != null;\n- sortedShardList = searchPhaseController.sortDocs(useScroll, firstResults);\n- final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults, firstResults);\n- String scrollId = null;\n- if (request.scroll() != null) {\n- scrollId = buildScrollId(request.searchType(), firstResults, null);\n+ threadPool.executor(ThreadPool.Names.SEARCH).execute(new Runnable() {\n+ @Override\n+ public void run() {\n+ try {\n+ boolean useScroll = !useSlowScroll && request.scroll() != null;\n+ sortedShardList = searchPhaseController.sortDocs(useScroll, firstResults);\n+ final InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults, firstResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = buildScrollId(request.searchType(), firstResults, null);\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n+ } catch (Throwable e) {\n+ ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"merge\", \"\", e, buildShardFailures());\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to reduce search\", failure);\n+ }\n+ listener.onFailure(failure);\n+ }\n+ }\n+ });\n+ } catch (EsRejectedExecutionException ex) {\n+ listener.onFailure(ex);\n }\n- listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchQueryAndFetchAction.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.ScoreDoc;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.search.ReduceSearchPhaseException;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchRequest;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.support.ActionFilters;\n@@ -31,6 +32,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.search.action.SearchServiceListener;\n import org.elasticsearch.search.action.SearchServiceTransportAction;\n@@ -135,27 +137,36 @@ void onFetchFailure(Throwable t, FetchSearchRequest fetchSearchRequest, int shar\n }\n }\n \n- void finishHim() {\n+ private void finishHim() {\n try {\n- innerFinishHim();\n- } catch (Throwable e) {\n- ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"fetch\", \"\", e, buildShardFailures());\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"failed to reduce search\", failure);\n+ threadPool.executor(ThreadPool.Names.SEARCH).execute(new Runnable() {\n+ @Override\n+ public void run() {\n+ try {\n+ final InternalSearchResponse internalResponse = 
searchPhaseController.merge(sortedShardList, firstResults, fetchResults);\n+ String scrollId = null;\n+ if (request.scroll() != null) {\n+ scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n+ }\n+ listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n+ } catch (Throwable e) {\n+ ReduceSearchPhaseException failure = new ReduceSearchPhaseException(\"fetch\", \"\", e, buildShardFailures());\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"failed to reduce search\", failure);\n+ }\n+ listener.onFailure(failure);\n+ } finally {\n+ releaseIrrelevantSearchContexts(firstResults, docIdsToLoad);\n+ }\n+ }\n+ });\n+ } catch (EsRejectedExecutionException ex) {\n+ try {\n+ releaseIrrelevantSearchContexts(firstResults, docIdsToLoad);\n+ } finally {\n+ listener.onFailure(ex);\n }\n- listener.onFailure(failure);\n- } finally {\n- releaseIrrelevantSearchContexts(firstResults, docIdsToLoad);\n- }\n- }\n-\n- void innerFinishHim() throws Exception {\n- InternalSearchResponse internalResponse = searchPhaseController.merge(sortedShardList, firstResults, fetchResults);\n- String scrollId = null;\n- if (request.scroll() != null) {\n- scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults, null);\n }\n- listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), buildTookInMillis(), buildShardFailures()));\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchQueryThenFetchAction.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import com.google.common.collect.Maps;\n import com.google.common.util.concurrent.MoreExecutors;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n-import org.elasticsearch.Version;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n@@ -34,13 +33,9 @@\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.settings.SettingsException;\n-import org.elasticsearch.common.unit.SizeUnit;\n import org.elasticsearch.common.unit.SizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.concurrent.EsAbortPolicy;\n-import org.elasticsearch.common.util.concurrent.EsExecutors;\n-import org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor;\n-import org.elasticsearch.common.util.concurrent.XRejectedExecutionHandler;\n+import org.elasticsearch.common.util.concurrent.*;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentBuilderString;", "filename": "src/main/java/org/elasticsearch/threadpool/ThreadPool.java", "status": "modified" }, { "diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.action.search.ShardSearchFailure;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n@@ -102,7 +103,7 @@ public void onFailure(Throwable e) {\n for 
(ShardSearchFailure failure : e.shardFailures()) {\n assertTrue(\"got unexpected reason...\" + failure.reason(), failure.reason().toLowerCase(Locale.ENGLISH).contains(\"rejected\"));\n }\n- } else {\n+ } else if ((unwrap instanceof EsRejectedExecutionException) == false) {\n throw new AssertionError(\"unexpected failure\", (Throwable) response);\n }\n }", "filename": "src/test/java/org/elasticsearch/action/RejectionActionTests.java", "status": "modified" } ] }
{ "body": "I have been struggling with an issue lately caused by my own fault of not providing a valid request body when attempting to add a mapping to an existing index. Attempting to do so will yield a NullPointerException:\n\n```\n[2014-09-01 20:08:31,486][DEBUG][action.admin.indices.mapping.put] [Prototype] failed to put mappings on indices [[my_index]], type [my_type]\njava.lang.NullPointerException\n at org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:137)\n at org.elasticsearch.common.xcontent.XContentHelper.convertToMap(XContentHelper.java:113)\n at org.elasticsearch.common.xcontent.XContentHelper.convertToMap(XContentHelper.java:101)\n at org.elasticsearch.index.mapper.DocumentMapperParser.parseCompressed(DocumentMapperParser.java:181)\n at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:387)\n at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:377)\n at org.elasticsearch.cluster.metadata.MetaDataMappingService$5.execute(MetaDataMappingService.java:540)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:309)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:134)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nThe issue is caused by https://github.com/elasticsearch/elasticsearch/blob/a059a6574a1c270ccc28ddec1671888fb0cfba28/src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java#L213 which returns null if the input stream is empty. While this is fine, the real problem lies in `XContentHelper.convertToMap()` which does not properly check for `null`.\n\nhttps://github.com/elasticsearch/elasticsearch/blob/a059a6574a1c270ccc28ddec1671888fb0cfba28/src/main/java/org/elasticsearch/common/xcontent/XContentHelper.java#L111-L113\n\nTo be fair, this isn't _really_ a bug; I'm really just asking for a proper exception to be thrown in case a developer is as dumb as me and fails to provide a proper request body.\n", "comments": [ { "body": "Could you provide some examples of the messages you used?\n", "created_at": "2014-09-04T05:00:00Z" }, { "body": "@cfontes The request was `PUT http://127.0.0.1:9200/my_index/_mapping/my_type`, and the message body had a length of 0.\n", "created_at": "2014-09-05T10:40:04Z" }, { "body": "Found the problem and fixed it. Thanks for the report!\n", "created_at": "2014-09-05T20:32:06Z" } ], "number": 7536, "title": "Putting a mapping with an empty request body throws a NullPointerException" }
{ "body": "I also added validation for an empty type, and added tests (just for validation for now).\n\ncloses #7536\n", "number": 7618, "review_comments": [], "title": "Add explicit error when PUT mapping API is given an empty request body." }
{ "commits": [ { "message": "RestAPI: Add explicit error when PUT mapping API is given an empty request body.\n\ncloses #7536" } ], "files": [ { "diff": "@@ -83,9 +83,13 @@ public ActionRequestValidationException validate() {\n ActionRequestValidationException validationException = null;\n if (type == null) {\n validationException = addValidationError(\"mapping type is missing\", validationException);\n+ }else if (type.isEmpty()) {\n+ validationException = addValidationError(\"mapping type is empty\", validationException);\n }\n if (source == null) {\n validationException = addValidationError(\"mapping source is missing\", validationException);\n+ } else if (source.isEmpty()) {\n+ validationException = addValidationError(\"mapping source is empty\", validationException);\n }\n return validationException;\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequest.java", "status": "modified" }, { "diff": "@@ -0,0 +1,52 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.mapping.put;\n+\n+import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+\n+public class PutMappingRequestTests extends ElasticsearchTestCase {\n+\n+ public void testValidation() {\n+ PutMappingRequest r = new PutMappingRequest(\"myindex\");\n+ ActionRequestValidationException ex = r.validate();\n+ assertNotNull(\"type validation should fail\", ex);\n+ assertTrue(ex.getMessage().contains(\"type is missing\"));\n+\n+ r.type(\"\");\n+ ex = r.validate();\n+ assertNotNull(\"type validation should fail\", ex);\n+ assertTrue(ex.getMessage().contains(\"type is empty\"));\n+\n+ r.type(\"mytype\");\n+ ex = r.validate();\n+ assertNotNull(\"source validation should fail\", ex);\n+ assertTrue(ex.getMessage().contains(\"source is missing\"));\n+\n+ r.source(\"\");\n+ ex = r.validate();\n+ assertNotNull(\"source validation should fail\", ex);\n+ assertTrue(ex.getMessage().contains(\"source is empty\"));\n+\n+ r.source(\"somevalidmapping\");\n+ ex = r.validate();\n+ assertNull(\"validation should succeed\", ex);\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestTests.java", "status": "added" } ] }
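The added `validate()` checks follow the usual accumulate-then-report pattern: every problem with the request is collected up front so the caller gets a clear validation error instead of an NPE deep inside the mapping parser. Below is a hedged sketch of that pattern with a toy request class; the real code builds an `ActionRequestValidationException` via `addValidationError`, as shown in the diff above.

```
import java.util.ArrayList;
import java.util.List;

public class PutMappingValidationSketch {
    static final class Request {
        String type;
        String source;

        List<String> validate() {
            List<String> errors = new ArrayList<>();
            if (type == null) errors.add("mapping type is missing");
            else if (type.isEmpty()) errors.add("mapping type is empty");
            if (source == null) errors.add("mapping source is missing");
            else if (source.isEmpty()) errors.add("mapping source is empty");
            return errors;   // an empty list means the request is valid
        }
    }

    public static void main(String[] args) {
        Request r = new Request();
        r.type = "my_type";
        r.source = "";                        // the empty request body from #7536
        System.out.println(r.validate());     // [mapping source is empty]
    }
}
```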
{ "body": "When creating a new index, if I have eager fielddata loading configured for the `_timestamp` field, that sticks.\n\nHowever when updating this setting via the PUT mapping API, although the request is acknowledged when I check the mapping via GET, fielddata configuration for that field is blank.\n\nI have been able to successfully configure it for other fields via the PUT mapping API, though. Seems like this problem may be specific to the special underscore-prefixed fields, though I've only really tried `_timestamp`.\n", "comments": [ { "body": "Hi @shikhar \n\nWhat version are you using? Also, could you provide a small curl recreation of the problem?\n\nthanks\n", "created_at": "2014-07-22T13:02:22Z" }, { "body": "```\ncurl -XPOST localhost:9200/test -d '{\n \"settings\" : {\n \"number_of_shards\" : 1\n },\n \"mappings\" : {\n \"type1\" : {\n \"_timestamp\" : { \"enabled\" : true },\n \"properties\" : {\n \"field1\" : { \"type\" : \"string\", \"index\" : \"not_analyzed\" }\n }\n }\n }\n}'\n```\n\n```\ncurl -XPUT 'localhost:9200/test/type1/_mapping' -d '{ \n \"type1\" : {\n \"_timestamp\" : { \"enabled\" : true, \"fielddata\": { \"loading\": \"eager\" } },\n \"properties\" : {\n \"field1\" : { \"type\" : \"string\", \"index\" : \"not_analyzed\", \"fielddata\": { \"loading\": \"eager\" } }\n }\n }\n }'\n```\n\nacknowledged: true\n\nbut:\n\n```\ncurl -XGET 'localhost:9200/test/type1/_mapping?pretty=true'\n{\n \"test\" : {\n \"mappings\" : {\n \"type1\" : {\n \"_timestamp\" : {\n \"enabled\" : true\n },\n \"properties\" : {\n \"field1\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\",\n \"fielddata\" : {\n \"loading\" : \"eager\"\n }\n }\n }\n }\n }\n }\n}\n```\n", "created_at": "2014-08-20T21:00:26Z" }, { "body": "forgot to mention, this is on ES 1.3.0\n", "created_at": "2014-08-21T06:51:52Z" }, { "body": "The fielddata settings are not merged when the mapping is updated (https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java#L280). This is similar to issue #777 and #5772. I will go through all the different root mappers and fix one by one, just have not gotten further than `_ttl` (#7316) yet.\n", "created_at": "2014-08-21T08:11:13Z" } ], "number": 6958, "title": "Unable to configure eager fielddata loading for _timestamp field via PUT mapping API for type with existing mapping" }
{ "body": "Updates on the _timestamp field were silently ignored.\nNow _timestamp undergoes the same merge as regular\nfields. This includes exceptions if a property cannot\nbe changed.\n\"path\" and \"default\" cannot be changed.\n\ncloses #5772\ncloses #6958\npartially fixes #777\n", "number": 7614, "review_comments": [ { "body": "Is this needed? It would be printed below on assert failure right?\n", "created_at": "2014-09-05T22:02:36Z" }, { "body": "Is it necessary to parse the same source a second time? Wasn't that the point of the test above?\n", "created_at": "2014-09-05T22:04:35Z" }, { "body": "Is parsing twice really necessary?\n", "created_at": "2014-09-05T22:12:45Z" }, { "body": "Not important, but couldn't this just be an array?\nString[] possiblePathValues = {\"some_path\", \"anotherPath\", null};\n", "created_at": "2014-09-05T22:14:43Z" }, { "body": "Can we not use `*` imports? See comments in #7587.\n", "created_at": "2014-09-05T22:17:49Z" }, { "body": "Maybe using the term \"missing\" instead of null? It was simply omitted before right?\n", "created_at": "2014-09-05T22:19:42Z" }, { "body": "I'm coming around to not using `!` for negations because it is so easy to overlook. Instead you could use `== false`?\n", "created_at": "2014-09-05T22:20:50Z" }, { "body": "Was this change intentional?\n", "created_at": "2014-09-05T22:21:31Z" }, { "body": "This question actually bubbled up a bug in the doXContent on TimestampFieldMapper. Fixed in \"fix toXContent, build correct \"index\" value\"\n", "created_at": "2014-09-08T13:54:12Z" }, { "body": "Yes, this was intentional. When the store value is updated there is a conflict reported now. This happens even if the store value is set and then just left out when updatet the mapping.\n", "created_at": "2014-09-08T14:56:54Z" } ], "title": "Enable merging of properties in the `_timestamp` field" }
{ "commits": [ { "message": "_timestamp: enable mapper properties merging\n\nUpdates on the _timestamp field were silently ignored.\nNow _timestamp undergoes the same merge as regular\nfields. This includes exceptions if a prroperty cannot\nbe changed.\n\"path\" and \"default\" cannot be changed.\n\ncloses #5772\ncloses #6958\npartially fixes #777" }, { "message": "fix toXContent, build correct \"index\" value" }, { "message": "make List an array" }, { "message": "remove wirdcard import" }, { "message": "better readablity in merge method conflict mesage and code" } ], "files": [ { "diff": "@@ -249,7 +249,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (includeDefaults || enabledState != Defaults.ENABLED) {\n builder.field(\"enabled\", enabledState.enabled);\n }\n- if (includeDefaults || fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) {\n+ if (includeDefaults || (fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) || (fieldType.tokenized() != Defaults.FIELD_TYPE.tokenized())) {\n builder.field(\"index\", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized()));\n }\n if (includeDefaults || fieldType.stored() != Defaults.FIELD_TYPE.stored()) {\n@@ -277,10 +277,22 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n @Override\n public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappingException {\n TimestampFieldMapper timestampFieldMapperMergeWith = (TimestampFieldMapper) mergeWith;\n+ super.merge(mergeWith, mergeContext);\n if (!mergeContext.mergeFlags().simulate()) {\n if (timestampFieldMapperMergeWith.enabledState != enabledState && !timestampFieldMapperMergeWith.enabledState.unset()) {\n this.enabledState = timestampFieldMapperMergeWith.enabledState;\n }\n+ } else {\n+ if (!timestampFieldMapperMergeWith.defaultTimestamp().equals(defaultTimestamp)) {\n+ mergeContext.addConflict(\"Cannot update default in _timestamp value. Value is \" + defaultTimestamp.toString() + \" now encountering \" + timestampFieldMapperMergeWith.defaultTimestamp());\n+ }\n+ if (this.path != null) {\n+ if (path.equals(timestampFieldMapperMergeWith.path()) == false) {\n+ mergeContext.addConflict(\"Cannot update path in _timestamp value. Value is \" + path + \" path in merged mapping is \" + (timestampFieldMapperMergeWith.path() == null ? \"missing\" : timestampFieldMapperMergeWith.path()));\n+ }\n+ } else if (timestampFieldMapperMergeWith.path() != null) {\n+ mergeContext.addConflict(\"Cannot update path in _timestamp value. 
Value is \" + path + \" path in merged mapping is missing\");\n+ }\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java", "status": "modified" }, { "diff": "@@ -33,15 +33,17 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.SourceToParse;\n+import org.elasticsearch.index.mapper.DocumentMapperParser;\n+import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.junit.Test;\n \n import java.io.IOException;\n-import java.util.Locale;\n+import java.util.ArrayList;\n+import java.util.List;\n import java.util.Map;\n \n import static org.hamcrest.Matchers.*;\n@@ -89,7 +91,7 @@ public void testDefaultValues() throws Exception {\n assertThat(docMapper.timestampFieldMapper().enabled(), equalTo(TimestampFieldMapper.Defaults.ENABLED.enabled));\n assertThat(docMapper.timestampFieldMapper().fieldType().stored(), equalTo(TimestampFieldMapper.Defaults.FIELD_TYPE.stored()));\n assertThat(docMapper.timestampFieldMapper().fieldType().indexed(), equalTo(TimestampFieldMapper.Defaults.FIELD_TYPE.indexed()));\n- assertThat(docMapper.timestampFieldMapper().path(), equalTo(null));\n+ assertThat(docMapper.timestampFieldMapper().path(), equalTo(TimestampFieldMapper.Defaults.PATH));\n assertThat(docMapper.timestampFieldMapper().dateTimeFormatter().format(), equalTo(TimestampFieldMapper.DEFAULT_DATE_TIME_FORMAT));\n }\n \n@@ -390,4 +392,172 @@ public void testDefaultTimestampStream() throws IOException {\n assertThat(metaData, is(expected));\n }\n }\n+\n+ @Test\n+ public void testMergingFielddataLoadingWorks() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", randomBoolean()).startObject(\"fielddata\").field(\"loading\", \"lazy\").field(\"format\", \"doc_values\").endObject().field(\"store\", \"yes\").endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.LAZY));\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getFormat(docMapper.timestampFieldMapper().fieldDataType().getSettings()), equalTo(\"doc_values\"));\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", randomBoolean()).startObject(\"fielddata\").field(\"loading\", \"eager\").field(\"format\", \"array\").endObject().field(\"store\", \"yes\").endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper.MergeResult mergeResult = docMapper.merge(parser.parse(mapping), DocumentMapper.MergeFlags.mergeFlags().simulate(false));\n+ assertThat(mergeResult.conflicts().length, equalTo(0));\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.EAGER));\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getFormat(docMapper.timestampFieldMapper().fieldDataType().getSettings()), 
equalTo(\"array\"));\n+ }\n+\n+ @Test\n+ public void testParsingNotDefaultTwiceDoesNotChangeMapping() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .field(\"index\", randomBoolean() ? \"no\" : \"analyzed\") // default is \"not_analyzed\" which will be omitted when building the source again\n+ .field(\"store\", true)\n+ .field(\"path\", \"foo\")\n+ .field(\"default\", \"1970-01-01\")\n+ .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n+ .endObject()\n+ .startObject(\"properties\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ docMapper.refreshSource();\n+ docMapper = parser.parse(docMapper.mappingSource().string());\n+ assertThat(docMapper.mappingSource().string(), equalTo(mapping));\n+ }\n+\n+ @Test\n+ public void testParsingTwiceDoesNotChangeTokenizeValue() throws Exception {\n+ String[] index_options = {\"no\", \"analyzed\", \"not_analyzed\"};\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .field(\"index\", index_options[randomInt(2)])\n+ .field(\"store\", true)\n+ .field(\"path\", \"foo\")\n+ .field(\"default\", \"1970-01-01\")\n+ .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n+ .endObject()\n+ .startObject(\"properties\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ boolean tokenized = docMapper.timestampFieldMapper().fieldType().tokenized();\n+ docMapper.refreshSource();\n+ docMapper = parser.parse(docMapper.mappingSource().string());\n+ assertThat(tokenized, equalTo(docMapper.timestampFieldMapper().fieldType().tokenized()));\n+ }\n+\n+ @Test\n+ public void testMergingConflicts() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n+ .field(\"store\", \"yes\")\n+ .field(\"index\", \"analyzed\")\n+ .field(\"path\", \"foo\")\n+ .field(\"default\", \"1970-01-01\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.LAZY));\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", false)\n+ .startObject(\"fielddata\").field(\"format\", \"array\").endObject()\n+ .field(\"store\", \"no\")\n+ .field(\"index\", \"no\")\n+ .field(\"path\", \"bar\")\n+ .field(\"default\", \"1970-01-02\")\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper.MergeResult mergeResult = docMapper.merge(parser.parse(mapping), DocumentMapper.MergeFlags.mergeFlags().simulate(true));\n+ String[] expectedConflicts = {\"mapper [_timestamp] has different index values\", \"mapper [_timestamp] has different store values\", \"Cannot update default in _timestamp value. 
Value is 1970-01-01 now encountering 1970-01-02\", \"Cannot update path in _timestamp value. Value is foo path in merged mapping is bar\", \"mapper [_timestamp] has different tokenize values\"};\n+\n+ for (String conflict : mergeResult.conflicts()) {\n+ assertThat(conflict, isIn(expectedConflicts));\n+ }\n+ assertThat(mergeResult.conflicts().length, equalTo(expectedConflicts.length));\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.LAZY));\n+ assertTrue(docMapper.timestampFieldMapper().enabled());\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getFormat(docMapper.timestampFieldMapper().fieldDataType().getSettings()), equalTo(\"doc_values\"));\n+ }\n+\n+ @Test\n+ public void testMergingConflictsForIndexValues() throws Exception {\n+ List<String> indexValues = new ArrayList<>();\n+ indexValues.add(\"analyzed\");\n+ indexValues.add(\"no\");\n+ indexValues.add(\"not_analyzed\");\n+ String mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_timestamp\")\n+ .field(\"index\", indexValues.remove(randomInt(2)))\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_timestamp\")\n+ .field(\"index\", indexValues.remove(randomInt(1)))\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper.MergeResult mergeResult = docMapper.merge(parser.parse(mapping), DocumentMapper.MergeFlags.mergeFlags().simulate(true));\n+ String[] expectedConflicts = {\"mapper [_timestamp] has different index values\", \"mapper [_timestamp] has different tokenize values\"};\n+\n+ for (String conflict : mergeResult.conflicts()) {\n+ assertThat(conflict, isIn(expectedConflicts));\n+ }\n+ }\n+\n+ @Test\n+ public void testMergePaths() throws Exception {\n+ String[] possiblePathValues = {\"some_path\", \"anotherPath\", null};\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+ XContentBuilder mapping1 = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_timestamp\");\n+ String path1 = possiblePathValues[randomInt(2)];\n+ if (path1!=null) {\n+ mapping1.field(\"path\", path1);\n+ }\n+ mapping1.endObject()\n+ .endObject().endObject();\n+ XContentBuilder mapping2 = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_timestamp\");\n+ String path2 = possiblePathValues[randomInt(2)];\n+ if (path2!=null) {\n+ mapping2.field(\"path\", path2);\n+ }\n+ mapping2.endObject()\n+ .endObject().endObject();\n+\n+ testConflict(mapping1.string(), mapping2.string(), parser, (path1 == path2 ? null : \"Cannot update path in _timestamp value\"));\n+ }\n+\n+ void testConflict(String mapping1, String mapping2, DocumentMapperParser parser, String conflict) throws IOException {\n+ DocumentMapper docMapper = parser.parse(mapping1);\n+ docMapper.refreshSource();\n+ docMapper = parser.parse(docMapper.mappingSource().string());\n+ DocumentMapper.MergeResult mergeResult = docMapper.merge(parser.parse(mapping2), DocumentMapper.MergeFlags.mergeFlags().simulate(true));\n+ assertThat(mergeResult.conflicts().length, equalTo(conflict == null ? 
0:1));\n+ if (conflict != null) {\n+ assertThat(mergeResult.conflicts()[0], containsString(conflict));\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java", "status": "modified" }, { "diff": "@@ -20,13 +20,18 @@\n package org.elasticsearch.index.mapper.update;\n \n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.MergeMappingException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n+import java.io.IOException;\n+import java.util.LinkedHashMap;\n+\n import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -126,4 +131,62 @@ private void compareMappingOnNodes(GetMappingsResponse previousMapping) {\n assertThat(previousMapping.getMappings().get(INDEX).get(TYPE).source(), equalTo(currentMapping.getMappings().get(INDEX).get(TYPE).source()));\n }\n }\n+\n+ @Test\n+ public void testUpdateTimestamp() throws IOException {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", randomBoolean()).startObject(\"fielddata\").field(\"loading\", \"lazy\").field(\"format\", \"doc_values\").endObject().field(\"store\", \"yes\").endObject()\n+ .endObject().endObject();\n+ client().admin().indices().prepareCreate(\"test\").addMapping(\"type\", mapping).get();\n+ GetMappingsResponse appliedMappings = client().admin().indices().prepareGetMappings(\"test\").get();\n+ LinkedHashMap timestampMapping = (LinkedHashMap) appliedMappings.getMappings().get(\"test\").get(\"type\").getSourceAsMap().get(\"_timestamp\");\n+ assertThat((Boolean) timestampMapping.get(\"store\"), equalTo(true));\n+ assertThat((String)((LinkedHashMap) timestampMapping.get(\"fielddata\")).get(\"loading\"), equalTo(\"lazy\"));\n+ assertThat((String)((LinkedHashMap) timestampMapping.get(\"fielddata\")).get(\"format\"), equalTo(\"doc_values\"));\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", randomBoolean()).startObject(\"fielddata\").field(\"loading\", \"eager\").field(\"format\", \"array\").endObject().field(\"store\", \"yes\").endObject()\n+ .endObject().endObject();\n+ PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(mapping).get();\n+ appliedMappings = client().admin().indices().prepareGetMappings(\"test\").get();\n+ timestampMapping = (LinkedHashMap) appliedMappings.getMappings().get(\"test\").get(\"type\").getSourceAsMap().get(\"_timestamp\");\n+ assertThat((Boolean) timestampMapping.get(\"store\"), equalTo(true));\n+ assertThat((String)((LinkedHashMap) timestampMapping.get(\"fielddata\")).get(\"loading\"), equalTo(\"eager\"));\n+ assertThat((String)((LinkedHashMap) timestampMapping.get(\"fielddata\")).get(\"format\"), equalTo(\"array\"));\n+ }\n+\n+ @Test\n+ public void testTimestampMergingConflicts() throws Exception {\n+ String mapping = 
XContentFactory.jsonBuilder().startObject().startObject(TYPE)\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n+ .field(\"store\", \"yes\")\n+ .field(\"index\", \"analyzed\")\n+ .field(\"path\", \"foo\")\n+ .field(\"default\", \"1970-01-01\")\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ client().admin().indices().prepareCreate(INDEX).addMapping(TYPE, mapping).get();\n+\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", false)\n+ .startObject(\"fielddata\").field(\"format\", \"array\").endObject()\n+ .field(\"store\", \"no\")\n+ .field(\"index\", \"no\")\n+ .field(\"path\", \"bar\")\n+ .field(\"default\", \"1970-01-02\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ GetMappingsResponse mappingsBeforeUpdateResponse = client().admin().indices().prepareGetMappings(INDEX).addTypes(TYPE).get();\n+ try {\n+ client().admin().indices().preparePutMapping(INDEX).setType(TYPE).setSource(mapping).get();\n+ fail(\"This should result in conflicts when merging the mapping\");\n+ } catch (MergeMappingException e) {\n+ String[] expectedConflicts = {\"mapper [_timestamp] has different index values\", \"mapper [_timestamp] has different store values\", \"Cannot update default in _timestamp value. Value is 1970-01-01 now encountering 1970-01-02\", \"Cannot update path in _timestamp value. Value is foo path in merged mapping is bar\"};\n+ for (String conflict : expectedConflicts) {\n+ assertThat(e.getDetailedMessage(), containsString(conflict));\n+ }\n+ }\n+ compareMappingOnNodes(mappingsBeforeUpdateResponse);\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingOnClusterTests.java", "status": "modified" }, { "diff": "@@ -122,7 +122,7 @@ public void testThatTimestampCanBeSwitchedOnAndOff() throws Exception {\n assertTimestampMappingEnabled(index, type, true);\n \n // update some field in the mapping\n- XContentBuilder updateMappingBuilder = jsonBuilder().startObject().startObject(\"_timestamp\").field(\"enabled\", false).endObject().endObject();\n+ XContentBuilder updateMappingBuilder = jsonBuilder().startObject().startObject(\"_timestamp\").field(\"enabled\", false).field(\"store\", true).endObject().endObject();\n PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(index).setType(type).setSource(updateMappingBuilder).get();\n assertAcked(putMappingResponse);\n ", "filename": "src/test/java/org/elasticsearch/timestamp/SimpleTimestampTests.java", "status": "modified" } ] }
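At the mapper level, the change above routes `_timestamp` through the same merge machinery as regular fields, so a simulated merge now reports conflicts for the immutable `path` and `default` settings instead of silently ignoring them. A test-style sketch, modeled on the `TimestampMappingTests` in this diff (`createIndex(...)` is the single-node test helper used there; the class and method names are made up):

```java
import org.elasticsearch.index.mapper.DocumentMapper;
import org.elasticsearch.index.mapper.DocumentMapperParser;
import org.elasticsearch.test.ElasticsearchSingleNodeTest;
import org.junit.Test;

public class TimestampMergeConflictSketch extends ElasticsearchSingleNodeTest {

    @Test
    public void testPathAndDefaultConflictsAreReported() throws Exception {
        String before = "{\"type\":{\"_timestamp\":{\"enabled\":true,\"path\":\"foo\",\"default\":\"1970-01-01\"}}}";
        String after  = "{\"type\":{\"_timestamp\":{\"enabled\":true,\"path\":\"bar\",\"default\":\"1970-01-02\"}}}";

        DocumentMapperParser parser = createIndex("test").mapperService().documentMapperParser();
        DocumentMapper docMapper = parser.parse(before);

        // simulate(true) only checks for conflicts, it does not apply the new mapping
        DocumentMapper.MergeResult result =
                docMapper.merge(parser.parse(after), DocumentMapper.MergeFlags.mergeFlags().simulate(true));

        // expected to contain messages such as
        //   "Cannot update path in _timestamp value. Value is foo path in merged mapping is bar"
        //   "Cannot update default in _timestamp value. Value is 1970-01-01 now encountering 1970-01-02"
        for (String conflict : result.conflicts()) {
            System.out.println(conflict);
        }
    }
}
```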
{ "body": "``` json\nDELETE /myindex\nPOST /myindex/mytype/1\n{\n \"prop1\":\"test1\"\n}\nGET /myindex/_search\n{\n \"fields\": [\n \"_timestamp\"\n ], \n \"query\": {\n \"match_all\": {}\n }\n}\nPUT /myindex/mytype/_mapping\n{\n\n \"mytype\": {\n \"_timestamp\": {\n \"enabled\": \"true\",\n \"store\": \"true\"\n }\n }\n}\nGET /myindex/mytype/_mapping\n```\n\nIf you create an index, put a doc into it, and then try to update the mapping to enable _timestamp and set store:true, you will notice that the response comes back as acknowledged:true with no error messages. But if you get the _mapping back, you will notice that store:true is not set (by design because we do not allow changing the store parameter post indexing). \n\nFor regular properties, if you attempt to update its store parameter post indexing, it will actually throw a 400 error indicating a MergeMappingException because of differences in store values. \n\nIt will be nice to add the same validation and throw a similar error when users attempt to update the store value of _timestamp post-indexing.\n", "comments": [ { "body": "Seems to the the same issue as #777\n", "created_at": "2014-08-04T06:24:38Z" } ], "number": 5772, "title": "Improve handling of store parameter updates post indexing" }
{ "body": "Updates on the _timestamp field were silently ignored.\nNow _timestamp undergoes the same merge as regular\nfields. This includes exceptions if a property cannot\nbe changed.\n\"path\" and \"default\" cannot be changed.\n\ncloses #5772\ncloses #6958\npartially fixes #777\n", "number": 7614, "review_comments": [ { "body": "Is this needed? It would be printed below on assert failure right?\n", "created_at": "2014-09-05T22:02:36Z" }, { "body": "Is it necessary to parse the same source a second time? Wasn't that the point of the test above?\n", "created_at": "2014-09-05T22:04:35Z" }, { "body": "Is parsing twice really necessary?\n", "created_at": "2014-09-05T22:12:45Z" }, { "body": "Not important, but couldn't this just be an array?\nString[] possiblePathValues = {\"some_path\", \"anotherPath\", null};\n", "created_at": "2014-09-05T22:14:43Z" }, { "body": "Can we not use `*` imports? See comments in #7587.\n", "created_at": "2014-09-05T22:17:49Z" }, { "body": "Maybe using the term \"missing\" instead of null? It was simply omitted before right?\n", "created_at": "2014-09-05T22:19:42Z" }, { "body": "I'm coming around to not using `!` for negations because it is so easy to overlook. Instead you could use `== false`?\n", "created_at": "2014-09-05T22:20:50Z" }, { "body": "Was this change intentional?\n", "created_at": "2014-09-05T22:21:31Z" }, { "body": "This question actually bubbled up a bug in the doXContent on TimestampFieldMapper. Fixed in \"fix toXContent, build correct \"index\" value\"\n", "created_at": "2014-09-08T13:54:12Z" }, { "body": "Yes, this was intentional. When the store value is updated there is a conflict reported now. This happens even if the store value is set and then just left out when updatet the mapping.\n", "created_at": "2014-09-08T14:56:54Z" } ], "title": "Enable merging of properties in the `_timestamp` field" }
{ "commits": [ { "message": "_timestamp: enable mapper properties merging\n\nUpdates on the _timestamp field were silently ignored.\nNow _timestamp undergoes the same merge as regular\nfields. This includes exceptions if a prroperty cannot\nbe changed.\n\"path\" and \"default\" cannot be changed.\n\ncloses #5772\ncloses #6958\npartially fixes #777" }, { "message": "fix toXContent, build correct \"index\" value" }, { "message": "make List an array" }, { "message": "remove wirdcard import" }, { "message": "better readablity in merge method conflict mesage and code" } ], "files": [ { "diff": "@@ -249,7 +249,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (includeDefaults || enabledState != Defaults.ENABLED) {\n builder.field(\"enabled\", enabledState.enabled);\n }\n- if (includeDefaults || fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) {\n+ if (includeDefaults || (fieldType.indexed() != Defaults.FIELD_TYPE.indexed()) || (fieldType.tokenized() != Defaults.FIELD_TYPE.tokenized())) {\n builder.field(\"index\", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized()));\n }\n if (includeDefaults || fieldType.stored() != Defaults.FIELD_TYPE.stored()) {\n@@ -277,10 +277,22 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n @Override\n public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappingException {\n TimestampFieldMapper timestampFieldMapperMergeWith = (TimestampFieldMapper) mergeWith;\n+ super.merge(mergeWith, mergeContext);\n if (!mergeContext.mergeFlags().simulate()) {\n if (timestampFieldMapperMergeWith.enabledState != enabledState && !timestampFieldMapperMergeWith.enabledState.unset()) {\n this.enabledState = timestampFieldMapperMergeWith.enabledState;\n }\n+ } else {\n+ if (!timestampFieldMapperMergeWith.defaultTimestamp().equals(defaultTimestamp)) {\n+ mergeContext.addConflict(\"Cannot update default in _timestamp value. Value is \" + defaultTimestamp.toString() + \" now encountering \" + timestampFieldMapperMergeWith.defaultTimestamp());\n+ }\n+ if (this.path != null) {\n+ if (path.equals(timestampFieldMapperMergeWith.path()) == false) {\n+ mergeContext.addConflict(\"Cannot update path in _timestamp value. Value is \" + path + \" path in merged mapping is \" + (timestampFieldMapperMergeWith.path() == null ? \"missing\" : timestampFieldMapperMergeWith.path()));\n+ }\n+ } else if (timestampFieldMapperMergeWith.path() != null) {\n+ mergeContext.addConflict(\"Cannot update path in _timestamp value. 
Value is \" + path + \" path in merged mapping is missing\");\n+ }\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java", "status": "modified" }, { "diff": "@@ -33,15 +33,17 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.SourceToParse;\n+import org.elasticsearch.index.mapper.DocumentMapperParser;\n+import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.junit.Test;\n \n import java.io.IOException;\n-import java.util.Locale;\n+import java.util.ArrayList;\n+import java.util.List;\n import java.util.Map;\n \n import static org.hamcrest.Matchers.*;\n@@ -89,7 +91,7 @@ public void testDefaultValues() throws Exception {\n assertThat(docMapper.timestampFieldMapper().enabled(), equalTo(TimestampFieldMapper.Defaults.ENABLED.enabled));\n assertThat(docMapper.timestampFieldMapper().fieldType().stored(), equalTo(TimestampFieldMapper.Defaults.FIELD_TYPE.stored()));\n assertThat(docMapper.timestampFieldMapper().fieldType().indexed(), equalTo(TimestampFieldMapper.Defaults.FIELD_TYPE.indexed()));\n- assertThat(docMapper.timestampFieldMapper().path(), equalTo(null));\n+ assertThat(docMapper.timestampFieldMapper().path(), equalTo(TimestampFieldMapper.Defaults.PATH));\n assertThat(docMapper.timestampFieldMapper().dateTimeFormatter().format(), equalTo(TimestampFieldMapper.DEFAULT_DATE_TIME_FORMAT));\n }\n \n@@ -390,4 +392,172 @@ public void testDefaultTimestampStream() throws IOException {\n assertThat(metaData, is(expected));\n }\n }\n+\n+ @Test\n+ public void testMergingFielddataLoadingWorks() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", randomBoolean()).startObject(\"fielddata\").field(\"loading\", \"lazy\").field(\"format\", \"doc_values\").endObject().field(\"store\", \"yes\").endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.LAZY));\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getFormat(docMapper.timestampFieldMapper().fieldDataType().getSettings()), equalTo(\"doc_values\"));\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", randomBoolean()).startObject(\"fielddata\").field(\"loading\", \"eager\").field(\"format\", \"array\").endObject().field(\"store\", \"yes\").endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper.MergeResult mergeResult = docMapper.merge(parser.parse(mapping), DocumentMapper.MergeFlags.mergeFlags().simulate(false));\n+ assertThat(mergeResult.conflicts().length, equalTo(0));\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.EAGER));\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getFormat(docMapper.timestampFieldMapper().fieldDataType().getSettings()), 
equalTo(\"array\"));\n+ }\n+\n+ @Test\n+ public void testParsingNotDefaultTwiceDoesNotChangeMapping() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .field(\"index\", randomBoolean() ? \"no\" : \"analyzed\") // default is \"not_analyzed\" which will be omitted when building the source again\n+ .field(\"store\", true)\n+ .field(\"path\", \"foo\")\n+ .field(\"default\", \"1970-01-01\")\n+ .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n+ .endObject()\n+ .startObject(\"properties\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ docMapper.refreshSource();\n+ docMapper = parser.parse(docMapper.mappingSource().string());\n+ assertThat(docMapper.mappingSource().string(), equalTo(mapping));\n+ }\n+\n+ @Test\n+ public void testParsingTwiceDoesNotChangeTokenizeValue() throws Exception {\n+ String[] index_options = {\"no\", \"analyzed\", \"not_analyzed\"};\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .field(\"index\", index_options[randomInt(2)])\n+ .field(\"store\", true)\n+ .field(\"path\", \"foo\")\n+ .field(\"default\", \"1970-01-01\")\n+ .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n+ .endObject()\n+ .startObject(\"properties\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ boolean tokenized = docMapper.timestampFieldMapper().fieldType().tokenized();\n+ docMapper.refreshSource();\n+ docMapper = parser.parse(docMapper.mappingSource().string());\n+ assertThat(tokenized, equalTo(docMapper.timestampFieldMapper().fieldType().tokenized()));\n+ }\n+\n+ @Test\n+ public void testMergingConflicts() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n+ .field(\"store\", \"yes\")\n+ .field(\"index\", \"analyzed\")\n+ .field(\"path\", \"foo\")\n+ .field(\"default\", \"1970-01-01\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.LAZY));\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", false)\n+ .startObject(\"fielddata\").field(\"format\", \"array\").endObject()\n+ .field(\"store\", \"no\")\n+ .field(\"index\", \"no\")\n+ .field(\"path\", \"bar\")\n+ .field(\"default\", \"1970-01-02\")\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper.MergeResult mergeResult = docMapper.merge(parser.parse(mapping), DocumentMapper.MergeFlags.mergeFlags().simulate(true));\n+ String[] expectedConflicts = {\"mapper [_timestamp] has different index values\", \"mapper [_timestamp] has different store values\", \"Cannot update default in _timestamp value. 
Value is 1970-01-01 now encountering 1970-01-02\", \"Cannot update path in _timestamp value. Value is foo path in merged mapping is bar\", \"mapper [_timestamp] has different tokenize values\"};\n+\n+ for (String conflict : mergeResult.conflicts()) {\n+ assertThat(conflict, isIn(expectedConflicts));\n+ }\n+ assertThat(mergeResult.conflicts().length, equalTo(expectedConflicts.length));\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getLoading(), equalTo(FieldMapper.Loading.LAZY));\n+ assertTrue(docMapper.timestampFieldMapper().enabled());\n+ assertThat(docMapper.timestampFieldMapper().fieldDataType().getFormat(docMapper.timestampFieldMapper().fieldDataType().getSettings()), equalTo(\"doc_values\"));\n+ }\n+\n+ @Test\n+ public void testMergingConflictsForIndexValues() throws Exception {\n+ List<String> indexValues = new ArrayList<>();\n+ indexValues.add(\"analyzed\");\n+ indexValues.add(\"no\");\n+ indexValues.add(\"not_analyzed\");\n+ String mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_timestamp\")\n+ .field(\"index\", indexValues.remove(randomInt(2)))\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ DocumentMapper docMapper = parser.parse(mapping);\n+ mapping = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_timestamp\")\n+ .field(\"index\", indexValues.remove(randomInt(1)))\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper.MergeResult mergeResult = docMapper.merge(parser.parse(mapping), DocumentMapper.MergeFlags.mergeFlags().simulate(true));\n+ String[] expectedConflicts = {\"mapper [_timestamp] has different index values\", \"mapper [_timestamp] has different tokenize values\"};\n+\n+ for (String conflict : mergeResult.conflicts()) {\n+ assertThat(conflict, isIn(expectedConflicts));\n+ }\n+ }\n+\n+ @Test\n+ public void testMergePaths() throws Exception {\n+ String[] possiblePathValues = {\"some_path\", \"anotherPath\", null};\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+ XContentBuilder mapping1 = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_timestamp\");\n+ String path1 = possiblePathValues[randomInt(2)];\n+ if (path1!=null) {\n+ mapping1.field(\"path\", path1);\n+ }\n+ mapping1.endObject()\n+ .endObject().endObject();\n+ XContentBuilder mapping2 = XContentFactory.jsonBuilder().startObject()\n+ .startObject(\"type\")\n+ .startObject(\"_timestamp\");\n+ String path2 = possiblePathValues[randomInt(2)];\n+ if (path2!=null) {\n+ mapping2.field(\"path\", path2);\n+ }\n+ mapping2.endObject()\n+ .endObject().endObject();\n+\n+ testConflict(mapping1.string(), mapping2.string(), parser, (path1 == path2 ? null : \"Cannot update path in _timestamp value\"));\n+ }\n+\n+ void testConflict(String mapping1, String mapping2, DocumentMapperParser parser, String conflict) throws IOException {\n+ DocumentMapper docMapper = parser.parse(mapping1);\n+ docMapper.refreshSource();\n+ docMapper = parser.parse(docMapper.mappingSource().string());\n+ DocumentMapper.MergeResult mergeResult = docMapper.merge(parser.parse(mapping2), DocumentMapper.MergeFlags.mergeFlags().simulate(true));\n+ assertThat(mergeResult.conflicts().length, equalTo(conflict == null ? 
0:1));\n+ if (conflict != null) {\n+ assertThat(mergeResult.conflicts()[0], containsString(conflict));\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java", "status": "modified" }, { "diff": "@@ -20,13 +20,18 @@\n package org.elasticsearch.index.mapper.update;\n \n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.MergeMappingException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n+import java.io.IOException;\n+import java.util.LinkedHashMap;\n+\n import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -126,4 +131,62 @@ private void compareMappingOnNodes(GetMappingsResponse previousMapping) {\n assertThat(previousMapping.getMappings().get(INDEX).get(TYPE).source(), equalTo(currentMapping.getMappings().get(INDEX).get(TYPE).source()));\n }\n }\n+\n+ @Test\n+ public void testUpdateTimestamp() throws IOException {\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", randomBoolean()).startObject(\"fielddata\").field(\"loading\", \"lazy\").field(\"format\", \"doc_values\").endObject().field(\"store\", \"yes\").endObject()\n+ .endObject().endObject();\n+ client().admin().indices().prepareCreate(\"test\").addMapping(\"type\", mapping).get();\n+ GetMappingsResponse appliedMappings = client().admin().indices().prepareGetMappings(\"test\").get();\n+ LinkedHashMap timestampMapping = (LinkedHashMap) appliedMappings.getMappings().get(\"test\").get(\"type\").getSourceAsMap().get(\"_timestamp\");\n+ assertThat((Boolean) timestampMapping.get(\"store\"), equalTo(true));\n+ assertThat((String)((LinkedHashMap) timestampMapping.get(\"fielddata\")).get(\"loading\"), equalTo(\"lazy\"));\n+ assertThat((String)((LinkedHashMap) timestampMapping.get(\"fielddata\")).get(\"format\"), equalTo(\"doc_values\"));\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", randomBoolean()).startObject(\"fielddata\").field(\"loading\", \"eager\").field(\"format\", \"array\").endObject().field(\"store\", \"yes\").endObject()\n+ .endObject().endObject();\n+ PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(mapping).get();\n+ appliedMappings = client().admin().indices().prepareGetMappings(\"test\").get();\n+ timestampMapping = (LinkedHashMap) appliedMappings.getMappings().get(\"test\").get(\"type\").getSourceAsMap().get(\"_timestamp\");\n+ assertThat((Boolean) timestampMapping.get(\"store\"), equalTo(true));\n+ assertThat((String)((LinkedHashMap) timestampMapping.get(\"fielddata\")).get(\"loading\"), equalTo(\"eager\"));\n+ assertThat((String)((LinkedHashMap) timestampMapping.get(\"fielddata\")).get(\"format\"), equalTo(\"array\"));\n+ }\n+\n+ @Test\n+ public void testTimestampMergingConflicts() throws Exception {\n+ String mapping = 
XContentFactory.jsonBuilder().startObject().startObject(TYPE)\n+ .startObject(\"_timestamp\").field(\"enabled\", true)\n+ .startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject()\n+ .field(\"store\", \"yes\")\n+ .field(\"index\", \"analyzed\")\n+ .field(\"path\", \"foo\")\n+ .field(\"default\", \"1970-01-01\")\n+ .endObject()\n+ .endObject().endObject().string();\n+\n+ client().admin().indices().prepareCreate(INDEX).addMapping(TYPE, mapping).get();\n+\n+ mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_timestamp\").field(\"enabled\", false)\n+ .startObject(\"fielddata\").field(\"format\", \"array\").endObject()\n+ .field(\"store\", \"no\")\n+ .field(\"index\", \"no\")\n+ .field(\"path\", \"bar\")\n+ .field(\"default\", \"1970-01-02\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ GetMappingsResponse mappingsBeforeUpdateResponse = client().admin().indices().prepareGetMappings(INDEX).addTypes(TYPE).get();\n+ try {\n+ client().admin().indices().preparePutMapping(INDEX).setType(TYPE).setSource(mapping).get();\n+ fail(\"This should result in conflicts when merging the mapping\");\n+ } catch (MergeMappingException e) {\n+ String[] expectedConflicts = {\"mapper [_timestamp] has different index values\", \"mapper [_timestamp] has different store values\", \"Cannot update default in _timestamp value. Value is 1970-01-01 now encountering 1970-01-02\", \"Cannot update path in _timestamp value. Value is foo path in merged mapping is bar\"};\n+ for (String conflict : expectedConflicts) {\n+ assertThat(e.getDetailedMessage(), containsString(conflict));\n+ }\n+ }\n+ compareMappingOnNodes(mappingsBeforeUpdateResponse);\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingOnClusterTests.java", "status": "modified" }, { "diff": "@@ -122,7 +122,7 @@ public void testThatTimestampCanBeSwitchedOnAndOff() throws Exception {\n assertTimestampMappingEnabled(index, type, true);\n \n // update some field in the mapping\n- XContentBuilder updateMappingBuilder = jsonBuilder().startObject().startObject(\"_timestamp\").field(\"enabled\", false).endObject().endObject();\n+ XContentBuilder updateMappingBuilder = jsonBuilder().startObject().startObject(\"_timestamp\").field(\"enabled\", false).field(\"store\", true).endObject().endObject();\n PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(index).setType(type).setSource(updateMappingBuilder).get();\n assertAcked(putMappingResponse);\n ", "filename": "src/test/java/org/elasticsearch/timestamp/SimpleTimestampTests.java", "status": "modified" } ] }
{ "body": "In #7238 it looked like index corruption (checksum errors) but in fact it was simply that the user selected bloom_pulsing postings format, which we don't support yet still allow.\n\nWe recently removed documentation showing these postings format as a choice, but it's still really dangerous we allow this option at all since it creates unusable indices in ES when we migrate shards and try to check integrity. Before 1.3, ES didn't check Lucene checksums, so these postings formats worked fine, but with 1.3 any index using pulsing will fail.\n\nThe pulsing optimization has already been folded into the default postings format for quite a while now.\n\nI think we should remove them; we are already removing pulsing from Lucene (https://issues.apache.org/jira/browse/LUCENE-5915)\n", "comments": [ { "body": "Should we go further and disable (for now) any custom formats that don't have backwards compatibility support from lucene? These can change across releases in such a way that looks like corruption.\n\nWe are currently trying to figure out a way in Lucene to safely provide options to the user AND backwards compatibility, but this is not going to happen overnight.\n", "created_at": "2014-09-03T16:54:15Z" }, { "body": "> Should we go further and disable (for now) any custom formats that don't have backwards compatibility support from lucene? These can change across releases in such a way that looks like corruption.\n\n+1, I'll do that.\n", "created_at": "2014-09-03T18:56:36Z" }, { "body": "++ to removing all the non-checksumming formats!\n", "created_at": "2014-09-03T19:38:44Z" } ], "number": 7566, "title": "Mapping: Remove pulsing/bloom_pulsing postings format" }
{ "body": "Today ES allows you to pick e.g. \"pulsing\", but this is very dangerous because that format, and all other postings/doc values formats from the Lucene codecs module, has no backwards compatibility support in Lucene. So on upgrade you can easily hit strange exceptions that make your index unusable / look like index corruption.\n\nSo I removed lucene-codecs JAR entirely from ES, which e.g. removes direct, simple text, memory PF, Lucene's BloomFilteringPF, and disk/memory DVF.\n\nI haven't verified, but I think users can still put the Lucene codecs JAR onto ES's CLASSPATH (e.g. in with a plugin) and then use these formats in their own apps (at their own risk). I think this extra step is better than the ease today with which users can select these formats that Lucene doesn't support.\n\nToday ES allows you to pick e.g. \"pulsing\", but this is very dangerous because that format, and all other postings/doc values formats from the Lucene codecs module, has no backwards compatibility support in Lucene. So on upgrade you can easily hit strange exceptions that make your index unusable / look like index corruption.\n\nSo I removed lucene-codecs JAR entirely from ES, which e.g. removes direct, simple text, memory PF, Lucene's BloomFilteringPF, and disk/memory DVF.\n\nI haven't verified, but I think users can still put the Lucene codecs JAR onto ES's CLASSPATH (e.g. in with a plugin) and then use these formats in their own apps (at their own risk). I think this extra step is better than the ease today with which users can select these formats that Lucene doesn't support.\n\nSee #7566 and #7238\n", "number": 7604, "review_comments": [], "title": "Remove unsupported `postings_format` / `doc_values_format`" }
{ "commits": [ { "message": "remove dependence on Lucene codecs" } ], "files": [ { "diff": "@@ -89,12 +89,6 @@\n <version>${lucene.version}</version>\n <scope>compile</scope>\n </dependency>\n- <dependency>\n- <groupId>org.apache.lucene</groupId>\n- <artifactId>lucene-codecs</artifactId>\n- <version>${lucene.version}</version>\n- <scope>compile</scope>\n- </dependency>\n <dependency>\n <groupId>org.apache.lucene</groupId>\n <artifactId>lucene-queries</artifactId>", "filename": "pom.xml", "status": "modified" }, { "diff": "@@ -22,70 +22,46 @@\n import com.google.common.collect.ImmutableCollection;\n import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.codecs.PostingsFormat;\n-import org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat;\n-import org.apache.lucene.codecs.memory.DirectPostingsFormat;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.util.BloomFilter;\n \n /**\n- * This class represents the set of Elasticsearch \"build-in\"\n+ * This class represents the set of Elasticsearch \"built-in\"\n * {@link PostingsFormatProvider.Factory postings format factories}\n * <ul>\n- * <li><b>direct</b>: a postings format that uses disk-based storage but loads\n- * its terms and postings directly into memory. Note this postings format is\n- * very memory intensive and has certain limitation that don't allow segments to\n- * grow beyond 2.1GB see {@link DirectPostingsFormat} for details.</li>\n- * <p/>\n- * <li><b>memory</b>: a postings format that stores its entire terms, postings,\n- * positions and payloads in a finite state transducer. This format should only\n- * be used for primary keys or with fields where each term is contained in a\n- * very low number of documents.</li>\n- * <p/>\n- * <li><b>pulsing</b>: a postings format in-lines the posting lists for very low\n- * frequent terms in the term dictionary. This is useful to improve lookup\n- * performance for low-frequent terms.</li>\n- * <p/>\n * <li><b>bloom_default</b>: a postings format that uses a bloom filter to\n * improve term lookup performance. This is useful for primarily keys or fields\n * that are used as a delete key</li>\n- * <p/>\n- * <li><b>bloom_pulsing</b>: a postings format that combines the advantages of\n- * <b>bloom</b> and <b>pulsing</b> to further improve lookup performance</li>\n- * <p/>\n * <li><b>default</b>: the default Elasticsearch postings format offering best\n * general purpose performance. This format is used if no postings format is\n * specified in the field mapping.</li>\n+ * <li><b>***</b>: other formats from Lucene core (e.g. Lucene41 as of Lucene 4.10)\n * </ul>\n */\n public class PostingFormats {\n \n private static final ImmutableMap<String, PreBuiltPostingsFormatProvider.Factory> builtInPostingFormats;\n \n static {\n- MapBuilder<String, PreBuiltPostingsFormatProvider.Factory> buildInPostingFormatsX = MapBuilder.newMapBuilder();\n- // add defaults ones\n+ MapBuilder<String, PreBuiltPostingsFormatProvider.Factory> builtInPostingFormatsX = MapBuilder.newMapBuilder();\n+ // Add any PostingsFormat visible in the CLASSPATH (from Lucene core or via user's plugins). 
Note that we no longer include\n+ // lucene codecs module since those codecs have no backwards compatibility between releases and can easily cause exceptions that\n+ // look like index corruption on upgrade:\n for (String luceneName : PostingsFormat.availablePostingsFormats()) {\n- buildInPostingFormatsX.put(luceneName, new PreBuiltPostingsFormatProvider.Factory(PostingsFormat.forName(luceneName)));\n+ builtInPostingFormatsX.put(luceneName, new PreBuiltPostingsFormatProvider.Factory(PostingsFormat.forName(luceneName)));\n }\n final PostingsFormat defaultFormat = new Elasticsearch090PostingsFormat();\n- buildInPostingFormatsX.put(\"direct\", new PreBuiltPostingsFormatProvider.Factory(\"direct\", PostingsFormat.forName(\"Direct\")));\n- buildInPostingFormatsX.put(\"memory\", new PreBuiltPostingsFormatProvider.Factory(\"memory\", PostingsFormat.forName(\"Memory\")));\n- // LUCENE UPGRADE: Need to change this to the relevant ones on a lucene upgrade\n- buildInPostingFormatsX.put(\"pulsing\", new PreBuiltPostingsFormatProvider.Factory(\"pulsing\", PostingsFormat.forName(\"Pulsing41\")));\n- buildInPostingFormatsX.put(PostingsFormatService.DEFAULT_FORMAT, new PreBuiltPostingsFormatProvider.Factory(PostingsFormatService.DEFAULT_FORMAT, defaultFormat));\n+ builtInPostingFormatsX.put(PostingsFormatService.DEFAULT_FORMAT,\n+ new PreBuiltPostingsFormatProvider.Factory(PostingsFormatService.DEFAULT_FORMAT, defaultFormat));\n \n- buildInPostingFormatsX.put(\"bloom_pulsing\", new PreBuiltPostingsFormatProvider.Factory(\"bloom_pulsing\", wrapInBloom(PostingsFormat.forName(\"Pulsing41\"))));\n- buildInPostingFormatsX.put(\"bloom_default\", new PreBuiltPostingsFormatProvider.Factory(\"bloom_default\", wrapInBloom(PostingsFormat.forName(\"Lucene41\"))));\n+ builtInPostingFormatsX.put(\"bloom_default\", new PreBuiltPostingsFormatProvider.Factory(\"bloom_default\", wrapInBloom(PostingsFormat.forName(\"Lucene41\"))));\n \n- builtInPostingFormats = buildInPostingFormatsX.immutableMap();\n+ builtInPostingFormats = builtInPostingFormatsX.immutableMap();\n }\n \n public static final boolean luceneBloomFilter = false;\n \n static PostingsFormat wrapInBloom(PostingsFormat delegate) {\n- if (luceneBloomFilter) {\n- return new BloomFilteringPostingsFormat(delegate, new BloomFilterLucenePostingsFormatProvider.CustomBloomFilterFactory());\n- }\n return new BloomFilterPostingsFormat(delegate, BloomFilter.Factory.DEFAULT);\n }\n ", "filename": "src/main/java/org/elasticsearch/index/codec/postingsformat/PostingFormats.java", "status": "modified" }, { "diff": "@@ -101,9 +101,7 @@ public static PostingsFormatProvider lookup(@IndexSettings Settings indexSetting\n \n /**\n * A simple factory used to create {@link PostingsFormatProvider} used by\n- * delegating providers like {@link BloomFilterLucenePostingsFormatProvider} or\n- * {@link PulsingPostingsFormatProvider}. 
Those providers wrap other\n- * postings formats to enrich their capabilities.\n+ * delegating providers.\n */\n public interface Factory {\n PostingsFormatProvider create(String name, Settings settings);", "filename": "src/main/java/org/elasticsearch/index/codec/postingsformat/PostingsFormatProvider.java", "status": "modified" }, { "diff": "@@ -62,32 +62,7 @@ public void testFieldsWithCustomPostingsFormat() throws Exception {\n .field(\"postings_format\", \"test1\").field(\"index_options\", \"docs\").field(\"type\", \"string\").endObject().endObject().endObject().endObject())\n .setSettings(ImmutableSettings.settingsBuilder()\n .put(indexSettings())\n- .put(\"index.codec.postings_format.test1.type\", \"pulsing\")));\n-\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field1\", \"quick brown fox\", \"field2\", \"quick brown fox\").execute().actionGet();\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field1\", \"quick lazy huge brown fox\", \"field2\", \"quick lazy huge brown fox\").setRefresh(true).execute().actionGet();\n-\n- SearchResponse searchResponse = client().prepareSearch().setQuery(QueryBuilders.matchQuery(\"field2\", \"quick brown\").type(MatchQueryBuilder.Type.PHRASE).slop(0)).execute().actionGet();\n- assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n- try {\n- client().prepareSearch().setQuery(QueryBuilders.matchQuery(\"field1\", \"quick brown\").type(MatchQueryBuilder.Type.PHRASE).slop(0)).execute().actionGet();\n- } catch (SearchPhaseExecutionException e) {\n- assertThat(e.getMessage(), endsWith(\"IllegalStateException[field \\\"field1\\\" was indexed without position data; cannot run PhraseQuery (term=quick)]; }\"));\n- }\n- }\n-\n- @Test\n- public void testIndexingWithSimpleTextCodec() throws Exception {\n- try {\n- client().admin().indices().prepareDelete(\"test\").execute().actionGet();\n- } catch (Exception e) {\n- // ignore\n- }\n-\n- assertAcked(prepareCreate(\"test\")\n- .setSettings(ImmutableSettings.settingsBuilder()\n- .put(indexSettings())\n- .put(\"index.codec\", \"SimpleText\")));\n+ .put(\"index.codec.postings_format.test1.type\", \"default\")));\n \n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field1\", \"quick brown fox\", \"field2\", \"quick brown fox\").execute().actionGet();\n client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field1\", \"quick lazy huge brown fox\", \"field2\", \"quick lazy huge brown fox\").setRefresh(true).execute().actionGet();\n@@ -116,7 +91,7 @@ public void testCustomDocValuesFormat() throws IOException {\n .endObject().endObject())\n .setSettings(ImmutableSettings.settingsBuilder()\n .put(indexSettings())\n- .put(\"index.codec.doc_values_format.dvf.type\", \"disk\")));\n+ .put(\"index.codec.doc_values_format.dvf.type\", \"default\")));\n \n for (int i = 10; i >= 0; --i) {\n client().prepareIndex(\"test\", \"test\", Integer.toString(i)).setSource(\"field\", randomLong()).setRefresh(i == 0 || rarely()).execute().actionGet();", "filename": "src/test/java/org/elasticsearch/codecs/CodecTests.java", "status": "modified" }, { "diff": "@@ -19,38 +19,32 @@\n \n package org.elasticsearch.index.codec;\n \n+import java.util.Arrays;\n+\n import org.apache.lucene.codecs.Codec;\n import org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat;\n import org.apache.lucene.codecs.lucene40.Lucene40Codec;\n import org.apache.lucene.codecs.lucene41.Lucene41Codec;\n+import org.apache.lucene.codecs.lucene41.Lucene41PostingsFormat;\n import 
org.apache.lucene.codecs.lucene42.Lucene42Codec;\n import org.apache.lucene.codecs.lucene45.Lucene45Codec;\n import org.apache.lucene.codecs.lucene46.Lucene46Codec;\n import org.apache.lucene.codecs.lucene49.Lucene49Codec;\n import org.apache.lucene.codecs.lucene49.Lucene49DocValuesFormat;\n-import org.apache.lucene.codecs.memory.DirectPostingsFormat;\n-import org.apache.lucene.codecs.memory.MemoryDocValuesFormat;\n-import org.apache.lucene.codecs.memory.MemoryPostingsFormat;\n import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat;\n-import org.apache.lucene.codecs.pulsing.Pulsing41PostingsFormat;\n-import org.apache.lucene.codecs.simpletext.SimpleTextCodec;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.codec.docvaluesformat.*;\n import org.elasticsearch.index.codec.postingsformat.*;\n import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.internal.IdFieldMapper;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.index.mapper.internal.VersionFieldMapper;\n import org.elasticsearch.index.service.IndexService;\n import org.elasticsearch.test.ElasticsearchSingleNodeLuceneTestCase;\n import org.junit.Before;\n import org.junit.Test;\n \n-import java.io.IOException;\n-import java.util.Arrays;\n-\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.instanceOf;\n \n@@ -74,7 +68,6 @@ public void testResolveDefaultCodecs() throws Exception {\n assertThat(codecService.codec(\"Lucene40\"), instanceOf(Lucene40Codec.class));\n assertThat(codecService.codec(\"Lucene41\"), instanceOf(Lucene41Codec.class));\n assertThat(codecService.codec(\"Lucene42\"), instanceOf(Lucene42Codec.class));\n- assertThat(codecService.codec(\"SimpleText\"), instanceOf(SimpleTextCodec.class));\n }\n \n @Test\n@@ -100,39 +93,15 @@ public void testResolveDefaultPostingFormats() throws Exception {\n \n assertThat(postingsFormatService.get(\"XBloomFilter\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n assertThat(postingsFormatService.get(\"XBloomFilter\").get(), instanceOf(BloomFilterPostingsFormat.class));\n-\n- if (PostingFormats.luceneBloomFilter) {\n- assertThat(postingsFormatService.get(\"bloom_pulsing\").get(), instanceOf(BloomFilteringPostingsFormat.class));\n- } else {\n- assertThat(postingsFormatService.get(\"bloom_pulsing\").get(), instanceOf(BloomFilterPostingsFormat.class));\n- }\n-\n- assertThat(postingsFormatService.get(\"pulsing\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(postingsFormatService.get(\"pulsing\").get(), instanceOf(Pulsing41PostingsFormat.class));\n- assertThat(postingsFormatService.get(\"Pulsing41\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(postingsFormatService.get(\"Pulsing41\").get(), instanceOf(Pulsing41PostingsFormat.class));\n-\n- assertThat(postingsFormatService.get(\"memory\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(postingsFormatService.get(\"memory\").get(), instanceOf(MemoryPostingsFormat.class));\n- assertThat(postingsFormatService.get(\"Memory\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(postingsFormatService.get(\"Memory\").get(), instanceOf(MemoryPostingsFormat.class));\n-\n- assertThat(postingsFormatService.get(\"direct\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- 
assertThat(postingsFormatService.get(\"direct\").get(), instanceOf(DirectPostingsFormat.class));\n- assertThat(postingsFormatService.get(\"Direct\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(postingsFormatService.get(\"Direct\").get(), instanceOf(DirectPostingsFormat.class));\n }\n \n @Test\n public void testResolveDefaultDocValuesFormats() throws Exception {\n DocValuesFormatService docValuesFormatService = createCodecService().docValuesFormatService();\n \n- for (String dvf : Arrays.asList(\"memory\", \"disk\", \"Disk\", \"default\")) {\n+ for (String dvf : Arrays.asList(\"default\")) {\n assertThat(docValuesFormatService.get(dvf), instanceOf(PreBuiltDocValuesFormatProvider.class));\n }\n- assertThat(docValuesFormatService.get(\"memory\").get(), instanceOf(MemoryDocValuesFormat.class));\n- assertThat(docValuesFormatService.get(\"disk\").get(), instanceOf(Lucene49DocValuesFormat.class));\n- assertThat(docValuesFormatService.get(\"Disk\").get(), instanceOf(Lucene49DocValuesFormat.class));\n assertThat(docValuesFormatService.get(\"default\").get(), instanceOf(Lucene49DocValuesFormat.class));\n }\n \n@@ -161,164 +130,16 @@ public void testResolvePostingFormatsFromMapping_default() throws Exception {\n assertThat(provider.maxBlockSize(), equalTo(64));\n }\n \n- @Test\n- public void testResolvePostingFormatsFromMapping_memory() throws Exception {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\")\n- .startObject(\"field1\").field(\"type\", \"string\").field(\"postings_format\", \"memory\").endObject()\n- .startObject(\"field2\").field(\"type\", \"string\").field(\"postings_format\", \"my_format1\").endObject()\n- .endObject()\n- .endObject().endObject().string();\n-\n- Settings indexSettings = ImmutableSettings.settingsBuilder()\n- .put(\"index.codec.postings_format.my_format1.type\", \"memory\")\n- .put(\"index.codec.postings_format.my_format1.pack_fst\", true)\n- .put(\"index.codec.postings_format.my_format1.acceptable_overhead_ratio\", 0.3f)\n- .build();\n- CodecService codecService = createCodecService(indexSettings);\n- DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider(), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider().get(), instanceOf(MemoryPostingsFormat.class));\n-\n- assertThat(documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider(), instanceOf(MemoryPostingsFormatProvider.class));\n- MemoryPostingsFormatProvider provider = (MemoryPostingsFormatProvider) documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider();\n- assertThat(provider.packFst(), equalTo(true));\n- assertThat(provider.acceptableOverheadRatio(), equalTo(0.3f));\n- }\n-\n- @Test\n- public void testResolvePostingFormatsFromMapping_direct() throws Exception {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\")\n- .startObject(\"field1\").field(\"type\", \"string\").field(\"postings_format\", \"direct\").endObject()\n- .startObject(\"field2\").field(\"type\", \"string\").field(\"postings_format\", \"my_format1\").endObject()\n- .endObject()\n- .endObject().endObject().string();\n-\n- Settings indexSettings = ImmutableSettings.settingsBuilder()\n- .put(\"index.codec.postings_format.my_format1.type\", 
\"direct\")\n- .put(\"index.codec.postings_format.my_format1.min_skip_count\", 16)\n- .put(\"index.codec.postings_format.my_format1.low_freq_cutoff\", 64)\n- .build();\n- CodecService codecService = createCodecService(indexSettings);\n- DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider(), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider().get(), instanceOf(DirectPostingsFormat.class));\n-\n- assertThat(documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider(), instanceOf(DirectPostingsFormatProvider.class));\n- DirectPostingsFormatProvider provider = (DirectPostingsFormatProvider) documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider();\n- assertThat(provider.minSkipCount(), equalTo(16));\n- assertThat(provider.lowFreqCutoff(), equalTo(64));\n- }\n-\n- @Test\n- public void testResolvePostingFormatsFromMapping_pulsing() throws Exception {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\")\n- .startObject(\"field1\").field(\"type\", \"string\").field(\"postings_format\", \"pulsing\").endObject()\n- .startObject(\"field2\").field(\"type\", \"string\").field(\"postings_format\", \"my_format1\").endObject()\n- .endObject()\n- .endObject().endObject().string();\n-\n- Settings indexSettings = ImmutableSettings.settingsBuilder()\n- .put(\"index.codec.postings_format.my_format1.type\", \"pulsing\")\n- .put(\"index.codec.postings_format.my_format1.freq_cut_off\", 2)\n- .put(\"index.codec.postings_format.my_format1.min_block_size\", 32)\n- .put(\"index.codec.postings_format.my_format1.max_block_size\", 64)\n- .build();\n- CodecService codecService = createCodecService(indexSettings);\n- DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider(), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider().get(), instanceOf(Pulsing41PostingsFormat.class));\n-\n- assertThat(documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider(), instanceOf(PulsingPostingsFormatProvider.class));\n- PulsingPostingsFormatProvider provider = (PulsingPostingsFormatProvider) documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider();\n- assertThat(provider.freqCutOff(), equalTo(2));\n- assertThat(provider.minBlockSize(), equalTo(32));\n- assertThat(provider.maxBlockSize(), equalTo(64));\n- }\n-\n- @Test\n- public void testResolvePostingFormatsFromMappingLuceneBloom() throws Exception {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\")\n- .startObject(\"field1\").field(\"type\", \"string\").field(\"postings_format\", \"bloom_default\").endObject()\n- .startObject(\"field2\").field(\"type\", \"string\").field(\"postings_format\", \"bloom_pulsing\").endObject()\n- .startObject(\"field3\").field(\"type\", \"string\").field(\"postings_format\", \"my_format1\").endObject()\n- .endObject()\n- .endObject().endObject().string();\n-\n- Settings indexSettings = ImmutableSettings.settingsBuilder()\n- .put(\"index.codec.postings_format.my_format1.type\", \"bloom_filter_lucene\")\n- 
.put(\"index.codec.postings_format.my_format1.desired_max_saturation\", 0.2f)\n- .put(\"index.codec.postings_format.my_format1.saturation_limit\", 0.8f)\n- .put(\"index.codec.postings_format.my_format1.delegate\", \"delegate1\")\n- .put(\"index.codec.postings_format.delegate1.type\", \"direct\")\n- .put(\"index.codec.postings_format.delegate1.min_skip_count\", 16)\n- .put(\"index.codec.postings_format.delegate1.low_freq_cutoff\", 64)\n- .build();\n- CodecService codecService = createCodecService(indexSettings);\n- DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider(), instanceOf(PreBuiltPostingsFormatProvider.class));\n- if (PostingFormats.luceneBloomFilter) {\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider().get(), instanceOf(BloomFilteringPostingsFormat.class));\n- } else {\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider().get(), instanceOf(BloomFilterPostingsFormat.class));\n- }\n-\n- assertThat(documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider(), instanceOf(PreBuiltPostingsFormatProvider.class));\n- if (PostingFormats.luceneBloomFilter) {\n- assertThat(documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider().get(), instanceOf(BloomFilteringPostingsFormat.class));\n- } else {\n- assertThat(documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider().get(), instanceOf(BloomFilterPostingsFormat.class));\n- }\n-\n- assertThat(documentMapper.mappers().name(\"field3\").mapper().postingsFormatProvider(), instanceOf(BloomFilterLucenePostingsFormatProvider.class));\n- BloomFilterLucenePostingsFormatProvider provider = (BloomFilterLucenePostingsFormatProvider) documentMapper.mappers().name(\"field3\").mapper().postingsFormatProvider();\n- assertThat(provider.desiredMaxSaturation(), equalTo(0.2f));\n- assertThat(provider.saturationLimit(), equalTo(0.8f));\n- assertThat(provider.delegate(), instanceOf(DirectPostingsFormatProvider.class));\n- DirectPostingsFormatProvider delegate = (DirectPostingsFormatProvider) provider.delegate();\n- assertThat(delegate.minSkipCount(), equalTo(16));\n- assertThat(delegate.lowFreqCutoff(), equalTo(64));\n- }\n-\n @Test\n public void testChangeUidPostingsFormat() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"_uid\").field(\"postings_format\", \"memory\").endObject()\n+ .startObject(\"_uid\").field(\"postings_format\", \"Lucene41\").endObject()\n .endObject().endObject().string();\n \n CodecService codecService = createCodecService();\n DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n assertThat(documentMapper.rootMapper(UidFieldMapper.class).postingsFormatProvider(), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(documentMapper.rootMapper(UidFieldMapper.class).postingsFormatProvider().get(), instanceOf(MemoryPostingsFormat.class));\n- }\n-\n- @Test\n- public void testChangeUidDocValuesFormat() throws IOException {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"_uid\").startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject().field(\"doc_values_format\", \"disk\").endObject()\n- .endObject().endObject().string();\n-\n- CodecService codecService = createCodecService();\n- 
DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n- assertThat(documentMapper.rootMapper(UidFieldMapper.class).hasDocValues(), equalTo(true));\n- assertThat(documentMapper.rootMapper(UidFieldMapper.class).docValuesFormatProvider(), instanceOf(PreBuiltDocValuesFormatProvider.class));\n- assertThat(documentMapper.rootMapper(UidFieldMapper.class).docValuesFormatProvider().get(), instanceOf(Lucene49DocValuesFormat.class));\n- }\n-\n- @Test\n- public void testChangeIdDocValuesFormat() throws IOException {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"_id\").startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject().field(\"doc_values_format\", \"disk\").endObject()\n- .endObject().endObject().string();\n-\n- CodecService codecService = createCodecService();\n- DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n- assertThat(documentMapper.rootMapper(IdFieldMapper.class).hasDocValues(), equalTo(true));\n- assertThat(documentMapper.rootMapper(IdFieldMapper.class).docValuesFormatProvider(), instanceOf(PreBuiltDocValuesFormatProvider.class));\n- assertThat(documentMapper.rootMapper(IdFieldMapper.class).docValuesFormatProvider().get(), instanceOf(Lucene49DocValuesFormat.class));\n+ assertThat(documentMapper.rootMapper(UidFieldMapper.class).postingsFormatProvider().get(), instanceOf(Lucene41PostingsFormat.class));\n }\n \n @Test\n@@ -341,50 +162,10 @@ public void testResolveDocValuesFormatsFromMapping_default() throws Exception {\n assertThat(documentMapper.mappers().name(\"field2\").mapper().docValuesFormatProvider(), instanceOf(DefaultDocValuesFormatProvider.class));\n }\n \n- @Test\n- public void testResolveDocValuesFormatsFromMapping_memory() throws Exception {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\")\n- .startObject(\"field1\").field(\"type\", \"integer\").field(\"doc_values_format\", \"memory\").endObject()\n- .startObject(\"field2\").field(\"type\", \"double\").field(\"doc_values_format\", \"my_format1\").endObject()\n- .endObject()\n- .endObject().endObject().string();\n-\n- Settings indexSettings = ImmutableSettings.settingsBuilder()\n- .put(\"index.codec.doc_values_format.my_format1.type\", \"memory\")\n- .build();\n- CodecService codecService = createCodecService(indexSettings);\n- DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().docValuesFormatProvider(), instanceOf(PreBuiltDocValuesFormatProvider.class));\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().docValuesFormatProvider().get(), instanceOf(MemoryDocValuesFormat.class));\n-\n- assertThat(documentMapper.mappers().name(\"field2\").mapper().docValuesFormatProvider(), instanceOf(MemoryDocValuesFormatProvider.class));\n- }\n-\n- @Test\n- public void testResolveDocValuesFormatsFromMapping_disk() throws Exception {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\")\n- .startObject(\"field1\").field(\"type\", \"integer\").field(\"doc_values_format\", \"disk\").endObject()\n- .startObject(\"field2\").field(\"type\", \"double\").field(\"doc_values_format\", \"my_format1\").endObject()\n- .endObject()\n- .endObject().endObject().string();\n-\n- Settings indexSettings = 
ImmutableSettings.settingsBuilder()\n- .put(\"index.codec.doc_values_format.my_format1.type\", \"disk\")\n- .build();\n- CodecService codecService = createCodecService(indexSettings);\n- DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().docValuesFormatProvider(), instanceOf(PreBuiltDocValuesFormatProvider.class));\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().docValuesFormatProvider().get(), instanceOf(Lucene49DocValuesFormat.class));\n-\n- assertThat(documentMapper.mappers().name(\"field2\").mapper().docValuesFormatProvider(), instanceOf(DiskDocValuesFormatProvider.class));\n- }\n-\n @Test\n public void testChangeVersionFormat() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"_version\").field(\"doc_values_format\", \"disk\").endObject()\n+ .startObject(\"_version\").field(\"doc_values_format\", \"Lucene49\").endObject()\n .endObject().endObject().string();\n \n CodecService codecService = createCodecService();", "filename": "src/test/java/org/elasticsearch/index/codec/CodecTests.java", "status": "modified" }, { "diff": "@@ -129,7 +129,7 @@ public void testNotChangeSearchAnalyzer() throws Exception {\n .startObject(\"properties\").startObject(\"field\").field(\"type\", \"string\").field(\"search_analyzer\", \"whitespace\").endObject().endObject()\n .endObject().endObject().string();\n String mapping2 = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"properties\").startObject(\"field\").field(\"type\", \"string\").field(\"postings_format\", \"direct\").endObject().endObject()\n+ .startObject(\"properties\").startObject(\"field\").field(\"type\", \"string\").field(\"postings_format\", \"Lucene41\").endObject().endObject()\n .endObject().endObject().string();\n \n DocumentMapper existing = parser.parse(mapping1);\n@@ -140,7 +140,7 @@ public void testNotChangeSearchAnalyzer() throws Exception {\n \n assertThat(mergeResult.hasConflicts(), equalTo(false));\n assertThat(((NamedAnalyzer) existing.mappers().name(\"field\").mapper().searchAnalyzer()).name(), equalTo(\"whitespace\"));\n- assertThat((existing.mappers().name(\"field\").mapper().postingsFormatProvider()).name(), equalTo(\"direct\"));\n+ assertThat((existing.mappers().name(\"field\").mapper().postingsFormatProvider()).name(), equalTo(\"Lucene41\"));\n }\n \n }", "filename": "src/test/java/org/elasticsearch/index/mapper/merge/TestMergeMapperTests.java", "status": "modified" } ] }
{ "body": "Hi,\n\nWe are running an elasticsearch 1.1.1 6 node cluster with 256GB of ram, and using 96GB JVM heap sizes. I've noticed that when I set the filter cache size to 32GB or over with this command:\n\n```\ncurl -XPUT \"http://localhost:9200/_cluster/settings\" -d'\n{\n \"transient\" : {\n \"indices.cache.filter.size\" : \"50%\"\n }\n}'\n```\n\nThe field cache size keeps growing above and beyond the indicated limit. The relevant node stats show that the filter cache size is about 69GB in size, which is over the configured limit of 48GB\n\n```\n\"filter_cache\" : {\n \"memory_size_in_bytes\" : 74550217274,\n \"evictions\" : 8665179\n},\n```\n\nI've enable debug logging on the node itself and it looks like the cache itself is getting created with the correct values:\n\n```\n[2014-05-21 00:31:57,215][DEBUG][indices.cache.filter ] [ess02-006] using [node] weighted filter cache with size [50%], actual_size [47.9gb], expire [null], clean_interval [1m]\n```\n\nWhats strange is that when I set the limit to 31.9GB, the limit is enforced, which leads me to believe there is some sort of overflow going on.\n\nThanks,\nDaniel\n", "comments": [ { "body": "Hi,\nI dug a little deeper into the caching logic, and I think I have found the root cause. The class `IndicesFilterCache` sets `concurrencyLevel` to a hardcoded 16:\n\n``` java\nprivate void buildCache() {\n CacheBuilder<WeightedFilterCache.FilterCacheKey, DocIdSet> cacheBuilder = CacheBuilder.newBuilder()\n .removalListener(this)\n .maximumWeight(sizeInBytes).weigher(new WeightedFilterCache.FilterCacheValueWeigher());\n\n // defaults to 4, but this is a busy map for all indices, increase it a bit\n cacheBuilder.concurrencyLevel(16);\n\n if (expire != null) {\n cacheBuilder.expireAfterAccess(expire.millis(), TimeUnit.MILLISECONDS);\n }\n\n cache = cacheBuilder.build();\n}\n```\n\nhttps://github.com/elasticsearch/elasticsearch/blob/9ed34b5a9e9769b1264bf04d9b9a674794515bc6/src/main/java/org/elasticsearch/indices/cache/filter/IndicesFilterCache.java#L116\n\nIn the Guava libraries, the eviction code is as follows:\n\n``` java\nvoid evictEntries() {\n if (!map.evictsBySize()) {\n return;\n }\n\n drainRecencyQueue();\n while (totalWeight > maxSegmentWeight) {\n ReferenceEntry<K, V> e = getNextEvictable();\n if (!removeEntry(e, e.getHash(), RemovalCause.SIZE)) {\n throw new AssertionError();\n }\n }\n}\n```\n\nhttps://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/cache/LocalCache.java#2659\n\nSince `totalWeight` is an `int` and `maxSegmentWeight` is a `long` set to `maxWeight / concurrencyLevel`, when `maxWeight` is 32GB or above, then the value of `maxSegmentWeight` will be set to above the maximum value of `int` and the check \n\n``` java\nwhile (totalWeight > maxSegmentWeight) {\n```\n\nwill always fail.\n", "created_at": "2014-05-21T18:54:46Z" }, { "body": "Wow, good catch! I think it would make sense to file a bug to Guava?\n", "created_at": "2014-05-22T00:22:51Z" }, { "body": "> Wow, good catch! 
I think it would make sense to file a bug to Guava?\n\nIndeed!\n\nI'd file that with Guava but also clamp the size of the cache in Elasticsearch to 32GB - 1 for the time being.\n\nAs an aside I imagine 96GB heaps cause super long pause times on hot spot.\n", "created_at": "2014-05-22T12:21:02Z" }, { "body": "+1\n", "created_at": "2014-05-22T12:21:57Z" }, { "body": "I've got the code open and have a few free moments so I can work on it if no one else wants it.\n", "created_at": "2014-05-22T12:22:53Z" }, { "body": "That works for me, feel free to ping me when it's ready and you want a review.\n", "created_at": "2014-05-22T12:23:58Z" }, { "body": "> Wow, good catch! I think it would make sense to file a bug to Guava?\n\nHuge ++!. @danp60 when you file the bug in guava, can you link back to it here?\n", "created_at": "2014-05-22T12:26:44Z" }, { "body": "I imagine you've already realized it but the work around is to force the cache size under 32GB.\n", "created_at": "2014-05-22T12:30:33Z" }, { "body": "Indeed. I think that's not too bad a workaround though since I would expect such a large filter cache to be quite wasteful compared to leaving the memory to the operating system so that it can do a better job with the filesystem cache.\n", "created_at": "2014-05-22T12:32:29Z" }, { "body": "@kimchy I've filed the guava bug here: https://code.google.com/p/guava-libraries/issues/detail?id=1761&colspec=ID%20Type%20Status%20Package%20Summary\n", "created_at": "2014-05-22T17:52:03Z" }, { "body": "@danp60 Thanks!\n", "created_at": "2014-05-22T18:30:13Z" }, { "body": "The bug has been [fixed upstream](https://code.google.com/p/guava-libraries/issues/detail?id=1761).\n", "created_at": "2014-05-28T11:01:13Z" } ], "number": 6268, "title": "Internal: Filter cache size limit not honored for 32GB or over" }
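The arithmetic behind the 32GB threshold described in the comments above can be checked in isolation. The following is a standalone sketch (not Elasticsearch or Guava code; the class name is made up); the only inputs are the hardcoded concurrency level of 16 and the per-segment budget of `maxWeight / concurrencyLevel` noted in the analysis above:

```java
public class SegmentWeightOverflow {
    public static void main(String[] args) {
        long maxWeight = 32L * 1024 * 1024 * 1024;            // 32GB cache budget: 34,359,738,368 bytes
        int concurrencyLevel = 16;                             // hardcoded in IndicesFilterCache
        long maxSegmentWeight = maxWeight / concurrencyLevel;  // 2,147,483,648 per segment

        // Guava 17.0 tracked each segment's totalWeight as an int, which is capped at
        // Integer.MAX_VALUE (2,147,483,647). Once the per-segment budget exceeds that cap,
        // the eviction condition `totalWeight > maxSegmentWeight` can never become true,
        // so size-based eviction silently stops.
        System.out.println(maxSegmentWeight > Integer.MAX_VALUE); // prints: true
    }
}
```

Any limit below 32GB keeps the per-segment budget inside the int range, which matches the observation above that a 31.9GB limit was still enforced.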
{ "body": "17.0 and earlier versions were affected by the following bug\nhttps://code.google.com/p/guava-libraries/issues/detail?id=1761\nwhich caused caches that are configured with weights that are greater than\n32GB to actually be unbounded. This is now fixed.\n\nRelates to #6268\n", "number": 7593, "review_comments": [], "title": "Upgrade Guava to 18.0." }
{ "commits": [ { "message": "Internal: Upgrade Guava to 18.0.\n\n17.0 and earlier versions were affected by the following bug\nhttps://code.google.com/p/guava-libraries/issues/detail?id=1761\nwhich caused caches that are configured with weights that are greater than\n32GB to actually be unbounded. This is now fixed.\n\nRelates to #6268" } ], "files": [ { "diff": "@@ -191,7 +191,7 @@\n <dependency>\n <groupId>com.google.guava</groupId>\n <artifactId>guava</artifactId>\n- <version>17.0</version>\n+ <version>18.0</version>\n <scope>compile</scope>\n </dependency>\n ", "filename": "pom.xml", "status": "modified" }, { "diff": "@@ -34,14 +34,6 @@\n *\n */\n public class ByteSizeValue implements Serializable, Streamable {\n- /**\n- * Largest size possible for Guava caches to prevent overflow. Guava's\n- * caches use integers to track weight per segment and we always 16 segments\n- * so caches of 32GB would always overflow that integer and they'd never be\n- * evicted by size. We set this to 31.9GB leaving 100MB of headroom to\n- * prevent overflow.\n- */\n- public static final ByteSizeValue MAX_GUAVA_CACHE_SIZE = new ByteSizeValue(32 * ByteSizeUnit.C3 - 100 * ByteSizeUnit.C2);\n \n private long size;\n ", "filename": "src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java", "status": "modified" }, { "diff": "@@ -119,16 +119,7 @@ private void buildCache() {\n }\n \n private void computeSizeInBytes() {\n- long sizeInBytes = MemorySizeValue.parseBytesSizeValueOrHeapRatio(size).bytes();\n- if (sizeInBytes > ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes()) {\n- logger.warn(\"reducing requested filter cache size of [{}] to the maximum allowed size of [{}]\", new ByteSizeValue(sizeInBytes),\n- ByteSizeValue.MAX_GUAVA_CACHE_SIZE);\n- sizeInBytes = ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes();\n- // Even though it feels wrong for size and sizeInBytes to get out of\n- // sync we don't update size here because it might cause the cache\n- // to be rebuilt every time new settings are applied.\n- }\n- this.sizeInBytes = sizeInBytes;\n+ this.sizeInBytes = MemorySizeValue.parseBytesSizeValueOrHeapRatio(size).bytes();\n }\n \n public void addReaderKeyToClean(Object readerKey) {", "filename": "src/main/java/org/elasticsearch/indices/cache/filter/IndicesFilterCache.java", "status": "modified" }, { "diff": "@@ -38,7 +38,6 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.MemorySizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n@@ -118,13 +117,6 @@ public IndicesQueryCache(Settings settings, ClusterService clusterService, Threa\n \n private void buildCache() {\n long sizeInBytes = MemorySizeValue.parseBytesSizeValueOrHeapRatio(size).bytes();\n- if (sizeInBytes > ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes()) {\n- logger.warn(\"reducing requested query cache size of [{}] to the maximum allowed size of [{}]\", new ByteSizeValue(sizeInBytes), ByteSizeValue.MAX_GUAVA_CACHE_SIZE);\n- sizeInBytes = ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes();\n- // Even though it feels wrong for size and sizeInBytes to get out of\n- // sync we don't update size here because it might cause the cache\n- // to be rebuilt every time new settings are applied.\n- }\n \n CacheBuilder<Key, BytesReference> cacheBuilder = CacheBuilder.newBuilder()\n 
.maximumWeight(sizeInBytes).weigher(new QueryCacheWeigher()).removalListener(this);", "filename": "src/main/java/org/elasticsearch/indices/cache/query/IndicesQueryCache.java", "status": "modified" }, { "diff": "@@ -65,14 +65,8 @@ public IndicesFieldDataCache(Settings settings, IndicesFieldDataCacheListener in\n super(settings);\n this.threadPool = threadPool;\n this.indicesFieldDataCacheListener = indicesFieldDataCacheListener;\n- String size = componentSettings.get(\"size\", \"-1\");\n- long sizeInBytes = componentSettings.getAsMemory(\"size\", \"-1\").bytes();\n- if (sizeInBytes > ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes()) {\n- logger.warn(\"reducing requested field data cache size of [{}] to the maximum allowed size of [{}]\", new ByteSizeValue(sizeInBytes),\n- ByteSizeValue.MAX_GUAVA_CACHE_SIZE);\n- sizeInBytes = ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes();\n- size = ByteSizeValue.MAX_GUAVA_CACHE_SIZE.toString();\n- }\n+ final String size = componentSettings.get(\"size\", \"-1\");\n+ final long sizeInBytes = componentSettings.getAsMemory(\"size\", \"-1\").bytes();\n final TimeValue expire = componentSettings.getAsTime(\"expire\", null);\n CacheBuilder<Key, Accountable> cacheBuilder = CacheBuilder.newBuilder()\n .removalListener(this);", "filename": "src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java", "status": "modified" } ] }
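A quick way to see the upgraded behaviour outside Elasticsearch is to build a Guava cache whose weight budget crosses the old 32GB ceiling and check whether weight-based eviction still fires. This is a hypothetical standalone check, not part of the PR; the class name and per-entry weights are invented for illustration:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class BigWeightEvictionCheck {
    public static void main(String[] args) {
        long maxWeight = 48L * 1024 * 1024 * 1024; // 48GB budget, above the old 32GB ceiling
        Cache<Integer, Integer> cache = CacheBuilder.newBuilder()
                .concurrencyLevel(16)              // same value IndicesFilterCache hardcodes
                .maximumWeight(maxWeight)
                .weigher(new Weigher<Integer, Integer>() {
                    @Override
                    public int weigh(Integer key, Integer value) {
                        return 1 << 30;            // pretend every entry weighs 1GB
                    }
                })
                .build();

        for (int i = 0; i < 100; i++) {            // nominally ~100GB of cached weight
            cache.put(i, i);
        }

        // With Guava 18.0 on the classpath this should print a value near 48 (the budget divided
        // by the per-entry weight); with 17.0 the broken per-segment accounting left all 100 entries.
        System.out.println(cache.size());
    }
}
```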
{ "body": "The [get field mapping API](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-get-field-mapping.html#indices-get-field-mapping) \n\nreturns all the [special fields](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-fields.html) except for the [_analyzer field](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-analyzer-field.html) and [_boost](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-boost-field.html) fields.\n\nGiven the following mapping\n\n```\nPUT /nest_test_data-1340/specialdto/_mapping\n{\n \"specialdto\": {\n \"_id\": {\n \"path\": \"myOtherId\",\n \"index\": \"not_analyzed\",\n \"store\": false\n },\n \"_source\": {\n \"enabled\": false,\n \"compress\": true,\n \"compress_threshold\": \"200b\",\n \"includes\": [\n \"path2.*\"\n ],\n \"excludes\": [\n \"path1.*\"\n ]\n },\n \"_type\": {\n \"index\": \"analyzed\",\n \"store\": true\n },\n \"_all\": {\n \"enabled\": true,\n \"store_term_vector_positions\": true,\n \"index_analyzer\": \"default\",\n \"search_analyzer\": \"default\"\n },\n \"_analyzer\": {\n \"index\": \"yes\",\n \"path\": \"name\"\n },\n \"_boost\": {\n \"name\": \"boost\",\n \"null_value\": 1.0\n },\n \"_parent\": {\n \"type\": \"person\"\n },\n \"_routing\": {\n \"required\": true,\n \"path\": \"name\"\n },\n \"_index\": {\n \"enabled\": false,\n \"store\": true\n },\n \"_size\": {\n \"enabled\": false,\n \"store\": true\n },\n \"_timestamp\": {\n \"enabled\": true,\n \"path\": \"timestamp\",\n \"format\": \"yyyy\"\n },\n \"_ttl\": {\n \"enabled\": false,\n \"default\": \"1d\"\n }\n }\n}\n```\n\nDoing a GET for all the special fields:\n\n```\nGET http://localhost:9200/nest_test_data-1340/_mapping/specialdto/field/_%2A\n```\n\nreturns:\n\n```\n{\n \"nest_test_data-1340\" : {\n \"mappings\" : {\n \"specialdto\" : {\n \"_type\" : {\n \"full_name\" : \"_type\",\n \"mapping\":{\"_type\":{\"store\":true}}\n },\n \"_version\" : {\n \"full_name\" : \"_version\",\n \"mapping\":{}\n },\n \"_id\" : {\n \"full_name\" : \"_id\",\n \"mapping\":{\"_id\":{\"index\":\"not_analyzed\",\"path\":\"myOtherId\"}}\n },\n \"_source\" : {\n \"full_name\" : \"_source\",\n \"mapping\":{\"_source\":{\"enabled\":false,\"compress\":true,\"compress_threshold\":\"200b\",\"includes\":[\"path2.*\"],\"excludes\":[\"path1.*\"]}}\n },\n \"_routing\" : {\n \"full_name\" : \"_routing\",\n \"mapping\":{\"_routing\":{\"required\":true,\"path\":\"name\"}}\n },\n \"_timestamp\" : {\n \"full_name\" : \"_timestamp\",\n \"mapping\":{\"_timestamp\":{\"enabled\":true,\"path\":\"timestamp\",\"format\":\"yyyy\"}}\n },\n \"_index\" : {\n \"full_name\" : \"_index\",\n \"mapping\":{\"_index\":{\"enabled\":false}}\n },\n \"_ttl\" : {\n \"full_name\" : \"_ttl\",\n \"mapping\":{\"_ttl\":{\"enabled\":false}}\n },\n \"_size\" : {\n \"full_name\" : \"_size\",\n \"mapping\":{\"_size\":{\"enabled\":false}}\n },\n \"_uid\" : {\n \"full_name\" : \"_uid\",\n \"mapping\":{}\n },\n \"_all\" : {\n \"full_name\" : \"_all\",\n \"mapping\":{\"_all\":{\"store_term_vectors\":true,\"store_term_vector_positions\":true,\"analyzer\":\"default\"}}\n },\n \"_parent\" : {\n \"full_name\" : \"_parent\",\n \"mapping\":{\"_parent\":{\"type\":\"person\"}}\n }\n }\n }\n }\n}\n```\n\nwithout including `_analyzer` and `_boost` in the response.\n", "comments": [ { "body": "There is two issues:\n1. 
`_boost` field mapping can be retrieved as `\"boost\"`, not as `\"_boost\"` with a field mapping request because it was given the `\"name\": \"boost\"`. I can change that but I am unsure what is the expected behavior here.\n2. `_analyzer` is currently not a field mapper at all and I think the documentation is lying: `\"index\": \"no\"` has no effect, the field will always use default configuration. \n\nIn addition for consistency I think `_analyzer` should behave just like `_boost`, that is: Either the `\"path\"` in `_analyzer` should also be called `\"name\"` like in `_boost` or the other way round. Depending on what the decision is for 1. `_analyzer` should also behave that way.\n\nSo: Do we want the `_boost` to be the name for get field mapping requests or the name defined in `\"name\"`?\n", "created_at": "2014-09-01T15:00:12Z" }, { "body": "I would expect the index boost information to be fixated using the `_boost` special name, the `name: boost` part is only metadata off the special `_boost` mapping.\n\nIn other words I map it use it the special `_boost` keyword I would expect to be able to fetch the information using `_boost` as well.\nhttp://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-boost-field.html\n\nthis is also the behaviour for al the other `_specialfields` mappings.\n\nIn similar fashion `_analyzer` is a special field mapper if I'm not mistaken:\nhttp://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-analyzer-field.html\n\nIt allows a document field to supply the default index time analyzer for fields dynamically during indexing. \n\nSince `_boost` is deprecated I would keep `path` for `_analyzer` and leave `name` for `_boost` even if `path` is the more descriptive property name.\n", "created_at": "2014-09-01T15:19:41Z" }, { "body": "OK, I'll make `_analyzer` a proper field mapper, change the naming for get field mapping and also leave the `path` for `_analyzer` and `name` for `_boost`.\n", "created_at": "2014-09-01T15:25:16Z" }, { "body": "Closing this because `_analyzer` will be removed in 2.0. \n", "created_at": "2015-02-23T15:33:33Z" } ], "number": 7237, "title": "Mapping: Get field mapping does not return _analyzer and _boost fields" }
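For reference, a sketch of the retrieval behaviour the fix below targets, reusing the index, type, and custom names from the report above (`boost` as the `_boost` name, `name` as the `_analyzer` path); once the change is in, both addressing styles are expected to return the two meta-field mappings:

```
GET /nest_test_data-1340/_mapping/specialdto/field/_boost,_analyzer

GET /nest_test_data-1340/_mapping/specialdto/field/boost,name
```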
{ "body": "...so\n\n_boost and _analyzer mapping can now be retrieved by their default name\nor by given name (\"path\" for _analyzer, \"name\" for _boost)\nBoth still appear with their default name when just retrieving the mapping.\n(see SimpleGetFieldMappingTests#testGet_boostAnd_analyzer and\n SimpleGetMappingTests#testGet_boostAnd_analyzer)\n\nThe index_name handling is removed from _boost. This never worked anyway,\nsee this test: https://github.com/brwe/elasticsearch/commit/36450043640f49f959d953dfd546f33606cb953a\n\nChange in behavior:\n\n_analyzer was never a field mapper. When defining an analyzer\nin a document, the field (_analyzer or custom name) was indexed\nwith default string field properties. These could be overwritten by\ndefining an explicit mapping for this field, for example:\n\n```\nPUT testidx/doc/_mapping\n{\n \"_analyzer\": {\n \"path\": \"custom_analyzer\"\n },\n \"properties\": {\n \"custom_analyzer\": {\n \"type\": \"string\",\n \"store\": true\n }\n }\n}\n```\n\nNow, this explicit mapping will be ignored completely, instead\none can only set the \"index\" option in the definition of _analyzer\nEvery other option will be ignored.\n\nReason for this change:\nThe documentation says\n\n\"By default, the _analyzer field is indexed, it can be disabled by settings index to no in the mapping.\"\n\nThis was not true - the setting was ignored. There was a test\nfor the explicit definition of the mapping (AnalyzerMapperTests#testAnalyzerMappingExplicit)\nbut this functionallity was never documented so I assume it is not in use.\n\ncloses #7237\n\nThings that worry me:\n\nI made it work, but am unsure if this is too hacky. I just made use of the fact that four different names are used for mappers (name, indexName, indexNameClean and full name) and set the name at the fitting place. However, there is plans for deprecating indexName (#6677) so I am unsure how long this solution will have any worth. \n\nIn addition the overwriting of the properties mapping relies on the fact that the order in which mappings are parsed is never changed. \n\nAlso, I wonder if the change in behavior for _analyzer qualifies as \"breaking change\".\n", "number": 7589, "review_comments": [], "title": "Mapping: Return `_boost` and `_analyzer` in the GET field mapping API" }
{ "commits": [ { "message": "get field mapping: return _boost and _analyzer by their default names also\n\n_boost and _analyzer mapping can now be retrieved by their default name\nor by given name (\"path\" for _analyzer, \"name\" for _boost)\nBoth still appear with their default name when just retrieving the mapping.\n(see SimpleGetFieldMappingTests#testGet_boostAnd_analyzer and\n SimpleGetMappingTests#testGet_boostAnd_analyzer)\n\nThe index_name handling is removed from _boost. This never worked anyway,\nsee this test: https://github.com/brwe/elasticsearch/commit/36450043640f49f959d953dfd546f33606cb953a\n\nChange in behavior:\n\n_analyzer was never a field mapper. When defining an analyzer\nin a document, the field (_analyzer or custom name) was indexed\nwith default string field properties. These could be overwritten by\ndefining an explicit mapping for this field, for example:\n\n```\nPUT testidx/doc/_mapping\n{\n \"_analyzer\": {\n \"path\": \"custom_analyzer\"\n },\n \"properties\": {\n \"custom_analyzer\": {\n \"type\": \"string\",\n \"store\": true\n }\n }\n}\n```\nNow, this explicit mapping will be ignored completely, instead\none can only set the \"index\" option in the definition of _analyzer\nEvery other option will be ignored.\n\nReason for this change:\nThe documentation says\n\n\"By default, the _analyzer field is indexed, it can be disabled by settings index to no in the mapping.\"\n\nThis was not true - the setting was ignored. There was a test\nfor the explicit definition of the mapping (AnalyzerMapperTests#testAnalyzerMappingExplicit)\nbut this functionallity was never documented so I assume it is not in use.\n\ncloses #7237" }, { "message": "remove unneeded post parse" }, { "message": "cleanup" } ], "files": [ { "diff": "@@ -20,36 +20,52 @@\n package org.elasticsearch.index.mapper.internal;\n \n import org.apache.lucene.analysis.Analyzer;\n-import org.apache.lucene.index.IndexableField;\n+import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.FieldType;\n+import org.apache.lucene.document.StringField;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.lucene.Lucene;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.index.codec.docvaluesformat.DocValuesFormatProvider;\n+import org.elasticsearch.index.codec.postingsformat.PostingsFormatProvider;\n+import org.elasticsearch.index.fielddata.FieldDataType;\n import org.elasticsearch.index.mapper.*;\n+import org.elasticsearch.index.mapper.core.AbstractFieldMapper;\n import org.elasticsearch.search.highlight.HighlighterContext;\n \n import java.io.IOException;\n import java.util.List;\n import java.util.Map;\n \n import static org.elasticsearch.index.mapper.MapperBuilders.analyzer;\n+import static org.elasticsearch.index.mapper.core.TypeParsers.parseField;\n \n /**\n *\n */\n-public class AnalyzerMapper implements Mapper, InternalMapper, RootMapper {\n+public class AnalyzerMapper extends AbstractFieldMapper<String> implements InternalMapper, RootMapper {\n \n public static final String NAME = \"_analyzer\";\n public static final String CONTENT_TYPE = \"_analyzer\";\n \n- public static class Defaults {\n+ public static class Defaults extends AbstractFieldMapper.Defaults {\n public static final String PATH = \"_analyzer\";\n+ public static final FieldType FIELD_TYPE = new FieldType(AbstractFieldMapper.Defaults.FIELD_TYPE);\n }\n \n- public static 
class Builder extends Mapper.Builder<Builder, AnalyzerMapper> {\n+ @Override\n+ public String value(Object value) {\n+ return (String) value;\n+ }\n+\n+ public static class Builder extends AbstractFieldMapper.Builder<Builder, AnalyzerMapper> {\n \n private String field = Defaults.PATH;\n \n public Builder() {\n- super(CONTENT_TYPE);\n+ super(CONTENT_TYPE, new FieldType(Defaults.FIELD_TYPE));\n this.builder = this;\n }\n \n@@ -60,14 +76,15 @@ public Builder field(String field) {\n \n @Override\n public AnalyzerMapper build(BuilderContext context) {\n- return new AnalyzerMapper(field);\n+ return new AnalyzerMapper(field, fieldType);\n }\n }\n \n public static class TypeParser implements Mapper.TypeParser {\n @Override\n public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n AnalyzerMapper.Builder builder = analyzer();\n+ parseField(builder, name, node, parserContext);\n for (Map.Entry<String, Object> entry : node.entrySet()) {\n String fieldName = Strings.toUnderscoreCase(entry.getKey());\n Object fieldNode = entry.getValue();\n@@ -82,16 +99,28 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n private final String path;\n \n public AnalyzerMapper() {\n- this(Defaults.PATH);\n+ this(Defaults.PATH, Defaults.FIELD_TYPE);\n+ }\n+\n+ protected AnalyzerMapper(String path, FieldType fieldType) {\n+ this(path, Defaults.BOOST, fieldType, null, null, null, null);\n }\n \n- public AnalyzerMapper(String path) {\n+ public AnalyzerMapper(String path, float boost, FieldType fieldType, PostingsFormatProvider postingsProvider,\n+ DocValuesFormatProvider docValuesProvider, @Nullable Settings fieldDataSettings, Settings indexSettings) {\n+ super(new Names(path, path, NAME, NAME), boost, fieldType, null, Lucene.KEYWORD_ANALYZER,\n+ Lucene.KEYWORD_ANALYZER, postingsProvider, docValuesProvider, null, null, fieldDataSettings, indexSettings);\n this.path = path.intern();\n }\n \n @Override\n- public String name() {\n- return CONTENT_TYPE;\n+ public FieldType defaultFieldType() {\n+ return Defaults.FIELD_TYPE;\n+ }\n+\n+ @Override\n+ public FieldDataType defaultFieldDataType() {\n+ return new FieldDataType(\"string\");\n }\n \n @Override\n@@ -100,34 +129,15 @@ public void preParse(ParseContext context) throws IOException {\n \n @Override\n public void postParse(ParseContext context) throws IOException {\n- Analyzer analyzer = context.docMapper().mappers().indexAnalyzer();\n- if (path != null) {\n- String value = null;\n- List<IndexableField> fields = context.doc().getFields();\n- for (int i = 0, fieldsSize = fields.size(); i < fieldsSize; i++) {\n- IndexableField field = fields.get(i);\n- if (field.name().equals(path)) {\n- value = field.stringValue();\n- break;\n- }\n- }\n- if (value == null) {\n- value = context.ignoredValue(path);\n- }\n- if (value != null) {\n- analyzer = context.analysisService().analyzer(value);\n- if (analyzer == null) {\n- throw new MapperParsingException(\"No analyzer found for [\" + value + \"] from path [\" + path + \"]\");\n- }\n- analyzer = context.docMapper().mappers().indexAnalyzer(analyzer);\n- }\n+ if (context.analyzer() == null) {\n+ Analyzer analyzer = context.docMapper().mappers().indexAnalyzer();\n+ context.analyzer(analyzer);\n }\n- context.analyzer(analyzer);\n }\n \n @Override\n public boolean includeInObject() {\n- return false;\n+ return true;\n }\n \n public Analyzer setAnalyzer(HighlighterContext context){\n@@ -151,36 +161,50 @@ public Analyzer 
setAnalyzer(HighlighterContext context){\n }\n \n @Override\n- public void parse(ParseContext context) throws IOException {\n+ protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n+ String value = context.parser().textOrNull();\n+ if (fieldType().indexed()) {\n+ fields.add(new StringField(context.parser().currentName(), value, Field.Store.NO));\n+ } else {\n+ context.ignoredValue(context.parser().currentName(), value);\n+ }\n+ Analyzer analyzer = context.docMapper().mappers().indexAnalyzer();\n+ if (value != null) {\n+ analyzer = context.analysisService().analyzer(value);\n+ if (analyzer == null) {\n+ throw new MapperParsingException(\"No analyzer found for [\" + value + \"] from path [\" + path + \"]\");\n+ }\n+ analyzer = context.docMapper().mappers().indexAnalyzer(analyzer);\n+ }\n+ context.analyzer(analyzer);\n }\n \n @Override\n public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappingException {\n }\n \n- @Override\n- public void traverse(FieldMapperListener fieldMapperListener) {\n- }\n-\n- @Override\n- public void traverse(ObjectMapperListener objectMapperListener) {\n- }\n-\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- if (path.equals(Defaults.PATH)) {\n+ boolean includeDefaults = params.paramAsBoolean(\"include_defaults\", false);\n+ if (path.equals(Defaults.PATH) && fieldType.indexed() == Defaults.FIELD_TYPE.indexed() &&\n+ fieldType.stored() == Defaults.FIELD_TYPE.stored() && !includeDefaults) {\n return builder;\n }\n builder.startObject(CONTENT_TYPE);\n- if (!path.equals(Defaults.PATH)) {\n+ if (includeDefaults || !path.equals(Defaults.PATH)) {\n builder.field(\"path\", path);\n }\n+ if (includeDefaults || !(fieldType.indexed() == Defaults.FIELD_TYPE.indexed() &&\n+ fieldType.stored() == Defaults.FIELD_TYPE.stored())) {\n+ builder.field(\"index\", indexTokenizeOptionToString(fieldType.indexed(), fieldType.tokenized()));\n+ }\n builder.endObject();\n return builder;\n }\n \n @Override\n- public void close() {\n-\n+ protected String contentType() {\n+ return CONTENT_TYPE;\n }\n+\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/AnalyzerMapper.java", "status": "modified" }, { "diff": "@@ -89,7 +89,7 @@ public Builder nullValue(float nullValue) {\n \n @Override\n public BoostFieldMapper build(BuilderContext context) {\n- return new BoostFieldMapper(name, buildIndexName(context),\n+ return new BoostFieldMapper(name,\n fieldType.numericPrecisionStep(), boost, fieldType, docValues, nullValue, postingsProvider, docValuesProvider, fieldDataSettings, context.indexSettings());\n }\n }\n@@ -114,17 +114,13 @@ public Mapper.Builder parse(String fieldName, Map<String, Object> node, ParserCo\n private final Float nullValue;\n \n public BoostFieldMapper() {\n- this(Defaults.NAME, Defaults.NAME);\n- }\n-\n- protected BoostFieldMapper(String name, String indexName) {\n- this(name, indexName, Defaults.PRECISION_STEP_32_BIT, Defaults.BOOST, new FieldType(Defaults.FIELD_TYPE), null,\n+ this(Defaults.NAME, Defaults.PRECISION_STEP_32_BIT, Defaults.BOOST, new FieldType(Defaults.FIELD_TYPE), null,\n Defaults.NULL_VALUE, null, null, null, ImmutableSettings.EMPTY);\n }\n \n- protected BoostFieldMapper(String name, String indexName, int precisionStep, float boost, FieldType fieldType, Boolean docValues, Float nullValue,\n+ protected BoostFieldMapper(String name, int precisionStep, float boost, FieldType fieldType, Boolean docValues, Float 
nullValue,\n PostingsFormatProvider postingsProvider, DocValuesFormatProvider docValuesProvider, @Nullable Settings fieldDataSettings, Settings indexSettings) {\n- super(new Names(name, indexName, indexName, name), precisionStep, boost, fieldType, docValues, Defaults.IGNORE_MALFORMED, Defaults.COERCE,\n+ super(new Names(name, name, Defaults.NAME, Defaults.NAME), precisionStep, boost, fieldType, docValues, Defaults.IGNORE_MALFORMED, Defaults.COERCE,\n NumericFloatAnalyzer.buildNamedAnalyzer(precisionStep), NumericFloatAnalyzer.buildNamedAnalyzer(Integer.MAX_VALUE),\n postingsProvider, docValuesProvider, null, null, fieldDataSettings, indexSettings, MultiFields.empty(), null);\n this.nullValue = nullValue;\n@@ -240,24 +236,18 @@ public boolean includeInObject() {\n return true;\n }\n \n- @Override\n- public void parse(ParseContext context) throws IOException {\n- // we override parse since we want to handle cases where it is not indexed and not stored (the default)\n- float value = parseFloatValue(context);\n- if (!Float.isNaN(value)) {\n- context.docBoost(value);\n- }\n- super.parse(context);\n- }\n-\n @Override\n protected void innerParseCreateField(ParseContext context, List<Field> fields) throws IOException {\n final float value = parseFloatValue(context);\n if (Float.isNaN(value)) {\n return;\n }\n+ if (fieldType().indexed() || fieldType().stored()) {\n+ fields.add(new FloatFieldMapper.CustomFloatNumericField(this, value, fieldType));\n+ } else {\n+ context.ignoredValue(context.parser().currentName(), context.parser().textOrNull());\n+ }\n context.docBoost(value);\n- fields.add(new FloatFieldMapper.CustomFloatNumericField(this, value, fieldType));\n }\n \n private float parseFloatValue(ParseContext context) throws IOException {", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/BoostFieldMapper.java", "status": "modified" }, { "diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.engine.VersionConflictEngineException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n@@ -1020,6 +1021,41 @@ public void testUngeneratedFieldsPartOfSourceUnstoredSourceDisabled() throws IOE\n assertGetFieldsAlwaysNull(indexOrAlias(), \"doc\", \"1\", fieldsList);\n }\n \n+ @Test\n+ public void testBoostAnalyzerFieldDefaultPath() throws IOException {\n+ boolean stored = randomBoolean();\n+ indexSingleDocumentWithBoostAndAnalyzer(stored);\n+ String[] fieldsList = {\"_boost\", \"_analyzer\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysWorks(indexOrAlias(), \"doc\", \"1\", fieldsList);\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysWorks(indexOrAlias(), \"doc\", \"1\", fieldsList);\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysWorks(indexOrAlias(), \"doc\", \"1\", fieldsList);\n+ }\n+\n+ void indexSingleDocumentWithBoostAndAnalyzer(boolean stored) throws IOException {\n+ XContentBuilder createIndexSource = jsonBuilder().startObject()\n+ .startObject(\"settings\")\n+ .field(\"index.translog.disable_flush\", true)\n+ .field(\"refresh_interval\", -1)\n+ .endObject()\n+ .startObject(\"mappings\")\n+ .startObject(\"doc\")\n+ .startObject(\"_boost\")\n+ 
.field(\"null_nalue\",1).field(\"store\", stored)\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ assertAcked(prepareCreate(\"test\").addAlias(new Alias(\"alias\")).setSource(createIndexSource));\n+ ensureGreen();\n+ XContentBuilder doc = jsonBuilder().startObject().field(\"_boost\", 5.0).field(\"_analyzer\", \"whitespace\").endObject();\n+ client().prepareIndex(\"test\", \"doc\").setId(\"1\").setSource(doc).get();\n+ }\n+\n @Test\n public void testUngeneratedFieldsPartOfSourceEitherStoredOrSourceEnabled() throws IOException {\n boolean stored = randomBoolean();", "filename": "src/test/java/org/elasticsearch/get/GetActionTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,65 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.analyzer;\n+\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.analysis.FieldNameAnalyzer;\n+import org.elasticsearch.index.analysis.NamedAnalyzer;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.DocumentMapperParser;\n+import org.elasticsearch.index.mapper.ParsedDocument;\n+import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n+import org.junit.Test;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.nullValue;\n+\n+/**\n+ *\n+ */\n+public class AnalyzerMapperIntegrationTests extends ElasticsearchIntegrationTest {\n+\n+ @Test\n+ public void testAnalyzerMappingAppliedToDocs() throws Exception {\n+\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_analyzer\").field(\"path\", \"field_analyzer\").endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"text\").field(\"type\", \"string\").endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+ prepareCreate(\"test\").addMapping(\"type\", mapping).get();\n+ XContentBuilder doc = XContentFactory.jsonBuilder().startObject().field(\"text\", \"foo bar\").field(\"field_analyzer\", \"keyword\");\n+ client().prepareIndex(\"test\", \"type\").setSource(doc).get();\n+ client().admin().indices().prepareRefresh(\"test\").get();\n+ SearchResponse response = client().prepareSearch(\"test\").setQuery(QueryBuilders.termQuery(\"text\", \"foo bar\")).get();\n+ assertThat(response.getHits().totalHits(), equalTo(1l));\n+\n+ response = client().prepareSearch(\"test\").setQuery(QueryBuilders.termQuery(\"field_analyzer\", 
\"keyword\")).get();\n+ assertThat(response.getHits().totalHits(), equalTo(1l));\n+ }\n+\n+\n+}", "filename": "src/test/java/org/elasticsearch/index/mapper/analyzer/AnalyzerMapperIntegrationTests.java", "status": "added" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.mapper.analyzer;\n \n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.analysis.FieldNameAnalyzer;\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n@@ -75,17 +76,28 @@ public void testAnalyzerMapping() throws Exception {\n assertThat(((NamedAnalyzer) analyzer.defaultAnalyzer()).name(), equalTo(\"whitespace\"));\n assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field1\")), nullValue());\n assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field2\")).name(), equalTo(\"simple\"));\n- }\n \n \n+ doc = reparsedMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder().startObject()\n+ .field(\"_analyzer\", \"whitespace\")\n+ .field(\"field1\", \"value1\")\n+ .field(\"field2\", \"value2\")\n+ .endObject().bytes());\n+ analyzer = (FieldNameAnalyzer) doc.analyzer();\n+ // test that _analyzer is ignored because we set the path\n+ assertThat(((NamedAnalyzer) analyzer.defaultAnalyzer()).name(), equalTo(\"default\"));\n+ assertNull(doc.docs().get(0).getField(\"field_analyzer\"));\n+ assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field1\")), nullValue());\n+ assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field2\")).name(), equalTo(\"simple\"));\n+ }\n+\n @Test\n- public void testAnalyzerMappingExplicit() throws Exception {\n+ public void testAnalyzerMappingNotIndexedNorStored() throws Exception {\n DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n \n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"_analyzer\").field(\"path\", \"field_analyzer\").endObject()\n+ .startObject(\"_analyzer\").field(\"path\", \"field_analyzer\").field(\"index\", \"no\").endObject()\n .startObject(\"properties\")\n- .startObject(\"field_analyzer\").field(\"type\", \"string\").endObject()\n .startObject(\"field1\").field(\"type\", \"string\").endObject()\n .startObject(\"field2\").field(\"type\", \"string\").field(\"analyzer\", \"simple\").endObject()\n .endObject()\n@@ -103,6 +115,7 @@ public void testAnalyzerMappingExplicit() throws Exception {\n assertThat(((NamedAnalyzer) analyzer.defaultAnalyzer()).name(), equalTo(\"whitespace\"));\n assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field1\")), nullValue());\n assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field2\")).name(), equalTo(\"simple\"));\n+ assertNull(doc.docs().get(0).getField(\"field_analyzer\"));\n \n // check that it serializes and de-serializes correctly\n \n@@ -118,47 +131,31 @@ public void testAnalyzerMappingExplicit() throws Exception {\n assertThat(((NamedAnalyzer) analyzer.defaultAnalyzer()).name(), equalTo(\"whitespace\"));\n assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field1\")), nullValue());\n assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field2\")).name(), equalTo(\"simple\"));\n+ assertNull(doc.docs().get(0).getField(\"field_analyzer\"));\n }\n \n+ // test that _analyzer settings can not be overwritten when path is separately defined in properties\n @Test\n- public void testAnalyzerMappingNotIndexedNorStored() throws Exception {\n+ public void testPropertiesDefinitionDoesNotOverwriteDefault() throws 
Exception {\n DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n \n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n- .startObject(\"_analyzer\").field(\"path\", \"field_analyzer\").endObject()\n+ .startObject(\"_analyzer\").field(\"path\", \"field_analyzer\").field(\"index\", \"no\").endObject()\n .startObject(\"properties\")\n- .startObject(\"field_analyzer\").field(\"type\", \"string\").field(\"index\", \"no\").field(\"store\", \"no\").endObject()\n- .startObject(\"field1\").field(\"type\", \"string\").endObject()\n- .startObject(\"field2\").field(\"type\", \"string\").field(\"analyzer\", \"simple\").endObject()\n+ .startObject(\"field_analyzer\").field(\"type\", \"string\").field(\"index\", \"analyzed\").field(\"store\", true).endObject()\n .endObject()\n .endObject().endObject().string();\n-\n DocumentMapper documentMapper = parser.parse(mapping);\n \n ParsedDocument doc = documentMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder().startObject()\n .field(\"field_analyzer\", \"whitespace\")\n- .field(\"field1\", \"value1\")\n- .field(\"field2\", \"value2\")\n .endObject().bytes());\n \n FieldNameAnalyzer analyzer = (FieldNameAnalyzer) doc.analyzer();\n- assertThat(((NamedAnalyzer) analyzer.defaultAnalyzer()).name(), equalTo(\"whitespace\"));\n- assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field1\")), nullValue());\n- assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field2\")).name(), equalTo(\"simple\"));\n-\n- // check that it serializes and de-serializes correctly\n-\n- DocumentMapper reparsedMapper = parser.parse(documentMapper.mappingSource().string());\n-\n- doc = reparsedMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder().startObject()\n- .field(\"field_analyzer\", \"whitespace\")\n- .field(\"field1\", \"value1\")\n- .field(\"field2\", \"value2\")\n- .endObject().bytes());\n \n- analyzer = (FieldNameAnalyzer) doc.analyzer();\n assertThat(((NamedAnalyzer) analyzer.defaultAnalyzer()).name(), equalTo(\"whitespace\"));\n- assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field1\")), nullValue());\n- assertThat(((NamedAnalyzer) analyzer.analyzers().get(\"field2\")).name(), equalTo(\"simple\"));\n+ assertNull(doc.docs().get(0).getField(\"field_analyzer\"));\n }\n+\n }\n+", "filename": "src/test/java/org/elasticsearch/index/mapper/analyzer/AnalyzerMapperTests.java", "status": "modified" }, { "diff": "@@ -20,9 +20,14 @@\n package org.elasticsearch.index.mapper.boost;\n \n import org.apache.lucene.index.IndexableField;\n+import org.elasticsearch.common.bytes.ByteBufferBytesReference;\n+import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.compress.CompressedString;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.analysis.FieldNameAnalyzer;\n+import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.internal.BoostFieldMapper;\n import org.elasticsearch.index.service.IndexService;\n@@ -31,6 +36,7 @@\n import org.junit.Test;\n \n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.nullValue;\n \n /**\n */\n@@ -99,4 +105,62 @@ public void testSetValues() throws Exception {\n assertThat(docMapper.boostFieldMapper().fieldType().stored(), equalTo(true));\n 
assertThat(docMapper.boostFieldMapper().fieldType().indexed(), equalTo(true));\n }\n+\n+ @Test\n+ public void testSetValuesForName() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_boost\")\n+ .field(\"store\", \"yes\").field(\"index\", \"no\").field(\"name\", \"custom_name\")\n+ .endObject()\n+ .endObject().endObject().string();\n+ IndexService indexServices = createIndex(\"test\");\n+ DocumentMapper docMapper = indexServices.mapperService().documentMapperParser().parse(\"type\", mapping);\n+ assertThat(docMapper.boostFieldMapper().fieldType().stored(), equalTo(true));\n+ assertThat(docMapper.boostFieldMapper().fieldType().indexed(), equalTo(false));\n+ docMapper.refreshSource();\n+ docMapper = indexServices.mapperService().documentMapperParser().parse(\"type\", docMapper.mappingSource().string());\n+ assertThat(docMapper.boostFieldMapper().fieldType().stored(), equalTo(true));\n+ assertThat(docMapper.boostFieldMapper().fieldType().indexed(), equalTo(false));\n+ ParsedDocument doc = docMapper.parse(\"type\", \"1\", new BytesArray(XContentFactory.jsonBuilder().startObject().field(\"custom_name\", 5).field(\"field\", \"value\").endObject().string()));\n+ assertTrue(doc.docs().get(0).getField(\"custom_name\").fieldType().stored());\n+ assertFalse(doc.docs().get(0).getField(\"custom_name\").fieldType().indexed());\n+ assertThat(doc.docs().get(0).getField(\"field\").boost(), equalTo(5.0f));\n+\n+ // test that _boost is ignored because we set the name\n+ doc = docMapper.parse(\"type\", \"1\", new BytesArray(XContentFactory.jsonBuilder().startObject().field(\"_boost\", 5).field(\"field\", \"value\").endObject().string()));\n+ assertNull(doc.docs().get(0).getField(\"custom_name\"));\n+ assertThat(doc.docs().get(0).getField(\"field\").boost(), equalTo(1.0f));\n+ assertThat(doc.docs().get(0).getField(\"_boost\").numericValue().intValue(), equalTo(5));\n+ }\n+\n+ @Test\n+ public void testBoostMappingNotIndexedNorStored() throws Exception {\n+ DocumentMapperParser parser = createIndex(\"test\").mapperService().documentMapperParser();\n+\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_boost\").field(\"name\", \"custom_boost\").field(\"index\", \"no\").field(\"store\", false).endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"field\").field(\"type\", \"string\").endObject()\n+ .endObject()\n+ .endObject().endObject().string();\n+ DocumentMapper documentMapper = parser.parse(mapping);\n+ ParsedDocument doc = documentMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder().startObject()\n+ .field(\"custom_boost\", 5)\n+ .field(\"field\", \"value\")\n+ .endObject().bytes());\n+\n+ assertThat(doc.docs().get(0).getField(\"field\").boost(), equalTo(5.0f));\n+ assertNull(doc.docs().get(0).getField(\"custom_boost\"));\n+\n+ // check that it serializes and de-serializes correctly\n+ documentMapper.refreshSource();\n+ DocumentMapper reparsedMapper = parser.parse(documentMapper.mappingSource().string());\n+ doc = reparsedMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder().startObject()\n+ .field(\"custom_boost\", 5)\n+ .field(\"field\", \"value\")\n+ .endObject().bytes());\n+\n+ assertThat(doc.docs().get(0).getField(\"field\").boost(), equalTo(5.0f));\n+ assertNull(doc.docs().get(0).getField(\"custom_boost\"));\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/mapper/boost/BoostMappingTests.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 
@@\n package org.elasticsearch.indices.mapping;\n \n import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.hamcrest.Matchers;\n@@ -146,4 +147,37 @@ public void simpleGetFieldMappingsWithDefaults() throws Exception {\n \n \n }\n+\n+ // https://github.com/elasticsearch/elasticsearch/issues/7237\n+ @Test\n+ public void testGet_boostAnd_analyzer() throws IOException {\n+ XContentBuilder mappings = jsonBuilder();\n+ mappings.startObject()\n+ .startObject(\"doc\")\n+ .startObject(\"_analyzer\").field(\"index\", \"analyzed\").field(\"path\", \"name\").endObject()\n+ .startObject(\"_boost\").field(\"name\", \"boost\").field(\"null_value\", \"1.0\").endObject()\n+ .endObject()\n+ .endObject();\n+ prepareCreate(\"index\").addMapping(\"doc\", mappings).get();\n+\n+ ensureYellow();\n+\n+ // set if we can get the field mappings by default name\n+ GetFieldMappingsResponse response = client().admin().indices().prepareGetFieldMappings().addIndices(\"index\").addTypes(\"doc\").setFields(\"_boost\", \"_analyzer\").get();\n+ assertThat(response.mappings().size(), equalTo(1));\n+ assertTrue(response.mappings().get(\"index\").containsKey(\"doc\"));\n+ assertTrue(response.mappings().get(\"index\").get(\"doc\").containsKey(\"_analyzer\"));\n+ assertTrue(response.mappings().get(\"index\").get(\"doc\").containsKey(\"_boost\"));\n+ assertThat(response.mappings().get(\"index\").get(\"doc\").size(), equalTo(2));\n+\n+ //see if we can get the field mappings by given name\n+ response = client().admin().indices().prepareGetFieldMappings().addIndices(\"index\").addTypes(\"doc\").setFields(\"boost\", \"name\").get();\n+ assertThat(response.mappings().size(), equalTo(1));\n+ assertTrue(response.mappings().get(\"index\").containsKey(\"doc\"));\n+ assertTrue(response.mappings().get(\"index\").get(\"doc\").containsKey(\"name\"));\n+ assertTrue(response.mappings().get(\"index\").get(\"doc\").containsKey(\"boost\"));\n+ assertThat(response.mappings().get(\"index\").get(\"doc\").size(), equalTo(2));\n+\n+ }\n+\n }", "filename": "src/test/java/org/elasticsearch/indices/mapping/SimpleGetFieldMappingsTests.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.indices.mapping;\n \n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n+import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse;\n import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -28,6 +29,7 @@\n import org.junit.Test;\n \n import java.io.IOException;\n+import java.util.Map;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.hamcrest.Matchers.equalTo;\n@@ -143,4 +145,29 @@ public void simpleGetMappings() throws Exception {\n assertThat(response.mappings().get(\"indexb\").get(\"Btype\"), notNullValue());\n }\n \n+ // related to https://github.com/elasticsearch/elasticsearch/issues/7237\n+ // test that _boost and _analyzer still appear with default name in mapping, not as given name\n+ @Test\n+ public void testGet_boostAnd_analyzer() throws IOException {\n+ XContentBuilder mappings = jsonBuilder();\n+ mappings.startObject()\n+ .startObject(\"doc\")\n+ 
.startObject(\"_analyzer\").field(\"index\", \"analyzed\").field(\"path\", \"name\").endObject()\n+ .startObject(\"_boost\").field(\"name\", \"boost\").field(\"null_value\", \"1.0\").endObject()\n+ .endObject()\n+ .endObject();\n+ prepareCreate(\"index\").addMapping(\"doc\", mappings).get();\n+\n+ ensureYellow();\n+ GetMappingsResponse response = client().admin().indices().prepareGetMappings().addIndices(\"index\").addTypes(\"doc\").get();\n+ assertThat(response.mappings().size(), equalTo(1));\n+ assertTrue(response.mappings().get(\"index\").containsKey(\"doc\"));\n+ Map<String, Object> mappingAsMap = response.mappings().get(\"index\").get(\"doc\").getSourceAsMap();\n+ assertTrue(mappingAsMap.containsKey(\"_analyzer\"));\n+ assertTrue(mappingAsMap.containsKey(\"_boost\"));\n+ assertTrue(mappingAsMap.containsKey(\"properties\"));\n+ assertThat(mappingAsMap.size(), equalTo(3));\n+\n+ }\n+\n }", "filename": "src/test/java/org/elasticsearch/indices/mapping/SimpleGetMappingsTests.java", "status": "modified" } ] }
{ "body": "There is a bug in the collect method of org.elasticsearch.search.aggregations.metrics.geobounds.GeoBoundsAggregator. When it resizes all the arrays at the top of the method it resizes posLefts twice instead of resizing posRights. This causes and ArrayIndexOutOfBoundsException.\n", "comments": [ { "body": "Should have mentioned that this is discussed in https://groups.google.com/forum/#!topic/elasticsearch/iUMK5GJ3JsQ\n", "created_at": "2014-09-03T11:21:59Z" }, { "body": "@owainb thanks for reporting the bug\n", "created_at": "2014-09-03T14:26:20Z" } ], "number": 7556, "title": "Aggregations: Geo bounds aggregation throwing ArrayIndexOutOfBoundsException on array resize" }
{ "body": "Closes #7556\n", "number": 7565, "review_comments": [], "title": "Fixes resize bug in Geo bounds Aggregator" }
{ "commits": [ { "message": "Aggregations: Fixes resize bug in Geo bounds Aggregator\n\nCloses #7556" } ], "files": [ { "diff": "@@ -108,7 +108,7 @@ public void collect(int docId, long owningBucketOrdinal) throws IOException {\n bottoms.fill(from, bottoms.size(), Double.NEGATIVE_INFINITY);\n posLefts = bigArrays.resize(posLefts, tops.size());\n posLefts.fill(from, posLefts.size(), Double.NEGATIVE_INFINITY);\n- posLefts = bigArrays.resize(posLefts, tops.size());\n+ posRights = bigArrays.resize(posRights, tops.size());\n posRights.fill(from, posRights.size(), Double.NEGATIVE_INFINITY);\n negLefts = bigArrays.resize(negLefts, tops.size());\n negLefts.fill(from, negLefts.size(), Double.NEGATIVE_INFINITY);", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/GeoBoundsAggregator.java", "status": "modified" }, { "diff": "@@ -22,7 +22,12 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.util.BigArray;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms.Bucket;\n import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBounds;\n+import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBoundsAggregator;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n@@ -32,6 +37,7 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.geoBounds;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n@@ -120,6 +126,22 @@ public void setupSuiteScopeCluster() throws Exception {\n .field(\"tag\", \"tag\" + i)\n .endObject()));\n }\n+ assertAcked(prepareCreate(\"high_card_idx\").setSettings(ImmutableSettings.builder().put(\"number_of_shards\", 2))\n+ .addMapping(\"type\", SINGLE_VALUED_FIELD_NAME, \"type=geo_point\", MULTI_VALUED_FIELD_NAME, \"type=geo_point\", NUMBER_FIELD_NAME, \"type=long\", \"tag\", \"type=string,index=not_analyzed\"));\n+\n+\n+ for (int i = 0; i < 2000; i++) {\n+ builders.add(client().prepareIndex(\"high_card_idx\", \"type\").setSource(jsonBuilder()\n+ .startObject()\n+ .array(SINGLE_VALUED_FIELD_NAME, singleValues[i % numUniqueGeoPoints].lon(), singleValues[i % numUniqueGeoPoints].lat())\n+ .startArray(MULTI_VALUED_FIELD_NAME)\n+ .startArray().value(multiValues[i % numUniqueGeoPoints].lon()).value(multiValues[i % numUniqueGeoPoints].lat()).endArray() \n+ .startArray().value(multiValues[(i+1) % numUniqueGeoPoints].lon()).value(multiValues[(i+1) % numUniqueGeoPoints].lat()).endArray()\n+ .endArray()\n+ .field(NUMBER_FIELD_NAME, i)\n+ .field(\"tag\", \"tag\" + i)\n+ .endObject()));\n+ }\n \n indexRandom(true, builders);\n ensureSearchable();\n@@ -293,4 +315,31 @@ public void singleValuedFieldNearDateLineWrapLongitude() throws Exception {\n assertThat(bottomRight.lon(), equalTo(geoValuesBottomRight.lon()));\n }\n \n+ /**\n+ * This test forces the {@link GeoBoundsAggregator} to resize the {@link 
BigArray}s it uses to ensure they are resized correctly\n+ */\n+ @Test\n+ public void singleValuedFieldAsSubAggToHighCardTermsAgg() {\n+ SearchResponse response = client().prepareSearch(\"high_card_idx\")\n+ .addAggregation(terms(\"terms\").field(NUMBER_FIELD_NAME).subAggregation(geoBounds(\"geoBounds\").field(SINGLE_VALUED_FIELD_NAME)\n+ .wrapLongitude(false)))\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Terms terms = response.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ assertThat(terms.getName(), equalTo(\"terms\"));\n+ List<Bucket> buckets = terms.getBuckets();\n+ assertThat(buckets.size(), equalTo(10));\n+ for (int i = 0; i < 10; i++) {\n+ Bucket bucket = buckets.get(i);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ GeoBounds geoBounds = bucket.getAggregations().get(\"geoBounds\");\n+ assertThat(geoBounds, notNullValue());\n+ assertThat(geoBounds.getName(), equalTo(\"geoBounds\"));\n+ }\n+ }\n+\n }", "filename": "src/test/java/org/elasticsearch/search/aggregations/metrics/GeoBoundsTests.java", "status": "modified" } ] }
{ "body": "<p>\nUsing the following hierarchical data structure\n\n</p>\n\n\n<ul>\n <li>author</li>\n <li><ul>\n <li>book</li>\n <li>\n <ul>\n <li>review</li>\n </ul>\n </li>\n </ul>\n </li>\n</ul>\n\n<p> \nI am trying to find number of books by genre given book.publisher and book.review.rating,\nbut getting incorrect value count aggregate result.\n</p>\n\n<p>\n\nThis is working correctly if I use only 2 levels (book and review), but when I add author level also\nthen it is failing.\n</p>\n\n\nMapping, data and query used below:\n\n<pre>\n<code>\n\ncurl -XDELETE localhost:9200/authors\n\ncurl -XPUT localhost:9200/authors\n\ncurl -XPUT localhost:9200/authors/author/_mapping\n'{\n \"author\": {\n \"properties\": {\n \"author_id\": {\n \"type\": \"long\"\n },\n \"name\": {\n \"type\": \"string\"\n },\n \"book\": {\n \"type\": \"nested\",\n \"properties\": {\n \"book_id\": {\n \"type\": \"long\"\n },\n \"name\": {\n \"type\": \"string\"\n },\n \"genre\": {\n \"type\": \"string\"\n },\n \"publisher\": {\n \"type\": \"string\"\n },\n \"review\": {\n \"type\": \"nested\",\n \"properties\": {\n \"rating\": {\n \"type\": \"string\"\n },\n \"posted_by\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n }\n }\n }'\n \n curl -XPUT localhost:9200/authors/author/0\n '{\n \"author_id\": \"1\",\n \"name\": \"a1\",\n \"book\": [\n {\n \"book_id\": \"11\",\n \"name\": \"a1-b1\",\n \"genre\": \"g1\",\n \"publisher\": \"p1\",\n \"review\": [\n {\n \"rating\": \"1s\",\n \"posted_by\": \"a\"\n },\n {\n \"rating\": \"2s\",\n \"posted_by\": \"b\"\n },\n {\n \"rating\": \"1s\",\n \"posted_by\": \"a\"\n }\n ]\n },\n {\n \"book_id\": \"12\",\n \"name\": \"a1-b2\",\n \"genre\": \"g1\",\n \"publisher\": \"p1\",\n \"review\": [\n {\n \"rating\": \"1s\",\n \"posted_by\": \"a\"\n },\n {\n \"rating\": \"2s\",\n \"posted_by\": \"b\"\n },\n {\n \"rating\": \"1s\",\n \"posted_by\": \"a\"\n }\n ]\n }\n ]\n}'\n\n\nThe book count (book_count) from the following query should be 2 but instead it is 1. \nThe output at filter by rating is correct, but the value count isn't.\n\n\ncurl -XPOST localhost:9200/authors/_search\n'{\n \"size\": 0,\n \"aggs\": {\n \"nested_book\": {\n \"nested\": {\n \"path\": \"book\"\n },\n \"aggregations\": {\n \"group_by_genre\": {\n \"terms\": {\n \"field\": \"genre\"\n },\n \"aggregations\": {\n \"filter_by_publisher\": {\n \"filter\": {\n \"bool\": {\n \"must\": {\n \"term\": {\n \"book.publisher\": \"p1\"\n }\n }\n }\n },\n \"aggregations\": {\n \"nested_review\": {\n \"nested\": {\n \"path\": \"book.review\"\n },\n \"aggregations\": {\n \"filter_by_rating\": {\n \"filter\": {\n \"bool\": {\n \"must\": {\n \"term\": {\n \"book.review.rating\": \"1s\"\n }\n }\n }\n },\n \"aggregations\": {\n \"reverse_to_book\": {\n \"reverse_nested\": {\n \"path\": \"book\"\n },\n \"aggregations\": {\n \"book_count\": {\n \"value_count\": {\n \"field\": \"book_id\"\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n}'\n\n</code>\n</pre>\n", "comments": [ { "body": "@spotta Can you share the ES version that you're using?\n", "created_at": "2014-08-28T19:57:00Z" }, { "body": "I tried using 1.3.1 & 1.3.2, same results \n", "created_at": "2014-08-28T20:11:21Z" }, { "body": "Thanks for reporting this bug @spotta. The next release will include a fix for this bug.\n", "created_at": "2014-08-29T21:12:49Z" }, { "body": "awesome! thanks for the quick turnaround. 
\n", "created_at": "2014-08-30T13:33:53Z" } ], "number": 7505, "title": "Getting incorrect value count using reverse nested aggregation when using more than 1 nested level" }
{ "body": "PR for #7505\n", "number": 7514, "review_comments": [], "title": "The nested aggregator should iterate over the child doc ids in ascending order." }
{ "commits": [ { "message": "Aggregations: The nested aggregator should iterate over the child doc ids in ascending order.\n\nThe reverse_nested aggregator requires that the emitted doc ids are always in ascending order, which is already enforced on the scorer level,\nbut this also needs to be enforced on the nested aggrgetor level otherwise incorrect counts are a result.\n\nCloses #7505\nCloses #7514" } ], "files": [ { "diff": "@@ -105,10 +105,10 @@ public void collect(int parentDoc, long bucketOrd) throws IOException {\n }\n int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1);\n int numChildren = 0;\n- for (int i = (parentDoc - 1); i > prevParentDoc; i--) {\n- if (childDocs.get(i)) {\n+ for (int childDocId = prevParentDoc + 1; childDocId < parentDoc; childDocId++) {\n+ if (childDocs.get(childDocId)) {\n ++numChildren;\n- collectBucketNoCounts(i, bucketOrd);\n+ collectBucketNoCounts(childDocId, bucketOrd);\n }\n }\n incrementBucketDocCount(bucketOrd, numChildren);", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java", "status": "modified" }, { "diff": "@@ -81,7 +81,7 @@ public void setNextReader(AtomicReaderContext reader) {\n bucketOrdToLastCollectedParentDoc.clear();\n try {\n // In ES if parent is deleted, then also the children are deleted, so the child docs this agg receives\n- // must belong to parent docs that are live. For this reason acceptedDocs can also null here.\n+ // must belong to parent docs that is alive. For this reason acceptedDocs can be null here.\n DocIdSet docIdSet = parentFilter.getDocIdSet(reader, null);\n if (DocIdSets.isEmpty(docIdSet)) {\n parentDocs = null;", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java", "status": "modified" }, { "diff": "@@ -70,23 +70,31 @@ public void setupSuiteScopeCluster() throws Exception {\n )\n );\n \n- insertDocs(Arrays.asList(\"a\", \"b\", \"c\"), Arrays.asList(\"1\", \"2\", \"3\", \"4\"));\n- insertDocs(Arrays.asList(\"b\", \"c\", \"d\"), Arrays.asList(\"4\", \"5\", \"6\", \"7\"));\n- insertDocs(Arrays.asList(\"c\", \"d\", \"e\"), Arrays.asList(\"7\", \"8\", \"9\", \"1\"));\n+ insertType1(Arrays.asList(\"a\", \"b\", \"c\"), Arrays.asList(\"1\", \"2\", \"3\", \"4\"));\n+ insertType1(Arrays.asList(\"b\", \"c\", \"d\"), Arrays.asList(\"4\", \"5\", \"6\", \"7\"));\n+ insertType1(Arrays.asList(\"c\", \"d\", \"e\"), Arrays.asList(\"7\", \"8\", \"9\", \"1\"));\n refresh();\n- insertDocs(Arrays.asList(\"a\", \"e\"), Arrays.asList(\"7\", \"4\", \"1\", \"1\"));\n- insertDocs(Arrays.asList(\"a\", \"c\"), Arrays.asList(\"2\", \"1\"));\n- insertDocs(Arrays.asList(\"a\"), Arrays.asList(\"3\", \"4\"));\n+ insertType1(Arrays.asList(\"a\", \"e\"), Arrays.asList(\"7\", \"4\", \"1\", \"1\"));\n+ insertType1(Arrays.asList(\"a\", \"c\"), Arrays.asList(\"2\", \"1\"));\n+ insertType1(Arrays.asList(\"a\"), Arrays.asList(\"3\", \"4\"));\n refresh();\n- insertDocs(Arrays.asList(\"x\", \"c\"), Arrays.asList(\"1\", \"8\"));\n- insertDocs(Arrays.asList(\"y\", \"c\"), Arrays.asList(\"6\"));\n- insertDocs(Arrays.asList(\"z\"), Arrays.asList(\"5\", \"9\"));\n+ insertType1(Arrays.asList(\"x\", \"c\"), Arrays.asList(\"1\", \"8\"));\n+ insertType1(Arrays.asList(\"y\", \"c\"), Arrays.asList(\"6\"));\n+ insertType1(Arrays.asList(\"z\"), Arrays.asList(\"5\", \"9\"));\n+ refresh();\n+\n+ insertType2(new String[][]{new String[]{\"a\", \"0\", \"0\", \"1\", \"2\"}, new String[]{\"b\", \"0\", \"1\", \"1\", \"2\"}, new String[]{\"a\", \"0\"}});\n+ 
insertType2(new String[][]{new String[]{\"c\", \"1\", \"1\", \"2\", \"2\"}, new String[]{\"d\", \"3\", \"4\"}});\n+ refresh();\n+\n+ insertType2(new String[][]{new String[]{\"a\", \"0\", \"0\", \"0\", \"0\"}, new String[]{\"b\", \"0\", \"0\", \"0\", \"0\"}});\n+ insertType2(new String[][]{new String[]{\"e\", \"1\", \"2\"}, new String[]{\"f\", \"3\", \"4\"}});\n refresh();\n \n ensureSearchable();\n }\n \n- private void insertDocs(List<String> values1, List<String> values2) throws Exception {\n+ private void insertType1(List<String> values1, List<String> values2) throws Exception {\n XContentBuilder source = jsonBuilder()\n .startObject()\n .array(\"field1\", values1.toArray())\n@@ -96,17 +104,20 @@ private void insertDocs(List<String> values1, List<String> values2) throws Excep\n }\n source.endArray().endObject();\n indexRandom(false, client().prepareIndex(\"idx\", \"type1\").setRouting(\"1\").setSource(source));\n+ }\n \n- source = jsonBuilder()\n+ private void insertType2(String[][] values) throws Exception {\n+ XContentBuilder source = jsonBuilder()\n .startObject()\n- .field(\"x\", \"y\")\n- .startArray(\"nested1\").startObject()\n- .array(\"field1\", values1.toArray())\n- .startArray(\"nested2\");\n- for (String value1 : values2) {\n- source.startObject().field(\"field2\", value1).endObject();\n+ .startArray(\"nested1\");\n+ for (String[] value : values) {\n+ source.startObject().field(\"field1\", value[0]).startArray(\"nested2\");\n+ for (int i = 1; i < value.length; i++) {\n+ source.startObject().field(\"field2\", value[i]).endObject();\n+ }\n+ source.endArray().endObject();\n }\n- source.endArray().endObject().endArray().endObject();\n+ source.endArray().endObject();\n indexRandom(false, client().prepareIndex(\"idx\", \"type2\").setRouting(\"1\").setSource(source));\n }\n \n@@ -126,67 +137,6 @@ public void simple_reverseNestedToRoot() throws Exception {\n )\n ).get();\n \n- verifyResults(response);\n- }\n-\n- @Test\n- public void simple_nested1ToRootToNested2() throws Exception {\n- SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type2\")\n- .addAggregation(nested(\"nested1\").path(\"nested1\")\n- .subAggregation(\n- reverseNested(\"nested1_to_root\")\n- .subAggregation(nested(\"root_to_nested2\").path(\"nested1.nested2\"))\n- )\n- )\n- .get();\n-\n- assertSearchResponse(response);\n- Nested nested = response.getAggregations().get(\"nested1\");\n- assertThat(nested.getName(), equalTo(\"nested1\"));\n- assertThat(nested.getDocCount(), equalTo(9l));\n- ReverseNested reverseNested = nested.getAggregations().get(\"nested1_to_root\");\n- assertThat(reverseNested.getName(), equalTo(\"nested1_to_root\"));\n- assertThat(reverseNested.getDocCount(), equalTo(9l));\n- nested = reverseNested.getAggregations().get(\"root_to_nested2\");\n- assertThat(nested.getName(), equalTo(\"root_to_nested2\"));\n- assertThat(nested.getDocCount(), equalTo(25l));\n- }\n-\n- @Test\n- public void simple_reverseNestedToNested1() throws Exception {\n- SearchResponse response = client().prepareSearch(\"idx\")\n- .addAggregation(nested(\"nested1\").path(\"nested1.nested2\")\n- .subAggregation(\n- terms(\"field2\").field(\"nested1.nested2.field2\")\n- .collectMode(randomFrom(SubAggCollectionMode.values()))\n- .subAggregation(\n- reverseNested(\"nested1_to_field1\").path(\"nested1\")\n- .subAggregation(\n- terms(\"field1\").field(\"nested1.field1\")\n- .collectMode(randomFrom(SubAggCollectionMode.values()))\n- )\n- )\n- )\n- ).get();\n- verifyResults(response);\n- }\n-\n- @Test(expected = 
SearchPhaseExecutionException.class)\n- public void testReverseNestedAggWithoutNestedAgg() throws Exception {\n- client().prepareSearch(\"idx\")\n- .addAggregation(terms(\"field2\").field(\"nested1.nested2.field2\")\n- .collectMode(randomFrom(SubAggCollectionMode.values()))\n- .subAggregation(\n- reverseNested(\"nested1_to_field1\")\n- .subAggregation(\n- terms(\"field1\").field(\"nested1.field1\")\n- .collectMode(randomFrom(SubAggCollectionMode.values()))\n- )\n- )\n- ).get();\n- }\n-\n- private void verifyResults(SearchResponse response) {\n assertSearchResponse(response);\n \n Nested nested = response.getAggregations().get(\"nested1\");\n@@ -357,4 +307,145 @@ private void verifyResults(SearchResponse response) {\n assertThat(tagsBuckets.get(3).getKey(), equalTo(\"z\"));\n assertThat(tagsBuckets.get(3).getDocCount(), equalTo(1l));\n }\n+\n+ @Test\n+ public void simple_nested1ToRootToNested2() throws Exception {\n+ SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type2\")\n+ .addAggregation(nested(\"nested1\").path(\"nested1\")\n+ .subAggregation(\n+ reverseNested(\"nested1_to_root\")\n+ .subAggregation(nested(\"root_to_nested2\").path(\"nested1.nested2\"))\n+ )\n+ )\n+ .get();\n+\n+ assertSearchResponse(response);\n+ Nested nested = response.getAggregations().get(\"nested1\");\n+ assertThat(nested.getName(), equalTo(\"nested1\"));\n+ assertThat(nested.getDocCount(), equalTo(9l));\n+ ReverseNested reverseNested = nested.getAggregations().get(\"nested1_to_root\");\n+ assertThat(reverseNested.getName(), equalTo(\"nested1_to_root\"));\n+ assertThat(reverseNested.getDocCount(), equalTo(4l));\n+ nested = reverseNested.getAggregations().get(\"root_to_nested2\");\n+ assertThat(nested.getName(), equalTo(\"root_to_nested2\"));\n+ assertThat(nested.getDocCount(), equalTo(27l));\n+ }\n+\n+ @Test\n+ public void simple_reverseNestedToNested1() throws Exception {\n+ SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type2\")\n+ .addAggregation(nested(\"nested1\").path(\"nested1.nested2\")\n+ .subAggregation(\n+ terms(\"field2\").field(\"nested1.nested2.field2\").order(Terms.Order.term(true))\n+ .collectMode(randomFrom(SubAggCollectionMode.values()))\n+ .size(0)\n+ .subAggregation(\n+ reverseNested(\"nested1_to_field1\").path(\"nested1\")\n+ .subAggregation(\n+ terms(\"field1\").field(\"nested1.field1\").order(Terms.Order.term(true))\n+ .collectMode(randomFrom(SubAggCollectionMode.values()))\n+ )\n+ )\n+ )\n+ ).get();\n+\n+ assertSearchResponse(response);\n+\n+ Nested nested = response.getAggregations().get(\"nested1\");\n+ assertThat(nested, notNullValue());\n+ assertThat(nested.getName(), equalTo(\"nested1\"));\n+ assertThat(nested.getDocCount(), equalTo(27l));\n+ assertThat(nested.getAggregations().asList().isEmpty(), is(false));\n+\n+ Terms usernames = nested.getAggregations().get(\"field2\");\n+ assertThat(usernames, notNullValue());\n+ assertThat(usernames.getBuckets().size(), equalTo(5));\n+ List<Terms.Bucket> usernameBuckets = new ArrayList<>(usernames.getBuckets());\n+\n+ Terms.Bucket bucket = usernameBuckets.get(0);\n+ assertThat(bucket.getKey(), equalTo(\"0\"));\n+ assertThat(bucket.getDocCount(), equalTo(12l));\n+ ReverseNested reverseNested = bucket.getAggregations().get(\"nested1_to_field1\");\n+ assertThat(reverseNested.getDocCount(), equalTo(5l));\n+ Terms tags = reverseNested.getAggregations().get(\"field1\");\n+ List<Terms.Bucket> tagsBuckets = new ArrayList<>(tags.getBuckets());\n+ assertThat(tagsBuckets.size(), equalTo(2));\n+ 
assertThat(tagsBuckets.get(0).getKey(), equalTo(\"a\"));\n+ assertThat(tagsBuckets.get(0).getDocCount(), equalTo(3l));\n+ assertThat(tagsBuckets.get(1).getKey(), equalTo(\"b\"));\n+ assertThat(tagsBuckets.get(1).getDocCount(), equalTo(2l));\n+\n+ bucket = usernameBuckets.get(1);\n+ assertThat(bucket.getKey(), equalTo(\"1\"));\n+ assertThat(bucket.getDocCount(), equalTo(6l));\n+ reverseNested = bucket.getAggregations().get(\"nested1_to_field1\");\n+ assertThat(reverseNested.getDocCount(), equalTo(4l));\n+ tags = reverseNested.getAggregations().get(\"field1\");\n+ tagsBuckets = new ArrayList<>(tags.getBuckets());\n+ assertThat(tagsBuckets.size(), equalTo(4));\n+ assertThat(tagsBuckets.get(0).getKey(), equalTo(\"a\"));\n+ assertThat(tagsBuckets.get(0).getDocCount(), equalTo(1l));\n+ assertThat(tagsBuckets.get(1).getKey(), equalTo(\"b\"));\n+ assertThat(tagsBuckets.get(1).getDocCount(), equalTo(1l));\n+ assertThat(tagsBuckets.get(2).getKey(), equalTo(\"c\"));\n+ assertThat(tagsBuckets.get(2).getDocCount(), equalTo(1l));\n+ assertThat(tagsBuckets.get(3).getKey(), equalTo(\"e\"));\n+ assertThat(tagsBuckets.get(3).getDocCount(), equalTo(1l));\n+\n+ bucket = usernameBuckets.get(2);\n+ assertThat(bucket.getKey(), equalTo(\"2\"));\n+ assertThat(bucket.getDocCount(), equalTo(5l));\n+ reverseNested = bucket.getAggregations().get(\"nested1_to_field1\");\n+ assertThat(reverseNested.getDocCount(), equalTo(4l));\n+ tags = reverseNested.getAggregations().get(\"field1\");\n+ tagsBuckets = new ArrayList<>(tags.getBuckets());\n+ assertThat(tagsBuckets.size(), equalTo(4));\n+ assertThat(tagsBuckets.get(0).getKey(), equalTo(\"a\"));\n+ assertThat(tagsBuckets.get(0).getDocCount(), equalTo(1l));\n+ assertThat(tagsBuckets.get(1).getKey(), equalTo(\"b\"));\n+ assertThat(tagsBuckets.get(1).getDocCount(), equalTo(1l));\n+ assertThat(tagsBuckets.get(2).getKey(), equalTo(\"c\"));\n+ assertThat(tagsBuckets.get(2).getDocCount(), equalTo(1l));\n+ assertThat(tagsBuckets.get(3).getKey(), equalTo(\"e\"));\n+ assertThat(tagsBuckets.get(3).getDocCount(), equalTo(1l));\n+\n+ bucket = usernameBuckets.get(3);\n+ assertThat(bucket.getKey(), equalTo(\"3\"));\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+ reverseNested = bucket.getAggregations().get(\"nested1_to_field1\");\n+ assertThat(reverseNested.getDocCount(), equalTo(2l));\n+ tags = reverseNested.getAggregations().get(\"field1\");\n+ tagsBuckets = new ArrayList<>(tags.getBuckets());\n+ assertThat(tagsBuckets.size(), equalTo(2));\n+ assertThat(tagsBuckets.get(0).getKey(), equalTo(\"d\"));\n+ assertThat(tagsBuckets.get(0).getDocCount(), equalTo(1l));\n+ assertThat(tagsBuckets.get(1).getKey(), equalTo(\"f\"));\n+\n+ bucket = usernameBuckets.get(4);\n+ assertThat(bucket.getKey(), equalTo(\"4\"));\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+ reverseNested = bucket.getAggregations().get(\"nested1_to_field1\");\n+ assertThat(reverseNested.getDocCount(), equalTo(2l));\n+ tags = reverseNested.getAggregations().get(\"field1\");\n+ tagsBuckets = new ArrayList<>(tags.getBuckets());\n+ assertThat(tagsBuckets.size(), equalTo(2));\n+ assertThat(tagsBuckets.get(0).getKey(), equalTo(\"d\"));\n+ assertThat(tagsBuckets.get(0).getDocCount(), equalTo(1l));\n+ assertThat(tagsBuckets.get(1).getKey(), equalTo(\"f\"));\n+ }\n+\n+ @Test(expected = SearchPhaseExecutionException.class)\n+ public void testReverseNestedAggWithoutNestedAgg() throws Exception {\n+ client().prepareSearch(\"idx\")\n+ .addAggregation(terms(\"field2\").field(\"nested1.nested2.field2\")\n+ 
.collectMode(randomFrom(SubAggCollectionMode.values()))\n+ .subAggregation(\n+ reverseNested(\"nested1_to_field1\")\n+ .subAggregation(\n+ terms(\"field1\").field(\"nested1.field1\")\n+ .collectMode(randomFrom(SubAggCollectionMode.values()))\n+ )\n+ )\n+ ).get();\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/ReverseNestedTests.java", "status": "modified" } ] }
{ "body": "We seem to have a problem with stuck threads in an Elasticsearch cluster. It appears at random, but once a thread is stuck it seems to keep being stuck until elasticsearch on that node is restarted. The theads get stuck in a busy loop and the stack trace of one is:\n\n```\nThread 3744: (state = IN_JAVA)\n - java.util.HashMap.getEntry(java.lang.Object) @bci=72, line=446 (Compiled frame; information may be imprecise)\n - java.util.HashMap.get(java.lang.Object) @bci=11, line=405 (Compiled frame)\n - org.elasticsearch.search.scan.ScanContext$ScanFilter.getDocIdSet(org.apache.lucene.index.AtomicReaderContext, org.apache.lucene.util.Bits) @bci=8, line=156 (Compiled frame)\n - org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(org.apache.lucene.index.AtomicReaderContext, org.apache.lucene.util.Bits) @bci=6, line=45 (Compiled frame)\n - org.apache.lucene.search.FilteredQuery$1.scorer(org.apache.lucene.index.AtomicReaderContext, boolean, boolean, org.apache.lucene.util.Bits) @bci=34, line=130 (Compiled frame)\n - org.apache.lucene.search.IndexSearcher.search(java.util.List, org.apache.lucene.search.Weight, org.apache.lucene.search.Collector) @bci=68, line=618 (Compiled frame)\n - org.elasticsearch.search.internal.ContextIndexSearcher.search(java.util.List, org.apache.lucene.search.Weight, org.apache.lucene.search.Collector) @bci=225, line=173 (Compiled frame)\n - org.apache.lucene.search.IndexSearcher.search(org.apache.lucene.search.Query, org.apache.lucene.search.Collector) @bci=11, line=309 (Interpreted frame)\n - org.elasticsearch.search.scan.ScanContext.execute(org.elasticsearch.search.internal.SearchContext) @bci=54, line=52 (Interpreted frame)\n - org.elasticsearch.search.query.QueryPhase.execute(org.elasticsearch.search.internal.SearchContext) @bci=174, line=119 (Compiled frame)\n - org.elasticsearch.search.SearchService.executeScan(org.elasticsearch.search.internal.InternalScrollSearchRequest) @bci=49, line=233 (Interpreted frame)\n - org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(org.elasticsearch.search.internal.InternalScrollSearchRequest, org.elasticsearch.transport.TransportChannel) @bci=8, line=791 (Interpreted frame)\n - org.elasticsearch.search.action.SearchServiceTransportAction$SearchScanScrollTransportHandler.messageReceived(org.elasticsearch.transport.TransportRequest, org.elasticsearch.transport.TransportChannel) @bci=6, line=780 (Interpreted frame)\n - org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run() @bci=12, line=270 (Compiled frame)\n - java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) @bci=95, line=1145 (Compiled frame)\n - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=615 (Interpreted frame)\n - java.lang.Thread.run() @bci=11, line=724 (Interpreted frame)\n```\n\nIt looks very much as the known problem of using the non-synchronized HashMap class in a threaded environment, see (http://stackoverflow.com/questions/17070184/hashmap-stuck-on-get). Unfortunately I'm not familiar enough with the es code to know if this can be the issue.\n\nThe solution mentioned at the link is to use ConcurrentHashMap instead.\n", "comments": [ { "body": "this does look like the bug you are referring to. 
thanks for reporting this!\n", "created_at": "2014-08-27T09:50:35Z" }, { "body": "An additional note, we have noted that this seems to happen (at least more frequently) when we post multiple parallel scan queries, which seems to make sense from what I can see in the stack trace.\n", "created_at": "2014-08-27T09:56:06Z" }, { "body": "@maf23 Were you using the same scroll id multiple times in the parallel scan queries? \n", "created_at": "2014-08-27T12:29:39Z" }, { "body": "The scroll id should be different. But we will check our code to make sure\nthis is actually the case.\n\nOn Wed, Aug 27, 2014 at 2:30 PM, Martijn van Groningen <\nnotifications@github.com> wrote:\n\n> @maf23 https://github.com/maf23 Were you using the same scroll id\n> multiple times in the parallel scan queries?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/7478#issuecomment-53565244\n> .\n", "created_at": "2014-08-27T13:00:10Z" }, { "body": "I can see how this situation can occur if multiple scroll requests are scrolling in parallel with the same scroll id (or same scroll id prefix), the scroll api was never designed to support this. I think we need proper validation if two search requests try to access the same scan context that is open on a node.\n", "created_at": "2014-08-27T13:23:42Z" }, { "body": "Also running the clear scroll api during a scroll session can cause this bug.\n", "created_at": "2014-08-27T14:15:38Z" }, { "body": "@maf23 Can you share what jvm version and vendor you're using?\n", "created_at": "2014-08-28T10:05:20Z" }, { "body": "Sure, Oracle JVM 1.7.0_25\n", "created_at": "2014-08-28T10:12:50Z" }, { "body": "Ok thanks, like you mentioned the ConcurrentHashMap should be used here since the map in question is accessed by different threads during the entire scroll.\n", "created_at": "2014-08-28T12:09:30Z" }, { "body": "@maf23 Pushed a fix for this bug, which will be included in the next release. Thanks for reporting this!\n", "created_at": "2014-08-28T14:38:40Z" } ], "number": 7478, "title": "Internal: Stuck on java.util.HashMap.get?" }
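The patch in the PR record that follows simply swaps the plain `HashMap` for a concurrent map (via `ConcurrentCollections.newConcurrentMap()`), since the per-reader scan state is read and written by whichever search thread happens to serve each scroll round-trip. A plain `java.util.HashMap` gives no such guarantee: a rehash racing with a reader can corrupt a bucket's linked list and leave `get()` spinning forever, which matches the `IN_JAVA` busy loop in the stack trace above. As a broader illustration of the pattern (hypothetical names, not the actual ScanContext code), a per-key state cache that stays safe under concurrent access looks roughly like this:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/**
 * Illustrative sketch (hypothetical names, not the actual ScanContext code) of keeping
 * per-reader state in a map that several search threads may touch during one scroll
 * session. ConcurrentHashMap keeps the internal structure consistent without external
 * locking, and putIfAbsent resolves the race when two threads create state for the
 * same key at once.
 */
class ReaderStateCache {

    static final class ReaderState {
        final int lastCollectedDoc;
        ReaderState(int lastCollectedDoc) {
            this.lastCollectedDoc = lastCollectedDoc;
        }
    }

    private final ConcurrentMap<String, ReaderState> readerStates = new ConcurrentHashMap<>();

    ReaderState getOrCreate(String readerKey, int initialDoc) {
        ReaderState existing = readerStates.get(readerKey);
        if (existing == null) {
            ReaderState created = new ReaderState(initialDoc);
            existing = readerStates.putIfAbsent(readerKey, created); // null if we won the race
            if (existing == null) {
                existing = created;
            }
        }
        return existing;
    }

    public static void main(String[] args) {
        ReaderStateCache cache = new ReaderStateCache();
        System.out.println(cache.getOrCreate("segment_0", 42).lastCollectedDoc); // 42
        System.out.println(cache.getOrCreate("segment_0", 99).lastCollectedDoc); // still 42: existing state wins
    }
}
```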
{ "body": "PR for #7478\n", "number": 7499, "review_comments": [], "title": "Use ConcurrentHashMap in SCAN search to keep track of the reader states." }
{ "commits": [ { "message": "Scan: Use ConcurrentHashMap instead of HashMap, because the readerStates is accessed by multiple threads during the entire scroll session.\n\nCloses #7499\nCloses #7478" } ], "files": [ { "diff": "@@ -19,18 +19,18 @@\n \n package org.elasticsearch.search.scan;\n \n-import com.google.common.collect.Maps;\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.common.lucene.docset.AllDocIdSet;\n import org.elasticsearch.common.lucene.search.XFilteredQuery;\n+import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n import java.util.ArrayList;\n-import java.util.Map;\n+import java.util.concurrent.ConcurrentMap;\n \n /**\n * The scan context allows to optimize readers we already processed during scanning. We do that by keeping track\n@@ -39,7 +39,7 @@\n */\n public class ScanContext {\n \n- private final Map<IndexReader, ReaderState> readerStates = Maps.newHashMap();\n+ private final ConcurrentMap<IndexReader, ReaderState> readerStates = ConcurrentCollections.newConcurrentMap();\n \n public void clear() {\n readerStates.clear();\n@@ -58,7 +58,7 @@ public TopDocs execute(SearchContext context) throws IOException {\n \n static class ScanCollector extends Collector {\n \n- private final Map<IndexReader, ReaderState> readerStates;\n+ private final ConcurrentMap<IndexReader, ReaderState> readerStates;\n \n private final int from;\n \n@@ -77,7 +77,7 @@ static class ScanCollector extends Collector {\n private IndexReader currentReader;\n private ReaderState readerState;\n \n- ScanCollector(Map<IndexReader, ReaderState> readerStates, int from, int size, boolean trackScores) {\n+ ScanCollector(ConcurrentMap<IndexReader, ReaderState> readerStates, int from, int size, boolean trackScores) {\n this.readerStates = readerStates;\n this.from = from;\n this.to = from + size;\n@@ -142,11 +142,11 @@ public Throwable fillInStackTrace() {\n \n public static class ScanFilter extends Filter {\n \n- private final Map<IndexReader, ReaderState> readerStates;\n+ private final ConcurrentMap<IndexReader, ReaderState> readerStates;\n \n private final ScanCollector scanCollector;\n \n- public ScanFilter(Map<IndexReader, ReaderState> readerStates, ScanCollector scanCollector) {\n+ public ScanFilter(ConcurrentMap<IndexReader, ReaderState> readerStates, ScanCollector scanCollector) {\n this.readerStates = readerStates;\n this.scanCollector = scanCollector;\n }", "filename": "src/main/java/org/elasticsearch/search/scan/ScanContext.java", "status": "modified" } ] }
{ "body": "If create an index the following settings, elasticsearch success to create an index.\nHowever, when we create a document, we have error.\nThere are servral errors.\n### set 0 to number_of_shards\n\nsetting and create a document\n\n```\ncurl -XPUT \"http://localhost:9200/hoge\" -d'\n{\n \"settings\": {\n \"number_of_shards\": 0\n }\n}'\n\ncurl -XPUT \"http://localhost:9200/hoge/fuga/1\" -d'{ \"title\": \"fuga\"}'\n```\n\nerorr\n\n```\n{\n \"error\": \"ArithmeticException[/ by zero]\",\n \"status\": 500\n}\n```\n### set -2 to number_of_shards\n\nsetting and create adocument\n\n```\ncurl -XPUT \"http://localhost:9200/hoge\" -d'\n{\n \"settings\": {\n \"number_of_shards\": -2\n }\n}'\n\ncurl -XPUT \"http://localhost:9200/hoge/fuga/1\" -d'{ \"title\": \"fuga\"}'\n```\n\nerror\n\n```\n{\n \"error\": \"IndexShardMissingException[[hoge][0] missing]\",\n \"status\": 404\n}\n```\n### set -2 to number_of_replicas\n\nsetting and create a document\n\n```\ncurl -XPUT \"http://localhost:9200/hoge\" -d'\n{\n \"settings\": {\n \"number_of_shards\": 2,\n \"number_of_replicas\": -2\n }\n}'\n\ncurl -XPUT \"http://localhost:9200/hoge/fuga/1\" -d'{ \"title\": \"fuga\"}'\n```\n\nerror\n\n```\n{\n \"error\": \"UnavailableShardsException[[hoge][0] [0] shardIt, [0] active : Timeout waiting for [1m], request: index {[hoge][fuga][1], source[{\\n \\\"title\\\": \\\"fuga\\\"\\n}\\n]}]\",\n \"status\": 503\n}\n```\n\nElasticsearch should return error message and should not create an index.\n", "comments": [], "number": 7495, "title": "Validation of number_of_shards and number_of_replicas request to reject illegal number" }
{ "body": "Fixes #7495\n\nThere were also two `CreateIndexTests` files, so I collapsed them into a single one.\n", "number": 7496, "review_comments": [], "title": "Validate create index requests' number of primary/replica shards" }
{ "commits": [ { "message": "Validate create index requests' number of primary/replica shards\n\nFixes #7495" } ], "files": [ { "diff": "@@ -107,6 +107,14 @@ public ActionRequestValidationException validate() {\n if (index == null) {\n validationException = addValidationError(\"index is missing\", validationException);\n }\n+ Integer number_of_primaries = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_SHARDS, null);\n+ Integer number_of_replicas = settings.getAsInt(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, null);\n+ if (number_of_primaries != null && number_of_primaries <= 0) {\n+ validationException = addValidationError(\"index must have 1 or more primary shards\", validationException);\n+ }\n+ if (number_of_replicas != null && number_of_replicas < 0) {\n+ validationException = addValidationError(\"index must have 0 or more replica shards\", validationException);\n+ }\n return validationException;\n }\n ", "filename": "src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java", "status": "modified" }, { "diff": "@@ -184,8 +184,8 @@ public static State fromString(String state) {\n private final DiscoveryNodeFilters excludeFilters;\n \n private IndexMetaData(String index, long version, State state, Settings settings, ImmutableOpenMap<String, MappingMetaData> mappings, ImmutableOpenMap<String, AliasMetaData> aliases, ImmutableOpenMap<String, Custom> customs) {\n- Preconditions.checkArgument(settings.getAsInt(SETTING_NUMBER_OF_SHARDS, -1) != -1, \"must specify numberOfShards for index [\" + index + \"]\");\n- Preconditions.checkArgument(settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, -1) != -1, \"must specify numberOfReplicas for index [\" + index + \"]\");\n+ Preconditions.checkArgument(settings.getAsInt(SETTING_NUMBER_OF_SHARDS, null) != null, \"must specify numberOfShards for index [\" + index + \"]\");\n+ Preconditions.checkArgument(settings.getAsInt(SETTING_NUMBER_OF_REPLICAS, null) != null, \"must specify numberOfReplicas for index [\" + index + \"]\");\n this.index = index;\n this.version = version;\n this.state = state;", "filename": "src/main/java/org/elasticsearch/cluster/metadata/IndexMetaData.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.admin.indices.create;\n \n+import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n@@ -30,6 +31,9 @@\n import org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n import org.junit.Test;\n \n+import java.util.HashMap;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.hamcrest.Matchers.*;\n import static org.hamcrest.core.IsNull.notNullValue;\n \n@@ -51,7 +55,7 @@ public void testCreationDate_Given() {\n assertThat(index, notNullValue());\n assertThat(index.creationDate(), equalTo(4l));\n }\n- \n+\n @Test\n public void testCreationDate_Generated() {\n long timeBeforeRequest = System.currentTimeMillis();\n@@ -70,4 +74,70 @@ public void testCreationDate_Generated() {\n assertThat(index.creationDate(), allOf(lessThanOrEqualTo(timeAfterRequest), greaterThanOrEqualTo(timeBeforeRequest)));\n }\n \n+ @Test\n+ public void testDoubleAddMapping() throws Exception {\n+ try {\n+ prepareCreate(\"test\")\n+ .addMapping(\"type1\", \"date\", \"type=date\")\n+ .addMapping(\"type1\", \"num\", \"type=integer\");\n+ fail(\"did not hit 
expected exception\");\n+ } catch (IllegalStateException ise) {\n+ // expected\n+ }\n+ try {\n+ prepareCreate(\"test\")\n+ .addMapping(\"type1\", new HashMap<String,Object>())\n+ .addMapping(\"type1\", new HashMap<String,Object>());\n+ fail(\"did not hit expected exception\");\n+ } catch (IllegalStateException ise) {\n+ // expected\n+ }\n+ try {\n+ prepareCreate(\"test\")\n+ .addMapping(\"type1\", jsonBuilder())\n+ .addMapping(\"type1\", jsonBuilder());\n+ fail(\"did not hit expected exception\");\n+ } catch (IllegalStateException ise) {\n+ // expected\n+ }\n+ }\n+\n+ @Test\n+ public void testInvalidShardCountSettings() throws Exception {\n+ try {\n+ prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n+ .build())\n+ .get();\n+ fail(\"should have thrown an exception about the primary shard count\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"index must have 1 or more primary shards\"), equalTo(true));\n+ }\n+\n+ try {\n+ prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n+ .build())\n+ .get();\n+ fail(\"should have thrown an exception about the replica shard count\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"index must have 0 or more replica shards\"), equalTo(true));\n+ }\n+\n+ try {\n+ prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, randomIntBetween(-10, 0))\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(-10, -1))\n+ .build())\n+ .get();\n+ fail(\"should have thrown an exception about the shard count\");\n+ } catch (ActionRequestValidationException e) {\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"index must have 1 or more primary shards\"), equalTo(true));\n+ assertThat(\"message contains error about shard count: \" + e.getMessage(),\n+ e.getMessage().contains(\"index must have 0 or more replica shards\"), equalTo(true));\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/action/admin/indices/create/CreateIndexTests.java", "status": "modified" } ] }
{ "body": "G'day,\n\nI'm using ElasticSearch 0.19.11 with the unicast Zen discovery protocol.\n\nWith this setup, I can easily split a 3-node cluster into two 'hemispheres' (continuing with the brain metaphor) with one node acting as a participant in both hemispheres. I believe this to be a significant problem, because now `minimum_master_nodes` is incapable of preventing certain split-brain scenarios.\n\nHere's what my 3-node test cluster looked like before I broke it:\n\n![](https://saj.beta.anchortrove.com/es-splitbrain-1.png)\n\nHere's what the cluster looked like after simulating a communications failure between nodes (2) and (3):\n\n![](https://saj.beta.anchortrove.com/es-splitbrain-2.png)\n\nHere's what seems to have happened immediately after the split:\n1. Node (2) and (3) lose contact with one another. (`zen-disco-node_failed` ... `reason failed to ping`)\n2. Node (2), still master of the left hemisphere, notes the disappearance of node (3) and broadcasts an advisory message to all of its followers. Node (1) takes note of the advisory.\n3. Node (3) has now lost contact with its old master and decides to hold an election. It declares itself winner of the election. On declaring itself, it assumes master role of the right hemisphere, then broadcasts an advisory message to all of its followers. Node (1) takes note of this advisory, too.\n\nAt this point, I can't say I know what to expect to find on node (1). If I query both masters for a list of nodes, I see node (1) in both clusters.\n\nLet's look at `minimum_master_nodes` as it applies to this test cluster. Assume I had set `minimum_master_nodes` to 2. Had node (3) been completely isolated from nodes (1) and (2), I would not have run into this problem. The left hemisphere would have enough nodes to satisfy the constraint; the right hemisphere would not. This would continue to work for larger clusters (with an appropriately larger value for `minimum_master_nodes`).\n\nThe problem with `minimum_master_nodes` is that it does not work when the split brains are intersecting, as in my example above. Even on a larger cluster of, say, 7 nodes with `minimum_master_nodes` set to 4, all that needs to happen is for the 'right' two nodes to lose contact with one another (a master election has to take place) for the cluster to split.\n\nIs there anything that can be done to detect the intersecting split on node (1)?\n\nWould #1057 help?\n\nAm I missing something obvious? :)\n", "comments": [ { "body": "We also had at some point a similar issue, where minimum_master_nodes did not prevent the cluster from having two different views of the nodes at the same time. \n\nAs our indices were created automatically, some of the indices were created twice, once in every half of the cluster with the two masters broadcasting different states, and after a full cluster restart some shards were unable to be allocated, as the state has been mixed up. This was on 0.17. so I am not sure, if data would still be lost, as the state is now saved with the shards. But the other question is what happens when an index exists twice in the cluster (as it has been created on every master).\n\nI think we should have a method to recover from such a situation. 
As I don't know how the zen discovery works exactly, I can not say how to solve it, but IMHO a node should only be in one cluster, in your second image node 1 should either be with 2, preventing 3 from becoming master, or with node 3, preventing 2 from staying master.\n", "created_at": "2012-12-18T08:43:56Z" }, { "body": "see Issue #2117 as well, I'm not sure if the Unicast discovery is making it worse for you, but I think we captured the underlying problem over on that issue, but would like your thoughts too.\n", "created_at": "2012-12-18T20:37:35Z" }, { "body": "From #2117:\n\n> The split brain occurs if the nodeId(UUID) of the disconnected node is such that the disconnected node picks itself as the next logical master while pinging the other nodes(NodeFaultDetection).\n\nDitto.\n\n> The split brain only occurs on the second time that the node is disconnected/isolated.\n\nI see a split on the _first partial isolation_. To me, these bug reports look like two different problems.\n", "created_at": "2012-12-20T00:31:49Z" }, { "body": "I believe I ran into this issue yesterday in a 3 node cluster- a node elects itself master when the current master is disconnected from it. The remaining partipant node toggles between having the other nodes as its master before settling on one. Is this what you saw @saj?\n", "created_at": "2013-04-03T16:23:29Z" }, { "body": "Yes, @trollybaz.\n\nI ended up working around the problem (in testing) by using [elasticsearch-zookeeper](https://github.com/sonian/elasticsearch-zookeeper) in place of Zen discovery. We already had reliable Zookeeper infrastructure up for other applications, so this approach made a whole lot of sense to me. I was unable to reproduce the problem with the Zookeeper discovery module.\n", "created_at": "2013-04-03T23:12:54Z" }, { "body": "I'm pretty sure we're suffering from this in certain situations, and I don't think that it's limited to unicast discovery. \n\nWe've had some bad networking, some Virtual Machine stalls (result of SAN issues, or VMWare doing weird stuff), or even heavy GC activity can cause enough pauses for aspects of the split brain to occur. \n\nWe were originally running pre-0.19.5 which contained an important fix for an edge case I thought we were suffering from, but since moving to 0.19.10 we've had at least one split brain (VMware->SAN related) that caused 1 of the 3 ES nodes to lose touch with the master, and declare itself master, while still then maintaing links back to other nodes. \n\nI'm going to be tweaking our ES logging config to output DEBUG level discovery to a separate file so that I can properly trace these cases, but there have just been too many of these not to consider ES not handling these adversarial environment cases.\n\nI believe #2117 is still an issue and is an interesting edge case, but I think this issue here best represents the majority of the issues people are having. My gut/intuition seems to indicate that the probability of this issue occurring does drop with a larger cluster, so the 3-node, minimum_master_node=2 is the most prevalent case.\n\nIt seems like when the 'split brain' new master connects to it's known child nodes, any node that already has an upstream connection to an existing master probably should be flagging it as a problem, and telling the newly connected master node \"hey, I don't think you fully understand the cluster situation\".\n", "created_at": "2013-04-04T23:31:22Z" }, { "body": "I believe there are two issues at hand. 
One being the possible culprits for a node being disconnected from the cluster: network issues, large GC, discover bug, etc... The other issue, and the more important one IMHO, is the failure in the master election process to detect that a node belongs to two separate clusters (with different masters). Clusters should embrace node failures for whatever reason, but master election needs to be rock solid. Tough problem in systems without an authoritative process such as ZooKeeper.\n\nTo add more data to the issue: I have seen the issue on two different 0.20RC1 clusters. One having eight nodes, the other with four.\n", "created_at": "2013-04-05T04:00:52Z" }, { "body": "I'm not sure the former is really something ES should be actively dealing with, the latter I agree, and is the main point here, in how ES detects and recovers from cases where 2 masters have been elected. \n\nThere was supposed to have been some code in, I think, 0.19.5 that 'recovers' from this state by choosing the side that has the most recent ClusterStatus object (see Issue #2042) , but it doesn't appear in practice to be working as expected, because we get these child nodes accepting connections from multiple masters.\n\nI think gathering the discovery-level DEBUG logging from the multiple nodes and presenting it here is the only way to get further traction on this case. \n\nIt's possible going through the steps in Issue #2117 may uncover edge cases related to this one (even though the source conditions are different); at least it might be a reproducible case to explore.\n\n@s1monw nudge - have you had a chance to look into #2117 at all... ? :)\n", "created_at": "2013-04-05T05:11:18Z" }, { "body": "Paul, I agree that the former is not something to focus on. Should have stated that. :) The beauty of many of the new big data systems is that they embrace failure. Nodes will come and go, either due to errors or just simple maintenance. #2117 might have a different source condition, but the recovery process after the fact should be identical.\n\nI have enabled DEBUG logging at the discovery level and I can pinpoint when a node has left/joined a cluster, but I still have no insights on the election process.\n", "created_at": "2013-04-05T16:27:54Z" }, { "body": "suffered from this the other day when an accidental provisioning error had a 4GB ES Heap instance running on a 4GB O/S memory, which was always going to end up in trouble. The node swapped, process hung, and the intersection issue described here happened.\n\nYes, the provisioning error could have been avoided, yes, probably use of mlockall may have prevented the destined-to-die-a-horrible-swap-death, but there's other scenarios that could cause a hung process (bad I/O causing stalls for example) where the way ES handles the cluster state is poor, and leads to this problem.\n\nwe hope very much someone is looking hard into ways to make ES a bit more resilient when facing these situations to improve data integrity... (goes on bended knees while pleading)\n", "created_at": "2013-05-24T05:43:42Z" }, { "body": "Btw. why not adopt ZK, which I believe would make this situation impossible(?)? I don't love the extra process/management that the use of ZK would imply..... though maybe it could be embedded, like in SolrCloud, to work around that?\n", "created_at": "2013-05-24T15:30:05Z" }, { "body": "From my understanding, the single embedded Zookeeper model is not ideal for production and that a full Zookeeper cluster is preferred. 
Never tried myself, so I cannot personally comment.\n", "created_at": "2013-05-24T15:44:14Z" }, { "body": "FYI - there is a zookeeper plugin for ES\n", "created_at": "2013-05-24T16:04:55Z" }, { "body": "Oh, I didn't mean to imply a _single_ embedded ZK. I meant N of them in different ES processes. Right Simon, there is the plugin, but I suspect people are afraid of using it because it's not clear if it's 100% maintained, if it works with the latest ES and such. So my Q is really about adopting something like that and supporting it officially. Is that a possibility?\n", "created_at": "2013-05-24T16:06:18Z" }, { "body": "@otisg: The problem with the ZK plugin is that with clients being part of the cluster, they need to know about ZK in order to be able to discover the servers in the cluster. Some client libraries (such as the one used by the application that started this bug report -- I'm a colleague of Saj's) don't support ZK discovery. In order for ZK to be a useful alternative in general, there either needs to be universal support of ZK in client libraries, or a backwards-compatible way for non-ZK-aware client libraries to discover the servers (perhaps a ZK-to-Zen translator or something... I don't know, I've got bugger-all knowledge of how ES actually works under the hood).\n", "created_at": "2013-05-24T21:46:57Z" }, { "body": "We've gotten into this situation twice now in our QA environment. 3 nodes. minimum_master_nodes = 2. Log files at https://gist.github.com/aochsner/5749640 (sorry they are big and repetitive). \n\nWe are on 0.9.0 and using multicast.\n\nAs a bit of a walkthrough: sthapqa02 was the master and all it noticed was that sthapqa01 went bye-bye and never rejoined. According to sthapqa02, the cluster was sthapqa02 (itself) and sthapqa03. \n\nsthapqa01 is what appeared to have problems. It couldn't reach sthapqa02 and decided to create a cluster between itself and sthapqa03. \n\nsthapqa03 went along w/ sthapqa01 to create a cluster and didn't notify sthapqa02. \n\nSo 01 and 03 are in a cluster and 02 thinks it's in a cluster w/ 03. \n", "created_at": "2013-06-10T15:26:37Z" }, { "body": "just an update that this behaves much better in 0.90.3 with a dedicated master nodes deployment, but we are working on a better implementation down the road (with potential constraints on requiring fixed dedicated master nodes by the nature of some consensus algo impls, we will see how it goes...).\n", "created_at": "2013-08-13T23:46:16Z" }, { "body": "@kimchy that sounds promising, I would love to understand more of the changes in the 0.90.x series in this area to understand what movements are going on. Is there a commit hash you remember that you could point me to so I could peek at it?\n\nBy dedicated master node, do you mean nodes that _just_ perform the master role, and not the data role? (so additional nodes on top of existing data nodes). This would sort of mimic how adding Zookeeper as a Master Election co-ordinator works?\n", "created_at": "2013-08-14T02:56:27Z" }, { "body": "@kimchy Does 0.90.2 have the same features or are they only available in 0.90.3? \n", "created_at": "2013-08-14T03:05:05Z" }, { "body": "Shay, thanks for the update.\n\nFor us, the problem has gone away with the adoption of 0.90.2. The actual underlying problem might not have been fixed, but the improved memory usage with elasticsearch 0.90/Lucene 4 has eliminated large GCs, which probably were the root cause of our disconnections. 
No disconnections means no need to elect another master.\n", "created_at": "2013-08-14T17:06:22Z" }, { "body": "This situation happened to us recently running 0.90.1 with `minimum_master_nodes` set to `N/2 + 1`, with `N = 15`. I'm not sure what the root cause was, but this shows that such a scenario is possible in larger clusters as well.\n", "created_at": "2013-09-20T16:21:47Z" }, { "body": "We have been frequently experiencing this 'mix brain' issue in several of our clusters - up to 3 or 4 times a week. We have always had dedicated master-eligible nodes (i.e. master=true, data=false), correctly configured minimum_master_nodes and have recently moved to 0.90.3, and seen no improvement in the situation.\n\nAs a side note, the initial cause of the disruption to our cluster is 'something' to do with the network links between the nodes, I imagine - one of the master-eligible nodes occasionally loses connectivity with the master node briefly - \"transport disconnected (with verified connect)\" is all we get in the logs. We haven't figured out this issue yet (something is killing the tcp connection?), but this explains the frequency with which we are affected by this bug, as it is a double hit due to the inability of the cluster to recover itself correctly when this disconnect occurs.\n\n@kimchy Is there any latest status on the 'better implementation down the road' and when it might be delivered?\n\nSounds like zookeeper is our reluctant interim solution.\n", "created_at": "2013-10-18T09:28:37Z" }, { "body": "just as I was beginning plans to go to a set of dedicated master-only nodes I read @trevorreeves' post where he's still hitting the same problem. Doh!\n\nOur situation appears to be IOWait related, in that a master node (also a data node) hits an issue that causes extensive IOWait (a _scroll based search can trigger this; we already cap the # streams and Mb/second recovery rate through settings), and the JVM becomes unresponsive. The other nodes that are doing the Master Fault Detection are configured with 3 x 30 second ping timeouts, all of which fail, and then they give up on the master.\n\nI'm not really sure what is stalling the master node JVM, particularly when I'm positive it's not GC related; it's definitely linked to heavy IOWait. We have one node in one installation with a 'tenuous' connection to a NetApp storage backing the volume used by the ES local disk image, and that seems to be the underlying root of our issues, but it is the way the ES cluster is failing to recover from this situation and not properly re-establishing a consensus on the cluster that causes issues (I don't mind any weirdness during times of whacky IO patterns that form the split brain so much as I dislike the way ES is failing to keep track of who thinks who's who in the cluster).\n\nAt this point, it does seem like the Zookeeper based discovery/cluster management plugin is the most reliable way, though I'm not looking forward to setting that up, to be honest. \n", "created_at": "2013-10-22T01:04:52Z" }, { "body": "We haven't hit this, but this report is worrying - is this being worked on? This is the kind of thing that'd make us switch to Zookeeper.\n", "created_at": "2013-11-21T19:25:27Z" }, { "body": "Just wanted to point out to Nik a comment in the other related issue: https://github.com/elasticsearch/elasticsearch/issues/2117#issuecomment-16078340\n\n_\"Unfortunately, this situation can in-fact occur with zen discovery at this point. 
We are working on a fix for this issue which might take a bit until we have something that can bring a solid solution for this.\"_\n\nI wonder what has happened since then and if their findings correspond to my scenario.\n\nFor my clusters, split-brains always occur when a node becomes isolated and then elects itself as master. More visibility (logging) of the election process would be helpful. Re-discovery would be helpful as well, since I rarely see the cluster self-heal despite being in erroneous situations (nodes belonging to two clusters). I am on version 0.90.2, so I am not sure if I am perhaps missing a critical update, although I do scan the issues and commits.\n", "created_at": "2013-11-21T19:40:25Z" }, { "body": "Could you do me a huge favor and _not_ patch this until, like, May or so? I need to finish some other things before the next installation of Jepsen. ;-)\n", "created_at": "2013-12-11T21:16:44Z" }, { "body": "Is there any update on this or a timeline for when it will be fixed?\n", "created_at": "2014-01-02T15:10:35Z" }, { "body": "Ran into this very problem on a 4 node cluster.\n\nNode 1 and Node 2 got disconnected and elected themselves as masters, \nNode 3 and 4 remained followers for both Node 1 and Node 2.\n\nWe do not have the option of running ZK.\n\nDoes anyone know how the election process is governed (I know it runs off the Paxos consensus algorithm)? In layman's terms, does each follower vote exactly once or do they cast multiple votes?\n", "created_at": "2014-01-15T18:38:00Z" }, { "body": "We just ran into this problem on a 41 data node and 5 master node cluster running 0.90.9.\n@kimchy is your recommendation to use zookeeper and not zen?\n", "created_at": "2014-02-08T05:37:44Z" }, { "body": "@amitelad7 \nYou have a few options when running Zen: you can increase the fd timeouts/retries/intervals if your network/node is unresponsive. The other option is to explicitly define master nodes, but in your case, where you have 5 masters, it may get tricky.\n", "created_at": "2014-02-17T04:13:48Z" } ], "number": 2488, "title": "minimum_master_nodes does not prevent split-brain if splits are intersecting" }
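The issue thread above keeps returning to the `(N/2) + 1` quorum rule behind `discovery.zen.minimum_master_nodes` and to the intersecting-split scenario named in the issue title. The sketch below is illustrative only; the class, method, and node names are hypothetical (this is not Elasticsearch's `ElectMasterService`). It shows why a quorum check taken over the nodes each candidate can currently see passes on both sides of an intersecting split in a 3-node cluster:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the quorum rule discussed in the comments above.
public class QuorumSketch {

    // Recommended value for discovery.zen.minimum_master_nodes: a majority of master-eligible nodes.
    static int recommendedMinimumMasterNodes(int masterEligibleNodes) {
        return (masterEligibleNodes / 2) + 1;
    }

    // A candidate may trigger an election only if it can see at least `minimumMasterNodes`
    // master-eligible nodes (counting itself).
    static boolean hasEnoughMasterNodes(List<String> visibleMasterEligibleNodes, int minimumMasterNodes) {
        return visibleMasterEligibleNodes.size() >= minimumMasterNodes;
    }

    public static void main(String[] args) {
        int quorum = recommendedMinimumMasterNodes(3); // 3-node cluster -> quorum of 2

        // A fully isolated node cannot reach quorum, so it will not elect itself master.
        System.out.println(hasEnoughMasterNodes(Arrays.asList("node1"), quorum)); // false

        // The intersecting split from this issue: node1 lost the master (node2) but still
        // sees node3, so it reaches quorum, while node2 also still sees node3 and keeps its
        // own quorum. Both sides pass the check, and node3 ends up in both clusters.
        System.out.println(hasEnoughMasterNodes(Arrays.asList("node1", "node3"), quorum)); // true
        System.out.println(hasEnoughMasterNodes(Arrays.asList("node2", "node3"), quorum)); // true
    }
}
```

This is exactly the gap the PR below targets: counting visible nodes is not enough when the two "halves" of a split share members, so the election and join process itself has to detect and resolve the conflict.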
{ "body": "This PR contains the accumulated work from the feautre/improve_zen branch. Here are the highlights of the changes: \n\n**Testing infra**\n- Networking:\n - all symmetric partitioning\n - dropping packets\n - hard disconnects\n - Jepsen Tests\n- Single node service disruptions:\n - Long GC / Halt\n - Slow cluster state updates\n- Discovery settings\n - Easy to setup unicast with partial host list\n\n**Zen Discovery**\n- Pinging after master loss (no local elects) \n- Fixes the split brain issue: #2488\n- Batching join requests\n- More resilient joining process (wait on a publish from master)\n", "number": 7493, "review_comments": [ { "body": "this `end exclude` comment is wrong copy/paste?\n", "created_at": "2014-08-28T08:50:15Z" }, { "body": "maybe call this `ForcedClusterStateUpdateTask`?\n", "created_at": "2014-08-28T08:51:46Z" }, { "body": "can we maybe do `global(level).size() > 0` ?\n", "created_at": "2014-08-28T08:55:11Z" }, { "body": "I'd appreciate if we can do `== false` it's just so much easier to read\n", "created_at": "2014-08-28T08:55:54Z" }, { "body": "are we sure the `clusterService.state()` can never be null?\n", "created_at": "2014-08-28T08:56:37Z" }, { "body": "I also think instead of all the instanceof checks we should maybe make this an abstract class and add a method\n\n```\npublic boolean force() {\n return true|false;\n}\n```\n\nto ClusterStateUpdateTask maybe?\n", "created_at": "2014-08-28T09:01:15Z" }, { "body": "can we maybe make this a constant? the setting I mean?\n", "created_at": "2014-08-28T09:22:57Z" }, { "body": "can we use a switch / case statement here it's easier to read and 1.7 supports strings\n", "created_at": "2014-08-28T09:24:22Z" }, { "body": "can we make those all constants? and does it make sense to randomize some of them?\n", "created_at": "2014-08-28T09:28:34Z" }, { "body": "should we validate this setting? ie `>= 0`?\n", "created_at": "2014-08-28T09:29:04Z" }, { "body": "can we make the `unknown cluster state version` a constant?\n", "created_at": "2014-08-28T09:29:41Z" }, { "body": "should we do this in a try / catch fashion? and make sure we call it on all of them?\n", "created_at": "2014-08-28T10:44:29Z" }, { "body": "it might be good to have an assertion somewhere that make sure it's not there?\n", "created_at": "2014-08-28T10:45:12Z" }, { "body": "maybe we can turn this around and do \n\n``` Java\nif (electMaster.hasEnoughMasterNodes(possibleMasterNodes)) {\n // lets tie break between discovered nodes\n return electMaster.electMaster(possibleMasterNodes);\n} else {\n logger.trace(\"not enough master nodes [{}]\", possibleMasterNodes);\n return null;\n}\n```\n", "created_at": "2014-08-28T10:46:36Z" }, { "body": "maybe add a logging statement here?\n", "created_at": "2014-08-28T10:49:28Z" }, { "body": "I start to see this a lot, can we have a static helper somewhere?\n", "created_at": "2014-08-28T10:50:36Z" }, { "body": "it doens't matter if that one overflows no? I mean grows larger than `maxPingsFromAnotherMaster`?\n", "created_at": "2014-08-28T10:51:31Z" }, { "body": "can't this be list from the beginning?\n", "created_at": "2014-08-28T10:52:33Z" }, { "body": "can we make this a constant? 
and Ideally not using componentSettings it's so confusing\n", "created_at": "2014-08-28T10:59:08Z" }, { "body": "I am not sure I understand this change here?\n", "created_at": "2014-08-28T11:00:34Z" }, { "body": "maybe it makes sense to put this logic somehwere else to make sure we know it can be null?\n", "created_at": "2014-08-28T11:04:52Z" }, { "body": "I can't see who uses this?\n", "created_at": "2014-08-28T11:05:08Z" }, { "body": "it took me a while to figure out what all these different node lists / sets are maybe you can find a better name for `this.nodes`?\n", "created_at": "2014-08-28T11:10:48Z" }, { "body": "oh is this change BW compatible?\n", "created_at": "2014-08-28T11:12:06Z" }, { "body": "I like the fact that you indicate the criteria (master or not) in the class name. I also don't like the instanceof checks. Perhaps keep the current name and add a method call `runOnlyIfMaster`and the default implementation will return true while the ClusterStateNonMasterUpdateTask will override it to false?\n", "created_at": "2014-08-28T14:34:09Z" }, { "body": "sure. will do.\n", "created_at": "2014-08-28T14:34:34Z" }, { "body": "It can't. many things rely on that fact... \n", "created_at": "2014-08-28T14:36:44Z" }, { "body": "unrelated but will do (will also convert it to a full path)\n", "created_at": "2014-08-28T14:37:08Z" }, { "body": "++\n", "created_at": "2014-08-28T14:37:31Z" }, { "body": "I don't think we should randomize this one. We do want to rarely randomize the `discovery.zen.rejoin_on_master_gone` which is already a constant.\n", "created_at": "2014-08-28T14:39:46Z" } ], "title": "Accumulated improvements to ZenDiscovery" }
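Several of the review comments above discuss replacing `instanceof` checks with an overridable method on the update task; the PR settles on `runOnlyOnMaster()` plus `ClusterStateNonMasterUpdateTask` (see the `ClusterStateUpdateTask` and `InternalClusterService` diffs further down). The following is a condensed usage sketch only, not code from the PR: the class name, task source strings, and the no-op task bodies are made up for illustration.

```java
import org.elasticsearch.cluster.ClusterService;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.ClusterStateNonMasterUpdateTask;
import org.elasticsearch.cluster.ClusterStateUpdateTask;

// Hypothetical caller showing the two task flavours; per the diff below, the cluster
// service skips a task and calls onNoLongerMaster() when the local node is not the
// master and the task's runOnlyOnMaster() returns true.
public class UpdateTaskUsageSketch {

    static void submitTasks(ClusterService clusterService) {
        // Default behaviour: runOnlyOnMaster() returns true, so this task is dropped
        // (onNoLongerMaster is invoked) if the node has stepped down as master.
        clusterService.submitStateUpdateTask("sketch-master-only", new ClusterStateUpdateTask() {
            @Override
            public ClusterState execute(ClusterState currentState) {
                return currentState; // no-op for the sketch
            }

            @Override
            public void onFailure(String source, Throwable t) {
                // handle failure or rejection
            }
        });

        // Tasks that must run even without a local master (for example, applying a cluster
        // state published by another node) extend ClusterStateNonMasterUpdateTask, which
        // overrides runOnlyOnMaster() to return false.
        clusterService.submitStateUpdateTask("sketch-any-node", new ClusterStateNonMasterUpdateTask() {
            @Override
            public ClusterState execute(ClusterState currentState) {
                return currentState; // no-op for the sketch
            }

            @Override
            public void onFailure(String source, Throwable t) {
                // handle failure
            }
        });
    }
}
```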
{ "commits": [ { "message": "[Discovery] lightweight minimum master node recovery\ndon't perform full recovery when minimum master nodes are not met, keep the state around and use it once elected as master" }, { "message": "[Internal] make no master lock an instance var so it can be configured" }, { "message": "[Discovery] add rejoin on master gone flag, defaults to false\n\ndefaults to false since there is still work left to properly make it work" }, { "message": "[Discovery] Make noMasterBlock configurable and added simple test that shows reads do execute (partially) when m_m_n isn't met" }, { "message": "[Discovery] Enable `discovery.zen.rejoin_on_master_gone` setting in DiscoveryWithNetworkFailuresTests only." }, { "message": "[Discovery] Changed the default for the 'rejoin_on_master_gone' option from false to true in zen discovery.\n\nAdded AwaitFix for the FullRollingRestartTests." }, { "message": "[Discovery] If available newly elected master node should take over previous known nodes." }, { "message": "[Discovery] Eagerly clean the routing table of shards that exist on nodes that are not in the latestDiscoNodes list.\n\nOnly the previous master node has been removed, so only shards allocated to that node will get failed.\nThis would have happened anyhow on later on when AllocationService#reroute is invoked (for example when a cluster setting changes or another cluster event),\nbut by cleaning the routing table pro-actively, the stale routing table is fixed sooner and therefor the shards\nthat are not accessible anyhow (because the node these shards were on has left the cluster) will get re-assigned sooner." }, { "message": "Updated to use ClusterBlocks new constructor signature\n\nIntroduced with: 11a3201a092ed6c5d31516ae4b30dbb618ba348c" }, { "message": "[Internal] Do not execute cluster state changes if current node is no longer master\n\nWhen a node steps down from being a master (because, for example, min_master_node is breached), it may still have\ncluster state update tasks queued up. Most (but not all) are tasks that should no longer be executed as the node\nno longer has authority to do so. Other cluster states updates, like electing the current node as master, should be\nexecuted even if the current node is no longer master.\n\nThis commit make sure that, by default, `ClusterStateUpdateTask` is not executed if the node is no longer master. Tasks\nthat should run on non masters are changed to implement a new interface called `ClusterStateNonMasterUpdateTask`\n\nCloses #6230" }, { "message": "[TEST] It may take a little bit before the unlucky node deals with the fact the master left" }, { "message": "[TEST] Added test that verifies data integrity during and after a simulated network split." }, { "message": "[TEST] Make sure there no initializing shards when network partition is simulated" }, { "message": "[TEST] Added test that exposes a shard consistency problem when isolated node(s) rejoin the cluster after network segmentation and when the elected master node ended up on the lesser side of the network segmentation." }, { "message": "[Discovery] Removed METADATA block" }, { "message": "[Discovery] Made 'discovery.zen.rejoin_on_master_gone' setting updatable at runtime." }, { "message": "[Discovery] do not use versions to optimize cluster state copying for a first update from a new master\n\nWe have an optimization which compares routing/meta data version of cluster states and tries to reuse the current object if the versions are equal. 
This can cause rare failures during recovery from a minimum_master_node breach when using the \"new light rejoin\" mechanism and simulated network disconnects. This happens where the current master updates it's state, doesn't manage to broadcast it to other nodes due to the disconnect and then steps down. The new master will start with a previous version and continue to update it. When the old master rejoins, the versions of it's state can equal but the content is different.\n\nAlso improved DiscoveryWithNetworkFailuresTests to simulate this failure (and other improvements)\n\nCloses #6466" }, { "message": "[TEST] Remove 'index.routing.allocation.total_shards_per_node' setting in data consistency test" }, { "message": "[Test] testIsolateMasterAndVerifyClusterStateConsensus didn't wait on initializing shards before comparing cluster states" }, { "message": "[Discovery] Change (Master|Nodes)FaultDetection's connect_on_network_disconnect default to false\n\nThe previous default was true, which means that after a node disconnected event we try to connect to it as an extra validation. This can result in slow detection of network partitions if the extra reconnect times out before failure.\n\nAlso added tests to verify the settings' behaviour" }, { "message": "[Discovery] Improved logging when a join request is not executed because local node is no longer master" }, { "message": "[Discovery] when master is gone, flush all pending cluster states\n\nIf the master FD flags master as gone while there are still pending cluster states, the processing of those cluster states we re-instate that node a master again.\n\nCloses #6526" }, { "message": "[Tests] Added ServiceDisruptionScheme(s) and testAckedIndexing\n\nThis commit adds the notion of ServiceDisruptionScheme allowing for introducing disruptions in our test cluster. This\nabstraction as used in a couple of wrappers around the functionality offered by MockTransportService to simulate various\nnetwork partions. There is also one implementation for causing a node to be slow in processing cluster state updates.\n\nThis new mechnaism is integrated into existing tests DiscoveryWithNetworkFailuresTests.\n\nA new test called testAckedIndexing is added to verify retrieval of documents whose indexing was acked during various disruptions.\n\nCloses #6505" }, { "message": "[TEST] Check if worker if null to prevent NPE on double stopping" }, { "message": "[TEST] Reduced failures in DiscoveryWithNetworkFailuresTests#testAckedIndexing test:\n* waiting time should be long enough depending on the type of the disruption scheme\n* MockTransportService#addUnresponsiveRule if remaining delay is smaller than 0 don't double execute transport logic" }, { "message": "[TEST] Renamed afterDistribution timeout to expectedTimeToHeal\nAccumulate expected shard failures to log later" }, { "message": "[Test] ensureStableCluster failed to pass viaNode parameter correctly\n\nAlso improved timeouts & logs" }, { "message": "[Tests] Disabling testAckedIndexing\n\nThe test is currently unstable and needs some more work" }, { "message": "Fixed compilation issue caused by the lack of a thread pool name" }, { "message": "[TEST] Added test to verify if 'discovery.zen.rejoin_on_master_gone' is updatable at runtime." 
} ], "files": [ { "diff": "@@ -183,7 +183,7 @@\n <version>0.8.13</version>\n <optional>true</optional>\n </dependency>\n- <!-- Lucene spatial -->\n+ <!-- Lucene spatial -->\n \n \n <!-- START: dependencies that are shaded -->\n@@ -483,7 +483,8 @@\n <haltOnFailure>${tests.failfast}</haltOnFailure>\n <uniqueSuiteNames>false</uniqueSuiteNames>\n <systemProperties>\n- <java.io.tmpdir>.</java.io.tmpdir> <!-- we use '.' since this is different per JVM-->\n+ <java.io.tmpdir>.</java.io.tmpdir>\n+ <!-- we use '.' since this is different per JVM-->\n <!-- RandomizedTesting library system properties -->\n <tests.bwc>${tests.bwc}</tests.bwc>\n <tests.bwc.path>${tests.bwc.path}</tests.bwc.path>\n@@ -537,15 +538,15 @@\n <version>1.7</version>\n <executions>\n <execution>\n- <phase>validate</phase>\n- <goals>\n- <goal>run</goal>\n- </goals>\n- <configuration>\n- <target>\n- <echo>Using ${java.runtime.name} ${java.runtime.version} ${java.vendor}</echo>\n- </target>\n- </configuration>\n+ <phase>validate</phase>\n+ <goals>\n+ <goal>run</goal>\n+ </goals>\n+ <configuration>\n+ <target>\n+ <echo>Using ${java.runtime.name} ${java.runtime.version} ${java.vendor}</echo>\n+ </target>\n+ </configuration>\n </execution>\n <execution>\n <id>invalid-patterns</id>\n@@ -573,15 +574,18 @@\n </fileset>\n <map from=\"${basedir}${file.separator}\" to=\"* \"/>\n </pathconvert>\n- <fail if=\"validate.patternsFound\">The following files contain tabs or nocommits:${line.separator}${validate.patternsFound}</fail>\n+ <fail if=\"validate.patternsFound\">The following files contain tabs or\n+ nocommits:${line.separator}${validate.patternsFound}\n+ </fail>\n </target>\n </configuration>\n </execution>\n <execution>\n <id>tests</id>\n <phase>test</phase>\n <configuration>\n- <skip>${skipTests}</skip> <!-- don't run if we skip the tests -->\n+ <skip>${skipTests}</skip>\n+ <!-- don't run if we skip the tests -->\n <failOnError>false</failOnError>\n <target>\n <property name=\"runtime_classpath\" refid=\"maven.runtime.classpath\"/>\n@@ -595,7 +599,7 @@\n </classpath>\n </taskdef>\n <tophints max=\"${tests.topn}\">\n- <file file=\"${basedir}/${execution.hint.file}\" />\n+ <file file=\"${basedir}/${execution.hint.file}\"/>\n </tophints>\n </target>\n </configuration>\n@@ -708,7 +712,7 @@\n <shadedPattern>org.elasticsearch.common.compress</shadedPattern>\n </relocation>\n <relocation>\n- <pattern>com.github.mustachejava</pattern>\n+ <pattern>com.github.mustachejava</pattern>\n <shadedPattern>org.elasticsearch.common.mustache</shadedPattern>\n </relocation>\n <relocation>\n@@ -1219,6 +1223,11 @@\n <bundledSignature>jdk-unsafe</bundledSignature>\n <bundledSignature>jdk-deprecated</bundledSignature>\n </bundledSignatures>\n+ <excludes>\n+ <!-- start exclude for test GC simulation using Thread.suspend -->\n+ <exclude>org/elasticsearch/test/disruption/LongGCDisruption.class</exclude>\n+ <!-- end exclude for GC simulation -->\n+ </excludes>\n <signaturesFiles>\n <signaturesFile>test-signatures.txt</signaturesFile>\n <signaturesFile>all-signatures.txt</signaturesFile>\n@@ -1343,219 +1352,220 @@\n </pluginManagement>\n </build>\n <profiles>\n- <!-- default profile, with randomization setting kicks in -->\n- <profile>\n- <id>default</id>\n- <activation>\n- <activeByDefault>true</activeByDefault>\n- </activation>\n- <build>\n- <plugins>\n- <plugin>\n- <groupId>com.carrotsearch.randomizedtesting</groupId>\n- <artifactId>junit4-maven-plugin</artifactId>\n- <configuration>\n- <argLine>${tests.jvm.argline}</argLine>\n- </configuration>\n- </plugin>\n- 
<plugin>\n- <groupId>com.mycila</groupId>\n- <artifactId>license-maven-plugin</artifactId>\n- <version>2.5</version>\n- <configuration>\n- <header>dev-tools/elasticsearch_license_header.txt</header>\n- <headerDefinitions>\n- <headerDefinition>dev-tools/license_header_definition.xml</headerDefinition>\n- </headerDefinitions>\n- <includes>\n- <include>src/main/java/org/elasticsearch/**/*.java</include>\n- <include>src/test/java/org/elasticsearch/**/*.java</include>\n- </includes>\n- <excludes>\n- <exclude>src/main/java/org/elasticsearch/common/inject/**</exclude>\n- <!-- Guice -->\n- <exclude>src/main/java/org/elasticsearch/common/geo/GeoHashUtils.java</exclude>\n- <exclude>src/main/java/org/elasticsearch/common/lucene/search/XBooleanFilter.java</exclude>\n- <exclude>src/main/java/org/elasticsearch/common/lucene/search/XFilteredQuery.java</exclude>\n- <exclude>src/main/java/org/apache/lucene/queryparser/XSimpleQueryParser.java</exclude>\n- <exclude>src/main/java/org/apache/lucene/**/X*.java</exclude>\n- <!-- t-digest -->\n- <exclude>src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestState.java</exclude>\n- <exclude>src/test/java/org/elasticsearch/search/aggregations/metrics/GroupTree.java</exclude>\n- </excludes>\n- </configuration>\n- <executions>\n- <execution>\n- <phase>compile</phase>\n- <goals>\n- <goal>check</goal>\n- </goals>\n- </execution>\n- </executions>\n- </plugin>\n- </plugins>\n- </build>\n- </profile>\n- <!-- profile for development that doesn't check forbidden-apis, no-commit validation or license headers run with mvn -Pdev -->\n- <profile>\n- <id>dev</id>\n- <properties>\n- <validate.skip>true</validate.skip>\n- </properties>\n- <build>\n- <plugins>\n- <plugin>\n- <groupId>de.thetaphi</groupId>\n- <artifactId>forbiddenapis</artifactId>\n- <version>1.5.1</version>\n- <executions>\n- <execution>\n- <id>check-forbidden-apis</id>\n- <phase>none</phase>\n- </execution>\n- <execution>\n- <id>check-forbidden-test-apis</id>\n- <phase>none</phase>\n- </execution>\n- </executions>\n- </plugin>\n- </plugins>\n- </build>\n- </profile>\n- <!-- license profile, to generate third party license file -->\n- <profile>\n- <id>license</id>\n- <activation>\n- <property>\n- <name>license.generation</name>\n- <value>true</value>\n- </property>\n- </activation>\n- <!-- not including license-maven-plugin is sufficent to expose default license -->\n- </profile>\n- <!-- jacoco coverage profile. 
This will insert -jagent -->\n- <profile>\n- <id>coverage</id>\n- <activation>\n- <property>\n- <name>tests.coverage</name>\n- <value>true</value>\n- </property>\n- </activation>\n- <dependencies>\n- <dependency>\n- <!-- must be on the classpath -->\n- <groupId>org.jacoco</groupId>\n- <artifactId>org.jacoco.agent</artifactId>\n- <classifier>runtime</classifier>\n- <version>0.6.4.201312101107</version>\n- <scope>test</scope>\n- </dependency>\n- </dependencies>\n- <build>\n- <plugins>\n- <plugin>\n- <groupId>org.jacoco</groupId>\n- <artifactId>jacoco-maven-plugin</artifactId>\n- <version>0.6.4.201312101107</version>\n- <executions>\n- <execution>\n- <id>default-prepare-agent</id>\n- <goals>\n- <goal>prepare-agent</goal>\n- </goals>\n- </execution>\n- <execution>\n- <id>default-report</id>\n- <phase>prepare-package</phase>\n- <goals>\n- <goal>report</goal>\n- </goals>\n- </execution>\n- <execution>\n- <id>default-check</id>\n- <goals>\n- <goal>check</goal>\n- </goals>\n- </execution>\n- </executions>\n- <configuration>\n- <excludes>\n- <exclude>jsr166e/**</exclude>\n- <exclude>org/apache/lucene/**</exclude>\n- </excludes>\n- </configuration>\n- </plugin>\n- </plugins>\n- </build>\n- </profile>\n- <profile>\n- <id>static</id>\n- <activation>\n- <property>\n- <name>tests.static</name>\n- <value>true</value>\n- </property>\n- </activation>\n- <build>\n- <plugins>\n- <plugin>\n- <groupId>org.codehaus.mojo</groupId>\n- <artifactId>findbugs-maven-plugin</artifactId>\n- <version>2.5.3</version>\n- </plugin>\n- </plugins>\n- </build>\n- <reporting>\n- <plugins>\n- <plugin>\n- <groupId>org.apache.maven.plugins</groupId>\n- <artifactId>maven-jxr-plugin</artifactId>\n- <version>2.3</version>\n- </plugin>\n- <plugin>\n- <groupId>org.apache.maven.plugins</groupId>\n- <artifactId>maven-pmd-plugin</artifactId>\n- <version>3.0.1</version>\n- <configuration>\n- <rulesets>\n- <ruleset>${basedir}/dev-tools/pmd/custom.xml</ruleset>\n- </rulesets>\n- <targetJdk>1.7</targetJdk>\n- <excludes>\n- <exclude>**/jsr166e/**</exclude>\n- <exclude>**/org/apache/lucene/**</exclude>\n- <exclude>**/org/apache/elasticsearch/common/Base64.java</exclude>\n- </excludes>\n- </configuration>\n- </plugin>\n- <plugin>\n- <groupId>org.codehaus.mojo</groupId>\n- <artifactId>findbugs-maven-plugin</artifactId>\n- <version>2.5.3</version>\n- <configuration>\n- <xmlOutput>true</xmlOutput>\n- <xmlOutputDirectory>target/site</xmlOutputDirectory>\n- <fork>true</fork>\n- <maxHeap>2048</maxHeap>\n- <timeout>1800000</timeout>\n- <onlyAnalyze>org.elasticsearch.-</onlyAnalyze>\n- </configuration>\n- </plugin>\n- <plugin>\n- <groupId>org.apache.maven.plugins</groupId>\n- <artifactId>maven-project-info-reports-plugin</artifactId>\n- <version>2.7</version>\n- <reportSets>\n- <reportSet>\n- <reports>\n- <report>index</report>\n- </reports>\n- </reportSet>\n- </reportSets>\n- </plugin>\n- </plugins>\n- </reporting>\n- </profile>\n+ <!-- default profile, with randomization setting kicks in -->\n+ <profile>\n+ <id>default</id>\n+ <activation>\n+ <activeByDefault>true</activeByDefault>\n+ </activation>\n+ <build>\n+ <plugins>\n+ <plugin>\n+ <groupId>com.carrotsearch.randomizedtesting</groupId>\n+ <artifactId>junit4-maven-plugin</artifactId>\n+ <configuration>\n+ <argLine>${tests.jvm.argline}</argLine>\n+ </configuration>\n+ </plugin>\n+ <plugin>\n+ <groupId>com.mycila</groupId>\n+ <artifactId>license-maven-plugin</artifactId>\n+ <version>2.5</version>\n+ <configuration>\n+ <header>dev-tools/elasticsearch_license_header.txt</header>\n+ 
<headerDefinitions>\n+ <headerDefinition>dev-tools/license_header_definition.xml</headerDefinition>\n+ </headerDefinitions>\n+ <includes>\n+ <include>src/main/java/org/elasticsearch/**/*.java</include>\n+ <include>src/test/java/org/elasticsearch/**/*.java</include>\n+ </includes>\n+ <excludes>\n+ <exclude>src/main/java/org/elasticsearch/common/inject/**</exclude>\n+ <!-- Guice -->\n+ <exclude>src/main/java/org/elasticsearch/common/geo/GeoHashUtils.java</exclude>\n+ <exclude>src/main/java/org/elasticsearch/common/lucene/search/XBooleanFilter.java</exclude>\n+ <exclude>src/main/java/org/elasticsearch/common/lucene/search/XFilteredQuery.java</exclude>\n+ <exclude>src/main/java/org/apache/lucene/queryparser/XSimpleQueryParser.java</exclude>\n+ <exclude>src/main/java/org/apache/lucene/**/X*.java</exclude>\n+ <!-- t-digest -->\n+ <exclude>src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestState.java\n+ </exclude>\n+ <exclude>src/test/java/org/elasticsearch/search/aggregations/metrics/GroupTree.java</exclude>\n+ </excludes>\n+ </configuration>\n+ <executions>\n+ <execution>\n+ <phase>compile</phase>\n+ <goals>\n+ <goal>check</goal>\n+ </goals>\n+ </execution>\n+ </executions>\n+ </plugin>\n+ </plugins>\n+ </build>\n+ </profile>\n+ <!-- profile for development that doesn't check forbidden-apis, no-commit validation or license headers run with mvn -Pdev -->\n+ <profile>\n+ <id>dev</id>\n+ <properties>\n+ <validate.skip>true</validate.skip>\n+ </properties>\n+ <build>\n+ <plugins>\n+ <plugin>\n+ <groupId>de.thetaphi</groupId>\n+ <artifactId>forbiddenapis</artifactId>\n+ <version>1.5.1</version>\n+ <executions>\n+ <execution>\n+ <id>check-forbidden-apis</id>\n+ <phase>none</phase>\n+ </execution>\n+ <execution>\n+ <id>check-forbidden-test-apis</id>\n+ <phase>none</phase>\n+ </execution>\n+ </executions>\n+ </plugin>\n+ </plugins>\n+ </build>\n+ </profile>\n+ <!-- license profile, to generate third party license file -->\n+ <profile>\n+ <id>license</id>\n+ <activation>\n+ <property>\n+ <name>license.generation</name>\n+ <value>true</value>\n+ </property>\n+ </activation>\n+ <!-- not including license-maven-plugin is sufficent to expose default license -->\n+ </profile>\n+ <!-- jacoco coverage profile. 
This will insert -jagent -->\n+ <profile>\n+ <id>coverage</id>\n+ <activation>\n+ <property>\n+ <name>tests.coverage</name>\n+ <value>true</value>\n+ </property>\n+ </activation>\n+ <dependencies>\n+ <dependency>\n+ <!-- must be on the classpath -->\n+ <groupId>org.jacoco</groupId>\n+ <artifactId>org.jacoco.agent</artifactId>\n+ <classifier>runtime</classifier>\n+ <version>0.6.4.201312101107</version>\n+ <scope>test</scope>\n+ </dependency>\n+ </dependencies>\n+ <build>\n+ <plugins>\n+ <plugin>\n+ <groupId>org.jacoco</groupId>\n+ <artifactId>jacoco-maven-plugin</artifactId>\n+ <version>0.6.4.201312101107</version>\n+ <executions>\n+ <execution>\n+ <id>default-prepare-agent</id>\n+ <goals>\n+ <goal>prepare-agent</goal>\n+ </goals>\n+ </execution>\n+ <execution>\n+ <id>default-report</id>\n+ <phase>prepare-package</phase>\n+ <goals>\n+ <goal>report</goal>\n+ </goals>\n+ </execution>\n+ <execution>\n+ <id>default-check</id>\n+ <goals>\n+ <goal>check</goal>\n+ </goals>\n+ </execution>\n+ </executions>\n+ <configuration>\n+ <excludes>\n+ <exclude>jsr166e/**</exclude>\n+ <exclude>org/apache/lucene/**</exclude>\n+ </excludes>\n+ </configuration>\n+ </plugin>\n+ </plugins>\n+ </build>\n+ </profile>\n+ <profile>\n+ <id>static</id>\n+ <activation>\n+ <property>\n+ <name>tests.static</name>\n+ <value>true</value>\n+ </property>\n+ </activation>\n+ <build>\n+ <plugins>\n+ <plugin>\n+ <groupId>org.codehaus.mojo</groupId>\n+ <artifactId>findbugs-maven-plugin</artifactId>\n+ <version>2.5.3</version>\n+ </plugin>\n+ </plugins>\n+ </build>\n+ <reporting>\n+ <plugins>\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-jxr-plugin</artifactId>\n+ <version>2.3</version>\n+ </plugin>\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-pmd-plugin</artifactId>\n+ <version>3.0.1</version>\n+ <configuration>\n+ <rulesets>\n+ <ruleset>${basedir}/dev-tools/pmd/custom.xml</ruleset>\n+ </rulesets>\n+ <targetJdk>1.7</targetJdk>\n+ <excludes>\n+ <exclude>**/jsr166e/**</exclude>\n+ <exclude>**/org/apache/lucene/**</exclude>\n+ <exclude>**/org/apache/elasticsearch/common/Base64.java</exclude>\n+ </excludes>\n+ </configuration>\n+ </plugin>\n+ <plugin>\n+ <groupId>org.codehaus.mojo</groupId>\n+ <artifactId>findbugs-maven-plugin</artifactId>\n+ <version>2.5.3</version>\n+ <configuration>\n+ <xmlOutput>true</xmlOutput>\n+ <xmlOutputDirectory>target/site</xmlOutputDirectory>\n+ <fork>true</fork>\n+ <maxHeap>2048</maxHeap>\n+ <timeout>1800000</timeout>\n+ <onlyAnalyze>org.elasticsearch.-</onlyAnalyze>\n+ </configuration>\n+ </plugin>\n+ <plugin>\n+ <groupId>org.apache.maven.plugins</groupId>\n+ <artifactId>maven-project-info-reports-plugin</artifactId>\n+ <version>2.7</version>\n+ <reportSets>\n+ <reportSet>\n+ <reports>\n+ <report>index</report>\n+ </reports>\n+ </reportSet>\n+ </reportSets>\n+ </plugin>\n+ </plugins>\n+ </reporting>\n+ </profile>\n </profiles>\n </project>", "filename": "pom.xml", "status": "modified" }, { "diff": "@@ -137,6 +137,12 @@ protected ClusterUpdateSettingsResponse newResponse(boolean acknowledged) {\n return new ClusterUpdateSettingsResponse(updateSettingsAcked && acknowledged, transientUpdates.build(), persistentUpdates.build());\n }\n \n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ logger.debug(\"failed to preform reroute after cluster settings were updated - current node is no longer a master\");\n+ listener.onResponse(new ClusterUpdateSettingsResponse(updateSettingsAcked, transientUpdates.build(), 
persistentUpdates.build()));\n+ }\n+\n @Override\n public void onFailure(String source, Throwable t) {\n //if the reroute fails we only log", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -173,12 +173,12 @@ protected GroupShardsIterator shards(ClusterState state, RecoveryRequest request\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, RecoveryRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.READ);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, RecoveryRequest request, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.READ, concreteIndices);\n }\n \n static class ShardRecoveryRequest extends BroadcastShardOperationRequest {", "filename": "src/main/java/org/elasticsearch/action/admin/indices/recovery/TransportRecoveryAction.java", "status": "modified" }, { "diff": "@@ -66,11 +66,11 @@ public class BenchmarkService extends AbstractLifecycleComponent<BenchmarkServic\n /**\n * Constructs a service component for running benchmarks\n *\n- * @param settings Settings\n- * @param clusterService Cluster service\n- * @param threadPool Thread pool\n- * @param client Client\n- * @param transportService Transport service\n+ * @param settings Settings\n+ * @param clusterService Cluster service\n+ * @param threadPool Thread pool\n+ * @param client Client\n+ * @param transportService Transport service\n */\n @Inject\n public BenchmarkService(Settings settings, ClusterService clusterService, ThreadPool threadPool,\n@@ -86,19 +86,22 @@ public BenchmarkService(Settings settings, ClusterService clusterService, Thread\n }\n \n @Override\n- protected void doStart() throws ElasticsearchException { }\n+ protected void doStart() throws ElasticsearchException {\n+ }\n \n @Override\n- protected void doStop() throws ElasticsearchException { }\n+ protected void doStop() throws ElasticsearchException {\n+ }\n \n @Override\n- protected void doClose() throws ElasticsearchException { }\n+ protected void doClose() throws ElasticsearchException {\n+ }\n \n /**\n * Lists actively running benchmarks on the cluster\n *\n- * @param request Status request\n- * @param listener Response listener\n+ * @param request Status request\n+ * @param listener Response listener\n */\n public void listBenchmarks(final BenchmarkStatusRequest request, final ActionListener<BenchmarkStatusResponse> listener) {\n \n@@ -171,8 +174,8 @@ public void onFailure(Throwable t) {\n /**\n * Executes benchmarks on the cluster\n *\n- * @param request Benchmark request\n- * @param listener Response listener\n+ * @param request Benchmark request\n+ * @param listener Response listener\n */\n public void startBenchmark(final BenchmarkRequest request, final ActionListener<BenchmarkResponse> listener) {\n \n@@ -228,7 +231,7 @@ public void onFailure(Throwable t) {\n listener.onFailure(t);\n }\n }, (benchmarkResponse.state() != BenchmarkResponse.State.ABORTED) &&\n- (benchmarkResponse.state() != BenchmarkResponse.State.FAILED)));\n+ (benchmarkResponse.state() != BenchmarkResponse.State.FAILED)));\n }\n \n private final boolean isBenchmarkNode(DiscoveryNode node) {\n@@ -403,6 +406,7 @@ protected CountDownAsyncHandler(int size) {\n 
}\n \n public abstract T newInstance();\n+\n protected abstract void sendResponse();\n \n @Override\n@@ -593,7 +597,7 @@ public ClusterState execute(ClusterState currentState) {\n \n if (bmd != null) {\n for (BenchmarkMetaData.Entry entry : bmd.entries()) {\n- if (request.benchmarkName().equals(entry.benchmarkId())){\n+ if (request.benchmarkName().equals(entry.benchmarkId())) {\n if (entry.state() != BenchmarkMetaData.State.SUCCESS && entry.state() != BenchmarkMetaData.State.FAILED) {\n throw new ElasticsearchException(\"A benchmark with ID [\" + request.benchmarkName() + \"] is already running in state [\" + entry.state() + \"]\");\n }\n@@ -648,7 +652,7 @@ public FinishBenchmarkTask(String reason, String benchmarkId, BenchmarkStateList\n @Override\n protected BenchmarkMetaData.Entry process(BenchmarkMetaData.Entry entry) {\n BenchmarkMetaData.State state = entry.state();\n- assert state == BenchmarkMetaData.State.STARTED || state == BenchmarkMetaData.State.ABORTED : \"Expected state: STARTED or ABORTED but was: \" + entry.state();\n+ assert state == BenchmarkMetaData.State.STARTED || state == BenchmarkMetaData.State.ABORTED : \"Expected state: STARTED or ABORTED but was: \" + entry.state();\n if (success) {\n return new BenchmarkMetaData.Entry(entry, BenchmarkMetaData.State.SUCCESS);\n } else {\n@@ -661,7 +665,7 @@ public final class AbortBenchmarkTask extends UpdateBenchmarkStateTask {\n private final String[] patterns;\n \n public AbortBenchmarkTask(String[] patterns, BenchmarkStateListener listener) {\n- super(\"abort_benchmark\", null , listener);\n+ super(\"abort_benchmark\", null, listener);\n this.patterns = patterns;\n }\n \n@@ -675,7 +679,7 @@ protected BenchmarkMetaData.Entry process(BenchmarkMetaData.Entry entry) {\n }\n }\n \n- public abstract class UpdateBenchmarkStateTask implements ProcessedClusterStateUpdateTask {\n+ public abstract class UpdateBenchmarkStateTask extends ProcessedClusterStateUpdateTask {\n \n private final String reason;\n protected final String benchmarkId;\n@@ -702,7 +706,7 @@ public ClusterState execute(ClusterState currentState) {\n ImmutableList.Builder<BenchmarkMetaData.Entry> builder = new ImmutableList.Builder<BenchmarkMetaData.Entry>();\n for (BenchmarkMetaData.Entry e : bmd.entries()) {\n if (benchmarkId == null || match(e)) {\n- e = process(e) ;\n+ e = process(e);\n instances.add(e);\n }\n // Don't keep finished benchmarks around in cluster state\n@@ -741,7 +745,7 @@ public String reason() {\n }\n }\n \n- public abstract class BenchmarkStateChangeAction<R extends MasterNodeOperationRequest> implements TimeoutClusterStateUpdateTask {\n+ public abstract class BenchmarkStateChangeAction<R extends MasterNodeOperationRequest> extends TimeoutClusterStateUpdateTask {\n protected final R request;\n \n public BenchmarkStateChangeAction(R request) {", "filename": "src/main/java/org/elasticsearch/action/bench/BenchmarkService.java", "status": "modified" }, { "diff": "@@ -28,7 +28,7 @@\n * An extension interface to {@link ClusterStateUpdateTask} that allows to be notified when\n * all the nodes have acknowledged a cluster state update request\n */\n-public abstract class AckedClusterStateUpdateTask<Response> implements TimeoutClusterStateUpdateTask {\n+public abstract class AckedClusterStateUpdateTask<Response> extends TimeoutClusterStateUpdateTask {\n \n private final ActionListener<Response> listener;\n private final AckedRequest request;\n@@ -40,6 +40,7 @@ protected AckedClusterStateUpdateTask(AckedRequest request, ActionListener<Respo\n \n /**\n * 
Called to determine which nodes the acknowledgement is expected from\n+ *\n * @param discoveryNode a node\n * @return true if the node is expected to send ack back, false otherwise\n */\n@@ -50,6 +51,7 @@ public boolean mustAck(DiscoveryNode discoveryNode) {\n /**\n * Called once all the nodes have acknowledged the cluster state update request. Must be\n * very lightweight execution, since it gets executed on the cluster service thread.\n+ *\n * @param t optional error that might have been thrown\n */\n public void onAllNodesAcked(@Nullable Throwable t) {", "filename": "src/main/java/org/elasticsearch/cluster/AckedClusterStateUpdateTask.java", "status": "modified" }, { "diff": "@@ -110,4 +110,5 @@ public interface ClusterService extends LifecycleComponent<ClusterService> {\n * Returns the tasks that are pending.\n */\n List<PendingClusterTask> pendingTasks();\n+\n }", "filename": "src/main/java/org/elasticsearch/cluster/ClusterService.java", "status": "modified" }, { "diff": "@@ -115,6 +115,8 @@ public static <T extends Custom> Custom.Factory<T> lookupFactorySafe(String type\n }\n \n \n+ public static final long UNKNOWN_VERSION = -1;\n+\n private final long version;\n \n private final RoutingTable routingTable;", "filename": "src/main/java/org/elasticsearch/cluster/ClusterState.java", "status": "modified" }, { "diff": "@@ -0,0 +1,32 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster;\n+\n+/**\n+ * This is a marker interface to indicate that the task should be executed\n+ * even if the current node is not a master.\n+ */\n+public abstract class ClusterStateNonMasterUpdateTask extends ClusterStateUpdateTask {\n+\n+ @Override\n+ public boolean runOnlyOnMaster() {\n+ return false;\n+ }\n+}", "filename": "src/main/java/org/elasticsearch/cluster/ClusterStateNonMasterUpdateTask.java", "status": "added" }, { "diff": "@@ -19,19 +19,37 @@\n \n package org.elasticsearch.cluster;\n \n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n+\n /**\n * A task that can update the cluster state.\n */\n-public interface ClusterStateUpdateTask {\n+abstract public class ClusterStateUpdateTask {\n \n /**\n * Update the cluster state based on the current state. 
Return the *same instance* if no state\n * should be changed.\n */\n- ClusterState execute(ClusterState currentState) throws Exception;\n+ abstract public ClusterState execute(ClusterState currentState) throws Exception;\n \n /**\n * A callback called when execute fails.\n */\n- void onFailure(String source, Throwable t);\n+ abstract public void onFailure(String source, @Nullable Throwable t);\n+\n+\n+ /**\n+ * indicates whether this task should only run if current node is master\n+ */\n+ public boolean runOnlyOnMaster() {\n+ return true;\n+ }\n+\n+ /**\n+ * called when the task was rejected because the local node is no longer master\n+ */\n+ public void onNoLongerMaster(String source) {\n+ onFailure(source, new EsRejectedExecutionException(\"no longer master. source: [\" + source + \"]\"));\n+ }\n }", "filename": "src/main/java/org/elasticsearch/cluster/ClusterStateUpdateTask.java", "status": "modified" }, { "diff": "@@ -0,0 +1,31 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.cluster;\n+\n+/**\n+ * A combination between {@link org.elasticsearch.cluster.ProcessedClusterStateUpdateTask} and\n+ * {@link org.elasticsearch.cluster.ClusterStateNonMasterUpdateTask} to allow easy creation of anonymous classes\n+ */\n+abstract public class ProcessedClusterStateNonMasterUpdateTask extends ProcessedClusterStateUpdateTask {\n+\n+ @Override\n+ public boolean runOnlyOnMaster() {\n+ return false;\n+ }\n+}", "filename": "src/main/java/org/elasticsearch/cluster/ProcessedClusterStateNonMasterUpdateTask.java", "status": "added" }, { "diff": "@@ -23,11 +23,11 @@\n * An extension interface to {@link ClusterStateUpdateTask} that allows to be notified when\n * the cluster state update has been processed.\n */\n-public interface ProcessedClusterStateUpdateTask extends ClusterStateUpdateTask {\n+public abstract class ProcessedClusterStateUpdateTask extends ClusterStateUpdateTask {\n \n /**\n * Called when the result of the {@link #execute(ClusterState)} have been processed\n * properly by all listeners.\n */\n- void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState);\n+ public abstract void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState);\n }", "filename": "src/main/java/org/elasticsearch/cluster/ProcessedClusterStateUpdateTask.java", "status": "modified" }, { "diff": "@@ -25,11 +25,11 @@\n * An extension interface to {@link org.elasticsearch.cluster.ClusterStateUpdateTask} that allows to associate\n * a timeout.\n */\n-public interface TimeoutClusterStateUpdateTask extends ProcessedClusterStateUpdateTask {\n+abstract public class TimeoutClusterStateUpdateTask extends ProcessedClusterStateUpdateTask {\n \n /**\n * If the cluster state update task wasn't 
processed by the provided timeout, call\n * {@link #onFailure(String, Throwable)}\n */\n- TimeValue timeout();\n+ abstract public TimeValue timeout();\n }", "filename": "src/main/java/org/elasticsearch/cluster/TimeoutClusterStateUpdateTask.java", "status": "modified" }, { "diff": "@@ -108,6 +108,19 @@ public boolean hasGlobalBlock(ClusterBlock block) {\n return global.contains(block);\n }\n \n+ public boolean hasGlobalBlock(int blockId) {\n+ for (ClusterBlock clusterBlock : global) {\n+ if (clusterBlock.id() == blockId) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ }\n+\n+ public boolean hasGlobalBlock(ClusterBlockLevel level) {\n+ return global(level).size() > 0;\n+ }\n+\n /**\n * Is there a global block with the provided status?\n */", "filename": "src/main/java/org/elasticsearch/cluster/block/ClusterBlocks.java", "status": "modified" }, { "diff": "@@ -149,10 +149,15 @@ public ClusterState execute(ClusterState currentState) {\n return ClusterState.builder(currentState).routingResult(routingResult).build();\n }\n \n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ // no biggie\n+ }\n+\n @Override\n public void onFailure(String source, Throwable t) {\n- ClusterState state = clusterService.state();\n- logger.error(\"unexpected failure during [{}], current state:\\n{}\", t, source, state.prettyPrint());\n+ ClusterState state = clusterService.state();\n+ logger.error(\"unexpected failure during [{}], current state:\\n{}\", t, source, state.prettyPrint());\n }\n });\n routingTableDirty = false;", "filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingService.java", "status": "modified" }, { "diff": "@@ -84,7 +84,7 @@ public class InternalClusterService extends AbstractLifecycleComponent<ClusterSe\n \n private volatile ClusterState clusterState;\n \n- private final ClusterBlocks.Builder initialBlocks = ClusterBlocks.builder().addGlobalBlock(Discovery.NO_MASTER_BLOCK);\n+ private final ClusterBlocks.Builder initialBlocks;\n \n private volatile ScheduledFuture reconnectToNodes;\n \n@@ -104,6 +104,8 @@ public InternalClusterService(Settings settings, DiscoveryService discoveryServi\n this.reconnectInterval = componentSettings.getAsTime(\"reconnect_interval\", TimeValue.timeValueSeconds(10));\n \n localNodeMasterListeners = new LocalNodeMasterListeners(threadPool);\n+\n+ initialBlocks = ClusterBlocks.builder().addGlobalBlock(discoveryService.getNoMasterBlock());\n }\n \n public NodeSettingsService settingsService() {\n@@ -134,7 +136,7 @@ protected void doStart() throws ElasticsearchException {\n discoveryService.addLifecycleListener(new LifecycleListener() {\n @Override\n public void afterStart() {\n- submitStateUpdateTask(\"update local node\", Priority.IMMEDIATE, new ClusterStateUpdateTask() {\n+ submitStateUpdateTask(\"update local node\", Priority.IMMEDIATE, new ClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) throws Exception {\n return ClusterState.builder(currentState)\n@@ -144,7 +146,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n \n @Override\n public void onFailure(String source, Throwable t) {\n- logger.warn(\"failed ot update local node\", t);\n+ logger.warn(\"failed to update local node\", t);\n }\n });\n }\n@@ -323,6 +325,11 @@ public void run() {\n }\n logger.debug(\"processing [{}]: execute\", source);\n ClusterState previousClusterState = clusterState;\n+ if (!previousClusterState.nodes().localNodeMaster() && updateTask.runOnlyOnMaster()) {\n+ logger.debug(\"failing 
[{}]: local node is no longer master\", source);\n+ updateTask.onNoLongerMaster(source);\n+ return;\n+ }\n ClusterState newClusterState;\n try {\n newClusterState = updateTask.execute(previousClusterState);\n@@ -379,20 +386,6 @@ public void run() {\n }\n }\n }\n- } else {\n- if (previousClusterState.blocks().hasGlobalBlock(Discovery.NO_MASTER_BLOCK) && !newClusterState.blocks().hasGlobalBlock(Discovery.NO_MASTER_BLOCK)) {\n- // force an update, its a fresh update from the master as we transition from a start of not having a master to having one\n- // have a fresh instances of routing and metadata to remove the chance that version might be the same\n- Builder builder = ClusterState.builder(newClusterState);\n- builder.routingTable(RoutingTable.builder(newClusterState.routingTable()));\n- builder.metaData(MetaData.builder(newClusterState.metaData()));\n- newClusterState = builder.build();\n- logger.debug(\"got first state from fresh master [{}]\", newClusterState.nodes().masterNodeId());\n- } else if (newClusterState.version() < previousClusterState.version()) {\n- // we got a cluster state with older version, when we are *not* the master, let it in since it might be valid\n- // we check on version where applicable, like at ZenDiscovery#handleNewClusterStateFromMaster\n- logger.debug(\"got smaller cluster state when not master [\" + newClusterState.version() + \"<\" + previousClusterState.version() + \"] from source [\" + source + \"]\");\n- }\n }\n \n newClusterState.status(ClusterState.ClusterStateStatus.BEING_APPLIED);\n@@ -720,5 +713,4 @@ public void onTimeout() {\n }\n }\n }\n-\n }\n\\ No newline at end of file", "filename": "src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java", "status": "modified" }, { "diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.cluster.routing.allocation.decider.*;\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.discovery.DiscoverySettings;\n+import org.elasticsearch.discovery.zen.ZenDiscovery;\n import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n import org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService;\n import org.elasticsearch.indices.cache.filter.IndicesFilterCache;\n@@ -57,6 +58,8 @@ public ClusterDynamicSettingsModule() {\n clusterDynamicSettings.addDynamicSetting(DisableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_DISABLE_ALLOCATION);\n clusterDynamicSettings.addDynamicSetting(DisableAllocationDecider.CLUSTER_ROUTING_ALLOCATION_DISABLE_REPLICA_ALLOCATION);\n clusterDynamicSettings.addDynamicSetting(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES, Validator.INTEGER);\n+ clusterDynamicSettings.addDynamicSetting(ZenDiscovery.SETTING_REJOIN_ON_MASTER_GONE, Validator.BOOLEAN);\n+ clusterDynamicSettings.addDynamicSetting(DiscoverySettings.NO_MASTER_BLOCK);\n clusterDynamicSettings.addDynamicSetting(FilterAllocationDecider.CLUSTER_ROUTING_INCLUDE_GROUP + \"*\");\n clusterDynamicSettings.addDynamicSetting(FilterAllocationDecider.CLUSTER_ROUTING_EXCLUDE_GROUP + \"*\");\n clusterDynamicSettings.addDynamicSetting(FilterAllocationDecider.CLUSTER_ROUTING_REQUIRE_GROUP + \"*\");", "filename": "src/main/java/org/elasticsearch/cluster/settings/ClusterDynamicSettingsModule.java", "status": "modified" }, { "diff": "@@ -36,8 +36,6 @@\n */\n public interface Discovery extends LifecycleComponent<Discovery> {\n \n- final ClusterBlock NO_MASTER_BLOCK = new ClusterBlock(2, \"no master\", true, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL);\n-\n DiscoveryNode 
localNode();\n \n void addListener(InitialStateDiscoveryListener listener);", "filename": "src/main/java/org/elasticsearch/discovery/Discovery.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchTimeoutException;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n@@ -38,6 +39,8 @@\n */\n public class DiscoveryService extends AbstractLifecycleComponent<DiscoveryService> {\n \n+ public static final String SETTING_INITIAL_STATE_TIMEOUT = \"discovery.initial_state_timeout\";\n+\n private static class InitialStateListener implements InitialStateDiscoveryListener {\n \n private final CountDownLatch latch = new CountDownLatch(1);\n@@ -60,12 +63,18 @@ public boolean waitForInitialState(TimeValue timeValue) throws InterruptedExcept\n private final TimeValue initialStateTimeout;\n private final Discovery discovery;\n private InitialStateListener initialStateListener;\n+ private final DiscoverySettings discoverySettings;\n \n @Inject\n- public DiscoveryService(Settings settings, Discovery discovery) {\n+ public DiscoveryService(Settings settings, DiscoverySettings discoverySettings, Discovery discovery) {\n super(settings);\n+ this.discoverySettings = discoverySettings;\n this.discovery = discovery;\n- this.initialStateTimeout = componentSettings.getAsTime(\"initial_state_timeout\", TimeValue.timeValueSeconds(30));\n+ this.initialStateTimeout = settings.getAsTime(SETTING_INITIAL_STATE_TIMEOUT, TimeValue.timeValueSeconds(30));\n+ }\n+\n+ public ClusterBlock getNoMasterBlock() {\n+ return discoverySettings.getNoMasterBlock();\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/discovery/DiscoveryService.java", "status": "modified" }, { "diff": "@@ -19,27 +19,42 @@\n \n package org.elasticsearch.discovery;\n \n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.cluster.block.ClusterBlock;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.node.settings.NodeSettingsService;\n+import org.elasticsearch.rest.RestStatus;\n+\n+import java.util.EnumSet;\n \n /**\n * Exposes common discovery settings that may be supported by all the different discovery implementations\n */\n public class DiscoverySettings extends AbstractComponent {\n \n public static final String PUBLISH_TIMEOUT = \"discovery.zen.publish_timeout\";\n+ public static final String NO_MASTER_BLOCK = \"discovery.zen.no_master_block\";\n \n public static final TimeValue DEFAULT_PUBLISH_TIMEOUT = TimeValue.timeValueSeconds(30);\n+ public static final String DEFAULT_NO_MASTER_BLOCK = \"write\";\n+ public final static int NO_MASTER_BLOCK_ID = 2;\n+\n+ public final static ClusterBlock NO_MASTER_BLOCK_ALL = new ClusterBlock(NO_MASTER_BLOCK_ID, \"no master\", true, true, RestStatus.SERVICE_UNAVAILABLE, ClusterBlockLevel.ALL);\n+ public final static ClusterBlock NO_MASTER_BLOCK_WRITES = new ClusterBlock(NO_MASTER_BLOCK_ID, \"no master\", true, false, RestStatus.SERVICE_UNAVAILABLE, EnumSet.of(ClusterBlockLevel.WRITE, 
ClusterBlockLevel.METADATA));\n \n+ private volatile ClusterBlock noMasterBlock;\n private volatile TimeValue publishTimeout = DEFAULT_PUBLISH_TIMEOUT;\n \n @Inject\n public DiscoverySettings(Settings settings, NodeSettingsService nodeSettingsService) {\n super(settings);\n nodeSettingsService.addListener(new ApplySettings());\n+ this.noMasterBlock = parseNoMasterBlock(settings.get(NO_MASTER_BLOCK, DEFAULT_NO_MASTER_BLOCK));\n+ this.publishTimeout = settings.getAsTime(PUBLISH_TIMEOUT, publishTimeout);\n }\n \n /**\n@@ -49,6 +64,10 @@ public TimeValue getPublishTimeout() {\n return publishTimeout;\n }\n \n+ public ClusterBlock getNoMasterBlock() {\n+ return noMasterBlock;\n+ }\n+\n private class ApplySettings implements NodeSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n@@ -59,6 +78,24 @@ public void onRefreshSettings(Settings settings) {\n publishTimeout = newPublishTimeout;\n }\n }\n+ String newNoMasterBlockValue = settings.get(NO_MASTER_BLOCK);\n+ if (newNoMasterBlockValue != null) {\n+ ClusterBlock newNoMasterBlock = parseNoMasterBlock(newNoMasterBlockValue);\n+ if (newNoMasterBlock != noMasterBlock) {\n+ noMasterBlock = newNoMasterBlock;\n+ }\n+ }\n+ }\n+ }\n+\n+ private ClusterBlock parseNoMasterBlock(String value) {\n+ switch (value) {\n+ case \"all\":\n+ return NO_MASTER_BLOCK_ALL;\n+ case \"write\":\n+ return NO_MASTER_BLOCK_WRITES;\n+ default:\n+ throw new ElasticsearchIllegalArgumentException(\"invalid master block [\" + value + \"]\");\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/discovery/DiscoverySettings.java", "status": "modified" }, { "diff": "@@ -58,6 +58,7 @@ public class LocalDiscovery extends AbstractLifecycleComponent<Discovery> implem\n \n private final TransportService transportService;\n private final ClusterService clusterService;\n+ private final DiscoveryService discoveryService;\n private final DiscoveryNodeService discoveryNodeService;\n private AllocationService allocationService;\n private final ClusterName clusterName;\n@@ -77,14 +78,15 @@ public class LocalDiscovery extends AbstractLifecycleComponent<Discovery> implem\n \n @Inject\n public LocalDiscovery(Settings settings, ClusterName clusterName, TransportService transportService, ClusterService clusterService,\n- DiscoveryNodeService discoveryNodeService, Version version, DiscoverySettings discoverySettings) {\n+ DiscoveryNodeService discoveryNodeService, Version version, DiscoverySettings discoverySettings, DiscoveryService discoveryService) {\n super(settings);\n this.clusterName = clusterName;\n this.clusterService = clusterService;\n this.transportService = transportService;\n this.discoveryNodeService = discoveryNodeService;\n this.version = version;\n this.discoverySettings = discoverySettings;\n+ this.discoveryService = discoveryService;\n }\n \n @Override\n@@ -123,7 +125,7 @@ protected void doStart() throws ElasticsearchException {\n // we are the first master (and the master)\n master = true;\n final LocalDiscovery master = firstMaster;\n- clusterService.submitStateUpdateTask(\"local-disco-initial_connect(master)\", new ProcessedClusterStateUpdateTask() {\n+ clusterService.submitStateUpdateTask(\"local-disco-initial_connect(master)\", new ProcessedClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder();\n@@ -132,7 +134,7 @@ public ClusterState execute(ClusterState currentState) {\n }\n 
nodesBuilder.localNodeId(master.localNode().id()).masterNodeId(master.localNode().id());\n // remove the NO_MASTER block in this case\n- ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()).removeGlobalBlock(Discovery.NO_MASTER_BLOCK);\n+ ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()).removeGlobalBlock(discoverySettings.getNoMasterBlock());\n return ClusterState.builder(currentState).nodes(nodesBuilder).blocks(blocks).build();\n }\n \n@@ -149,7 +151,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n } else if (firstMaster != null) {\n // update as fast as we can the local node state with the new metadata (so we create indices for example)\n final ClusterState masterState = firstMaster.clusterService.state();\n- clusterService.submitStateUpdateTask(\"local-disco(detected_master)\", new ClusterStateUpdateTask() {\n+ clusterService.submitStateUpdateTask(\"local-disco(detected_master)\", new ClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n // make sure we have the local node id set, we might need it as a result of the new metadata\n@@ -165,7 +167,7 @@ public void onFailure(String source, Throwable t) {\n \n // tell the master to send the fact that we are here\n final LocalDiscovery master = firstMaster;\n- firstMaster.clusterService.submitStateUpdateTask(\"local-disco-receive(from node[\" + localNode + \"])\", new ProcessedClusterStateUpdateTask() {\n+ firstMaster.clusterService.submitStateUpdateTask(\"local-disco-receive(from node[\" + localNode + \"])\", new ProcessedClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder();\n@@ -225,7 +227,7 @@ protected void doStop() throws ElasticsearchException {\n }\n \n final LocalDiscovery master = firstMaster;\n- master.clusterService.submitStateUpdateTask(\"local-disco-update\", new ClusterStateUpdateTask() {\n+ master.clusterService.submitStateUpdateTask(\"local-disco-update\", new ClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n DiscoveryNodes newNodes = currentState.nodes().removeDeadMembers(newMembers, master.localNode.id());\n@@ -305,13 +307,22 @@ private void publish(LocalDiscovery[] members, ClusterState clusterState, final\n nodeSpecificClusterState.status(ClusterState.ClusterStateStatus.RECEIVED);\n // ignore cluster state messages that do not include \"me\", not in the game yet...\n if (nodeSpecificClusterState.nodes().localNode() != null) {\n- discovery.clusterService.submitStateUpdateTask(\"local-disco-receive(from master)\", new ProcessedClusterStateUpdateTask() {\n+ assert nodeSpecificClusterState.nodes().masterNode() != null : \"received a cluster state without a master\";\n+ assert !nodeSpecificClusterState.blocks().hasGlobalBlock(discoveryService.getNoMasterBlock()) : \"received a cluster state with a master block\";\n+\n+ discovery.clusterService.submitStateUpdateTask(\"local-disco-receive(from master)\", new ProcessedClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n if (nodeSpecificClusterState.version() < currentState.version() && Objects.equal(nodeSpecificClusterState.nodes().masterNodeId(), currentState.nodes().masterNodeId())) {\n return currentState;\n }\n \n+ if 
(currentState.blocks().hasGlobalBlock(discoveryService.getNoMasterBlock())) {\n+ // its a fresh update from the master as we transition from a start of not having a master to having one\n+ logger.debug(\"got first state from fresh master [{}]\", nodeSpecificClusterState.nodes().masterNodeId());\n+ return nodeSpecificClusterState;\n+ }\n+\n ClusterState.Builder builder = ClusterState.builder(nodeSpecificClusterState);\n // if the routing table did not change, use the original one\n if (nodeSpecificClusterState.routingTable().version() == currentState.routingTable().version()) {", "filename": "src/main/java/org/elasticsearch/discovery/local/LocalDiscovery.java", "status": "modified" }, { "diff": "@@ -22,20 +22,18 @@\n import com.google.common.base.Objects;\n import com.google.common.collect.Lists;\n import com.google.common.collect.Sets;\n-import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.ElasticsearchIllegalStateException;\n-import org.elasticsearch.Version;\n+import org.elasticsearch.*;\n import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodeService;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n-import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.common.Priority;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.component.Lifecycle;\n import org.elasticsearch.common.inject.Inject;\n@@ -45,6 +43,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n+import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.discovery.Discovery;\n import org.elasticsearch.discovery.DiscoveryService;\n import org.elasticsearch.discovery.DiscoverySettings;\n@@ -56,19 +55,20 @@\n import org.elasticsearch.discovery.zen.ping.ZenPing;\n import org.elasticsearch.discovery.zen.ping.ZenPingService;\n import org.elasticsearch.discovery.zen.publish.PublishClusterStateAction;\n-import org.elasticsearch.gateway.GatewayService;\n import org.elasticsearch.node.service.NodeService;\n import org.elasticsearch.node.settings.NodeSettingsService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.*;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n import java.util.concurrent.BlockingQueue;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;\n@@ -78,6 +78,16 @@\n */\n public class ZenDiscovery extends AbstractLifecycleComponent<Discovery> implements Discovery, DiscoveryNodesProvider {\n \n+ public final static String SETTING_REJOIN_ON_MASTER_GONE = \"discovery.zen.rejoin_on_master_gone\";\n+ public final static String 
SETTING_PING_TIMEOUT = \"discovery.zen.ping.timeout\";\n+ public final static String SETTING_JOIN_TIMEOUT = \"discovery.zen.join_timeout\";\n+ public final static String SETTING_JOIN_RETRY_ATTEMPTS = \"discovery.zen.join_retry_attempts\";\n+ public final static String SETTING_JOIN_RETRY_DELAY = \"discovery.zen.join_retry_delay\";\n+ public final static String SETTING_MAX_PINGS_FROM_ANOTHER_MASTER = \"discovery.zen.max_pings_from_another_master\";\n+ public final static String SETTING_SEND_LEAVE_REQUEST = \"discovery.zen.send_leave_request\";\n+ public final static String SETTING_MASTER_ELECTION_FILTER_CLIENT = \"discovery.zen.master_election.filter_client\";\n+ public final static String SETTING_MASTER_ELECTION_FILTER_DATA = \"discovery.zen.master_election.filter_data\";\n+\n public static final String DISCOVERY_REJOIN_ACTION_NAME = \"internal:discovery/zen/rejoin\";\n \n private final ThreadPool threadPool;\n@@ -86,6 +96,7 @@ public class ZenDiscovery extends AbstractLifecycleComponent<Discovery> implemen\n private AllocationService allocationService;\n private final ClusterName clusterName;\n private final DiscoveryNodeService discoveryNodeService;\n+ private final DiscoverySettings discoverySettings;\n private final ZenPingService pingService;\n private final MasterFaultDetection masterFD;\n private final NodesFaultDetection nodesFD;\n@@ -97,6 +108,14 @@ public class ZenDiscovery extends AbstractLifecycleComponent<Discovery> implemen\n private final TimeValue pingTimeout;\n private final TimeValue joinTimeout;\n \n+ /** how many retry attempts to perform if join request failed with an retriable error */\n+ private final int joinRetryAttempts;\n+ /** how long to wait before performing another join attempt after a join request failed with an retriable error */\n+ private final TimeValue joinRetryDelay;\n+\n+ /** how many pings from *another* master to tolerate before forcing a rejoin on other or local master */\n+ private final int maxPingsFromAnotherMaster;\n+\n // a flag that should be used only for testing\n private final boolean sendLeaveRequest;\n \n@@ -118,41 +137,61 @@ public class ZenDiscovery extends AbstractLifecycleComponent<Discovery> implemen\n \n private final AtomicBoolean initialStateSent = new AtomicBoolean();\n \n+ private volatile boolean rejoinOnMasterGone;\n \n @Nullable\n private NodeService nodeService;\n \n+ private final BlockingQueue<Tuple<DiscoveryNode, MembershipAction.JoinCallback>> processJoinRequests = ConcurrentCollections.newBlockingQueue();\n+\n @Inject\n public ZenDiscovery(Settings settings, ClusterName clusterName, ThreadPool threadPool,\n TransportService transportService, ClusterService clusterService, NodeSettingsService nodeSettingsService,\n- DiscoveryNodeService discoveryNodeService, ZenPingService pingService, Version version, DiscoverySettings discoverySettings) {\n+ DiscoveryNodeService discoveryNodeService, ZenPingService pingService, ElectMasterService electMasterService, Version version,\n+ DiscoverySettings discoverySettings) {\n super(settings);\n this.clusterName = clusterName;\n this.threadPool = threadPool;\n this.clusterService = clusterService;\n this.transportService = transportService;\n this.discoveryNodeService = discoveryNodeService;\n+ this.discoverySettings = discoverySettings;\n this.pingService = pingService;\n this.version = version;\n-\n- // also support direct discovery.zen settings, for cases when it gets extended\n- this.pingTimeout = settings.getAsTime(\"discovery.zen.ping.timeout\", 
settings.getAsTime(\"discovery.zen.ping_timeout\", componentSettings.getAsTime(\"ping_timeout\", componentSettings.getAsTime(\"initial_ping_timeout\", timeValueSeconds(3)))));\n- this.joinTimeout = settings.getAsTime(\"discovery.zen.join_timeout\", TimeValue.timeValueMillis(pingTimeout.millis() * 20));\n- this.sendLeaveRequest = componentSettings.getAsBoolean(\"send_leave_request\", true);\n-\n- this.masterElectionFilterClientNodes = settings.getAsBoolean(\"discovery.zen.master_election.filter_client\", true);\n- this.masterElectionFilterDataNodes = settings.getAsBoolean(\"discovery.zen.master_election.filter_data\", false);\n+ this.electMaster = electMasterService;\n+\n+ // keep using componentSettings for BWC, in case this class gets extended.\n+ TimeValue pingTimeout = componentSettings.getAsTime(\"initial_ping_timeout\", timeValueSeconds(3));\n+ pingTimeout = componentSettings.getAsTime(\"ping_timeout\", pingTimeout);\n+ pingTimeout = settings.getAsTime(\"discovery.zen.ping_timeout\", pingTimeout);\n+ this.pingTimeout = settings.getAsTime(SETTING_PING_TIMEOUT, pingTimeout);\n+\n+ this.joinTimeout = settings.getAsTime(SETTING_JOIN_TIMEOUT, TimeValue.timeValueMillis(pingTimeout.millis() * 20));\n+ this.joinRetryAttempts = settings.getAsInt(SETTING_JOIN_RETRY_ATTEMPTS, 3);\n+ this.joinRetryDelay = settings.getAsTime(SETTING_JOIN_RETRY_DELAY, TimeValue.timeValueMillis(100));\n+ this.maxPingsFromAnotherMaster = settings.getAsInt(SETTING_MAX_PINGS_FROM_ANOTHER_MASTER, 3);\n+ this.sendLeaveRequest = settings.getAsBoolean(SETTING_SEND_LEAVE_REQUEST, true);\n+\n+ this.masterElectionFilterClientNodes = settings.getAsBoolean(SETTING_MASTER_ELECTION_FILTER_CLIENT, true);\n+ this.masterElectionFilterDataNodes = settings.getAsBoolean(SETTING_MASTER_ELECTION_FILTER_DATA, false);\n+ this.rejoinOnMasterGone = settings.getAsBoolean(SETTING_REJOIN_ON_MASTER_GONE, true);\n+\n+ if (this.joinRetryAttempts < 1) {\n+ throw new ElasticsearchIllegalArgumentException(\"'\" + SETTING_JOIN_RETRY_ATTEMPTS + \"' must be a positive number. got [\" + this.SETTING_JOIN_RETRY_ATTEMPTS + \"]\");\n+ }\n+ if (this.maxPingsFromAnotherMaster < 1) {\n+ throw new ElasticsearchIllegalArgumentException(\"'\" + SETTING_MAX_PINGS_FROM_ANOTHER_MASTER + \"' must be a positive number. 
got [\" + this.maxPingsFromAnotherMaster + \"]\");\n+ }\n \n logger.debug(\"using ping.timeout [{}], join.timeout [{}], master_election.filter_client [{}], master_election.filter_data [{}]\", pingTimeout, joinTimeout, masterElectionFilterClientNodes, masterElectionFilterDataNodes);\n \n- this.electMaster = new ElectMasterService(settings);\n nodeSettingsService.addListener(new ApplySettings());\n \n- this.masterFD = new MasterFaultDetection(settings, threadPool, transportService, this);\n+ this.masterFD = new MasterFaultDetection(settings, threadPool, transportService, this, clusterName);\n this.masterFD.addListener(new MasterNodeFailureListener());\n \n- this.nodesFD = new NodesFaultDetection(settings, threadPool, transportService);\n- this.nodesFD.addListener(new NodeFailureListener());\n+ this.nodesFD = new NodesFaultDetection(settings, threadPool, transportService, clusterName);\n+ this.nodesFD.addListener(new NodeFaultDetectionListener());\n \n this.publishClusterState = new PublishClusterStateAction(settings, transportService, this, new NewClusterStateListener(), discoverySettings);\n this.pingService.setNodesProvider(this);\n@@ -178,7 +217,7 @@ protected void doStart() throws ElasticsearchException {\n final String nodeId = DiscoveryService.generateNodeId(settings);\n localNode = new DiscoveryNode(settings.get(\"name\"), nodeId, transportService.boundAddress().publishAddress(), nodeAttributes, version);\n latestDiscoNodes = new DiscoveryNodes.Builder().put(localNode).localNodeId(localNode.id()).build();\n- nodesFD.updateNodes(latestDiscoNodes);\n+ nodesFD.updateNodes(latestDiscoNodes, ClusterState.UNKNOWN_VERSION);\n pingService.start();\n \n // do the join on a different thread, the DiscoveryService waits for 30s anyhow till it is discovered\n@@ -272,7 +311,7 @@ public void publish(ClusterState clusterState, AckListener ackListener) {\n throw new ElasticsearchIllegalStateException(\"Shouldn't publish state when not master\");\n }\n latestDiscoNodes = clusterState.nodes();\n- nodesFD.updateNodes(clusterState.nodes());\n+ nodesFD.updateNodes(clusterState.nodes(), clusterState.version());\n publishClusterState.publish(clusterState, ackListener);\n }\n \n@@ -295,6 +334,15 @@ public void run() {\n });\n }\n \n+\n+ /**\n+ * returns true if there is a currently a background thread active for (re)joining the cluster\n+ * used for testing.\n+ */\n+ public boolean joiningCluster() {\n+ return currentJoinThread != null;\n+ }\n+\n private void innerJoinCluster() {\n boolean retry = true;\n while (retry) {\n@@ -311,18 +359,24 @@ private void innerJoinCluster() {\n if (localNode.equals(masterNode)) {\n this.master = true;\n nodesFD.start(); // start the nodes FD\n- clusterService.submitStateUpdateTask(\"zen-disco-join (elected_as_master)\", Priority.URGENT, new ProcessedClusterStateUpdateTask() {\n+ clusterService.submitStateUpdateTask(\"zen-disco-join (elected_as_master)\", Priority.URGENT, new ProcessedClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n- DiscoveryNodes.Builder builder = new DiscoveryNodes.Builder()\n+ // Take into account the previous known nodes, if they happen not to be available\n+ // then fault detection will remove these nodes.\n+ DiscoveryNodes.Builder builder = new DiscoveryNodes.Builder(latestDiscoNodes)\n .localNodeId(localNode.id())\n .masterNodeId(localNode.id())\n // put our local node\n .put(localNode);\n // update the fact that we are the master...\n latestDiscoNodes = builder.build();\n- ClusterBlocks 
clusterBlocks = ClusterBlocks.builder().blocks(currentState.blocks()).removeGlobalBlock(NO_MASTER_BLOCK).build();\n- return ClusterState.builder(currentState).nodes(latestDiscoNodes).blocks(clusterBlocks).build();\n+ ClusterBlocks clusterBlocks = ClusterBlocks.builder().blocks(currentState.blocks()).removeGlobalBlock(discoverySettings.getNoMasterBlock()).build();\n+ currentState = ClusterState.builder(currentState).nodes(latestDiscoNodes).blocks(clusterBlocks).build();\n+\n+ // eagerly run reroute to remove dead nodes from routing table\n+ RoutingAllocation.Result result = allocationService.reroute(currentState);\n+ return ClusterState.builder(currentState).routingResult(result).build();\n }\n \n @Override\n@@ -337,37 +391,71 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n });\n } else {\n this.master = false;\n- try {\n- // first, make sure we can connect to the master\n- transportService.connectToNode(masterNode);\n- } catch (Exception e) {\n- logger.warn(\"failed to connect to master [{}], retrying...\", e, masterNode);\n- retry = true;\n+ // send join request\n+ retry = !joinElectedMaster(masterNode);\n+ if (retry) {\n continue;\n }\n- // send join request\n- try {\n- membership.sendJoinRequestBlocking(masterNode, localNode, joinTimeout);\n- } catch (Exception e) {\n- if (e instanceof ElasticsearchException) {\n- logger.info(\"failed to send join request to master [{}], reason [{}]\", masterNode, ((ElasticsearchException) e).getDetailedMessage());\n- } else {\n- logger.info(\"failed to send join request to master [{}], reason [{}]\", masterNode, e.getMessage());\n- }\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"detailed failed reason\", e);\n- }\n- // failed to send the join request, retry\n+\n+ if (latestDiscoNodes.masterNode() == null) {\n+ logger.debug(\"no master node is set, despite of join request completing. retrying pings\");\n retry = true;\n continue;\n }\n+\n masterFD.start(masterNode, \"initial_join\");\n // no need to submit the received cluster state, we will get it from the master when it publishes\n // the fact that we joined\n }\n }\n }\n \n+ /**\n+ * Join a newly elected master.\n+ *\n+ * @return true if successful\n+ */\n+ private boolean joinElectedMaster(DiscoveryNode masterNode) {\n+ try {\n+ // first, make sure we can connect to the master\n+ transportService.connectToNode(masterNode);\n+ } catch (Exception e) {\n+ logger.warn(\"failed to connect to master [{}], retrying...\", e, masterNode);\n+ return false;\n+ }\n+ int joinAttempt = 0; // we retry on illegal state if the master is not yet ready\n+ while (true) {\n+ try {\n+ logger.trace(\"joining master {}\", masterNode);\n+ membership.sendJoinRequestBlocking(masterNode, localNode, joinTimeout);\n+ return true;\n+ } catch (Throwable t) {\n+ Throwable unwrap = ExceptionsHelper.unwrapCause(t);\n+ if (unwrap instanceof ElasticsearchIllegalStateException) {\n+ if (++joinAttempt == this.joinRetryAttempts) {\n+ logger.info(\"failed to send join request to master [{}], reason [{}], tried [{}] times\", masterNode, ExceptionsHelper.detailedMessage(t), joinAttempt);\n+ return false;\n+ } else {\n+ logger.trace(\"master {} failed with [{}]. retrying... 
(attempts done: [{}])\", masterNode, ExceptionsHelper.detailedMessage(t), joinAttempt);\n+ }\n+ } else {\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"failed to send join request to master [{}]\", t, masterNode);\n+ } else {\n+ logger.info(\"failed to send join request to master [{}], reason [{}]\", masterNode, ExceptionsHelper.detailedMessage(t));\n+ }\n+ return false;\n+ }\n+ }\n+\n+ try {\n+ Thread.sleep(this.joinRetryDelay.millis());\n+ } catch (InterruptedException e) {\n+ Thread.currentThread().interrupt();\n+ }\n+ }\n+ }\n+\n private void handleLeaveRequest(final DiscoveryNode node) {\n if (lifecycleState() != Lifecycle.State.STARTED) {\n // not started, ignore a node failure\n@@ -389,6 +477,11 @@ public ClusterState execute(ClusterState currentState) {\n return ClusterState.builder(currentState).routingResult(routingResult).build();\n }\n \n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ // ignoring (already logged)\n+ }\n+\n @Override\n public void onFailure(String source, Throwable t) {\n logger.error(\"unexpected failure during [{}]\", t, source);\n@@ -424,6 +517,11 @@ public ClusterState execute(ClusterState currentState) {\n return ClusterState.builder(currentState).routingResult(routingResult).build();\n }\n \n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ // already logged\n+ }\n+\n @Override\n public void onFailure(String source, Throwable t) {\n logger.error(\"unexpected failure during [{}]\", t, source);\n@@ -457,6 +555,12 @@ public ClusterState execute(ClusterState currentState) {\n return currentState;\n }\n \n+\n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ // ignoring (already logged)\n+ }\n+\n @Override\n public void onFailure(String source, Throwable t) {\n logger.error(\"unexpected failure during [{}]\", t, source);\n@@ -481,7 +585,7 @@ private void handleMasterGone(final DiscoveryNode masterNode, final String reaso\n \n logger.info(\"master_left [{}], reason [{}]\", masterNode, reason);\n \n- clusterService.submitStateUpdateTask(\"zen-disco-master_failed (\" + masterNode + \")\", Priority.IMMEDIATE, new ProcessedClusterStateUpdateTask() {\n+ clusterService.submitStateUpdateTask(\"zen-disco-master_failed (\" + masterNode + \")\", Priority.IMMEDIATE, new ProcessedClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n if (!masterNode.id().equals(currentState.nodes().masterNodeId())) {\n@@ -493,6 +597,16 @@ public ClusterState execute(ClusterState currentState) {\n // make sure the old master node, which has failed, is not part of the nodes we publish\n .remove(masterNode.id())\n .masterNodeId(null).build();\n+ latestDiscoNodes = discoveryNodes;\n+\n+ // flush any pending cluster states from old master, so it will not be set as master again\n+ ArrayList<ProcessClusterState> pendingNewClusterStates = new ArrayList<>();\n+ processNewClusterStates.drainTo(pendingNewClusterStates);\n+ logger.trace(\"removed [{}] pending cluster states\", pendingNewClusterStates.size());\n+\n+ if (rejoinOnMasterGone) {\n+ return rejoin(ClusterState.builder(currentState).nodes(discoveryNodes).build(), \"master left (reason = \" + reason + \")\");\n+ }\n \n if (!electMaster.hasEnoughMasterNodes(discoveryNodes)) {\n return rejoin(ClusterState.builder(currentState).nodes(discoveryNodes).build(), \"not enough master nodes after master left (reason = \" + reason + \")\");\n@@ -561,29 +675,7 @@ void handleNewClusterStateFromMaster(ClusterState newClusterState, final Publish\n 
clusterService.submitStateUpdateTask(\"zen-disco-master_receive_cluster_state_from_another_master [\" + newState.nodes().masterNode() + \"]\", Priority.URGENT, new ProcessedClusterStateUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n- if (newState.version() > currentState.version()) {\n- logger.warn(\"received cluster state from [{}] which is also master but with a newer cluster_state, rejoining to cluster...\", newState.nodes().masterNode());\n- return rejoin(currentState, \"zen-disco-master_receive_cluster_state_from_another_master [\" + newState.nodes().masterNode() + \"]\");\n- } else {\n- logger.warn(\"received cluster state from [{}] which is also master but with an older cluster_state, telling [{}] to rejoin the cluster\", newState.nodes().masterNode(), newState.nodes().masterNode());\n-\n- try {\n- // make sure we're connected to this node (connect to node does nothing if we're already connected)\n- // since the network connections are asymmetric, it may be that we received a state but have disconnected from the node\n- // in the past (after a master failure, for example)\n- transportService.connectToNode(newState.nodes().masterNode());\n- transportService.sendRequest(newState.nodes().masterNode(), DISCOVERY_REJOIN_ACTION_NAME, new RejoinClusterRequest(currentState.nodes().localNodeId()), new EmptyTransportResponseHandler(ThreadPool.Names.SAME) {\n- @Override\n- public void handleException(TransportException exp) {\n- logger.warn(\"failed to send rejoin request to [{}]\", exp, newState.nodes().masterNode());\n- }\n- });\n- } catch (Exception e) {\n- logger.warn(\"failed to send rejoin request to [{}]\", e, newState.nodes().masterNode());\n- }\n-\n- return currentState;\n- }\n+ return handleAnotherMaster(currentState, newState.nodes().masterNode(), newState.version(), \"via a new cluster state\");\n }\n \n @Override\n@@ -610,7 +702,11 @@ public void onFailure(String source, Throwable t) {\n final ProcessClusterState processClusterState = new ProcessClusterState(newClusterState, newStateProcessed);\n processNewClusterStates.add(processClusterState);\n \n- clusterService.submitStateUpdateTask(\"zen-disco-receive(from master [\" + newClusterState.nodes().masterNode() + \"])\", Priority.URGENT, new ProcessedClusterStateUpdateTask() {\n+\n+ assert newClusterState.nodes().masterNode() != null : \"received a cluster state without a master\";\n+ assert !newClusterState.blocks().hasGlobalBlock(discoverySettings.getNoMasterBlock()) : \"received a cluster state with a master block\";\n+\n+ clusterService.submitStateUpdateTask(\"zen-disco-receive(from master [\" + newClusterState.nodes().masterNode() + \"])\", Priority.URGENT, new ProcessedClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n // we already processed it in a previous event\n@@ -642,6 +738,11 @@ public ClusterState execute(ClusterState currentState) {\n \n // we are going to use it for sure, poll (remove) it\n potentialState = processNewClusterStates.poll();\n+ if (potentialState == null) {\n+ // might happen if the queue is drained\n+ break;\n+ }\n+\n potentialState.processed = true;\n \n if (potentialState.clusterState.version() > stateToProcess.clusterState.version()) {\n@@ -670,7 +771,16 @@ public ClusterState execute(ClusterState currentState) {\n masterFD.restart(latestDiscoNodes.masterNode(), \"new cluster state received and we are monitoring the wrong master [\" + masterFD.masterNode() + \"]\");\n }\n \n+ if 
(currentState.blocks().hasGlobalBlock(discoverySettings.getNoMasterBlock())) {\n+ // its a fresh update from the master as we transition from a start of not having a master to having one\n+ logger.debug(\"got first state from fresh master [{}]\", updatedState.nodes().masterNodeId());\n+ return updatedState;\n+ }\n+\n+\n+ // some optimizations to make sure we keep old objects where possible\n ClusterState.Builder builder = ClusterState.builder(updatedState);\n+\n // if the routing table did not change, use the original one\n if (updatedState.routingTable().version() == currentState.routingTable().version()) {\n builder.routingTable(currentState.routingTable());\n@@ -726,37 +836,75 @@ private void handleJoinRequest(final DiscoveryNode node, final MembershipAction.\n // validate the join request, will throw a failure if it fails, which will get back to the\n // node calling the join request\n membership.sendValidateJoinRequestBlocking(node, joinTimeout);\n-\n+ processJoinRequests.add(new Tuple<>(node, callback));\n clusterService.submitStateUpdateTask(\"zen-disco-receive(join from node[\" + node + \"])\", Priority.IMMEDIATE, new ProcessedClusterStateUpdateTask() {\n+\n+ private final List<Tuple<DiscoveryNode, MembershipAction.JoinCallback>> drainedTasks = new ArrayList<>();\n+\n @Override\n public ClusterState execute(ClusterState currentState) {\n- if (currentState.nodes().nodeExists(node.id())) {\n- // the node already exists in the cluster\n- logger.info(\"received a join request for an existing node [{}]\", node);\n- // still send a new cluster state, so it will be re published and possibly update the other node\n- return ClusterState.builder(currentState).build();\n+ processJoinRequests.drainTo(drainedTasks);\n+ if (drainedTasks.isEmpty()) {\n+ return currentState;\n }\n- DiscoveryNodes.Builder builder = DiscoveryNodes.builder(currentState.nodes());\n- for (DiscoveryNode existingNode : currentState.nodes()) {\n- if (node.address().equals(existingNode.address())) {\n- builder.remove(existingNode.id());\n- logger.warn(\"received join request from node [{}], but found existing node {} with same address, removing existing node\", node, existingNode);\n+\n+ boolean modified = false;\n+ DiscoveryNodes.Builder nodesBuilder = DiscoveryNodes.builder(currentState.nodes());\n+ for (Tuple<DiscoveryNode, MembershipAction.JoinCallback> task : drainedTasks) {\n+ DiscoveryNode node = task.v1();\n+ if (currentState.nodes().nodeExists(node.id())) {\n+ logger.debug(\"received a join request for an existing node [{}]\", node);\n+ } else {\n+ modified = true;\n+ nodesBuilder.put(node);\n+ for (DiscoveryNode existingNode : currentState.nodes()) {\n+ if (node.address().equals(existingNode.address())) {\n+ nodesBuilder.remove(existingNode.id());\n+ logger.warn(\"received join request from node [{}], but found existing node {} with same address, removing existing node\", node, existingNode);\n+ }\n+ }\n+ }\n+ }\n+\n+ ClusterState.Builder stateBuilder = ClusterState.builder(currentState);\n+ if (modified) {\n+ latestDiscoNodes = nodesBuilder.build();\n+ stateBuilder.nodes(latestDiscoNodes);\n+ }\n+ return stateBuilder.build();\n+ }\n+\n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ Exception e = new EsRejectedExecutionException(\"no longer master. 
source: [\" + source + \"]\");\n+ innerOnFailure(e);\n+ }\n+\n+ void innerOnFailure(Throwable t) {\n+ for (Tuple<DiscoveryNode, MembershipAction.JoinCallback> drainedTask : drainedTasks) {\n+ try {\n+ drainedTask.v2().onFailure(t);\n+ } catch (Exception e) {\n+ logger.error(\"error during task failure\", e);\n }\n }\n- latestDiscoNodes = builder.build();\n- // add the new node now (will update latestDiscoNodes on publish)\n- return ClusterState.builder(currentState).nodes(latestDiscoNodes.newNode(node)).build();\n }\n \n @Override\n public void onFailure(String source, Throwable t) {\n logger.error(\"unexpected failure during [{}]\", t, source);\n- callback.onFailure(t);\n+ innerOnFailure(t);\n }\n \n @Override\n public void clusterStateProcessed(String source, ClusterState oldState, ClusterState newState) {\n- callback.onSuccess();\n+ for (Tuple<DiscoveryNode, MembershipAction.JoinCallback> drainedTask : drainedTasks) {\n+ try {\n+ drainedTask.v2().onSuccess();\n+ } catch (Exception e) {\n+ logger.error(\"unexpected error during [{}]\", e, source);\n+ }\n+ }\n }\n });\n }\n@@ -807,35 +955,36 @@ private DiscoveryNode findMaster() {\n List<DiscoveryNode> pingMasters = newArrayList();\n for (ZenPing.PingResponse pingResponse : pingResponses) {\n if (pingResponse.master() != null) {\n- pingMasters.add(pingResponse.master());\n+ // We can't include the local node in pingMasters list, otherwise we may up electing ourselves without\n+ // any check / verifications from other nodes in ZenDiscover#innerJoinCluster()\n+ if (!localNode.equals(pingResponse.master())) {\n+ pingMasters.add(pingResponse.master());\n+ }\n }\n }\n \n Set<DiscoveryNode> possibleMasterNodes = Sets.newHashSet();\n- possibleMasterNodes.add(localNode);\n+ if (localNode.masterNode()) {\n+ possibleMasterNodes.add(localNode);\n+ }\n for (ZenPing.PingResponse pingResponse : pingResponses) {\n possibleMasterNodes.add(pingResponse.target());\n }\n- // if we don't have enough master nodes, we bail, even if we get a response that indicates\n- // there is a master by other node, we don't see enough...\n- if (!electMaster.hasEnoughMasterNodes(possibleMasterNodes)) {\n- logger.trace(\"not enough master nodes [{}]\", possibleMasterNodes);\n- return null;\n- }\n \n if (pingMasters.isEmpty()) {\n- // lets tie break between discovered nodes\n- DiscoveryNode electedMaster = electMaster.electMaster(possibleMasterNodes);\n- if (localNode.equals(electedMaster)) {\n- return localNode;\n+ // if we don't have enough master nodes, we bail, because there are not enough master to elect from\n+ if (electMaster.hasEnoughMasterNodes(possibleMasterNodes)) {\n+ return electMaster.electMaster(possibleMasterNodes);\n+ } else {\n+ logger.trace(\"not enough master nodes [{}]\", possibleMasterNodes);\n+ return null;\n }\n } else {\n- DiscoveryNode electedMaster = electMaster.electMaster(pingMasters);\n- if (electedMaster != null) {\n- return electedMaster;\n- }\n+\n+ assert !pingMasters.contains(localNode) : \"local node should never be elected as master when other nodes indicate an active master\";\n+ // lets tie break between discovered nodes\n+ return electMaster.electMaster(pingMasters);\n }\n- return null;\n }\n \n private ClusterState rejoin(ClusterState clusterState, String reason) {\n@@ -845,28 +994,45 @@ private ClusterState rejoin(ClusterState clusterState, String reason) {\n master = false;\n \n ClusterBlocks clusterBlocks = ClusterBlocks.builder().blocks(clusterState.blocks())\n- .addGlobalBlock(NO_MASTER_BLOCK)\n- 
.addGlobalBlock(GatewayService.STATE_NOT_RECOVERED_BLOCK)\n+ .addGlobalBlock(discoverySettings.getNoMasterBlock())\n .build();\n \n- // clear the routing table, we have no master, so we need to recreate the routing when we reform the cluster\n- RoutingTable routingTable = RoutingTable.builder().build();\n- // we also clean the metadata, since we are going to recover it if we become master\n- MetaData metaData = MetaData.builder().build();\n-\n // clean the nodes, we are now not connected to anybody, since we try and reform the cluster\n- latestDiscoNodes = new DiscoveryNodes.Builder().put(localNode).localNodeId(localNode.id()).build();\n+ latestDiscoNodes = new DiscoveryNodes.Builder(latestDiscoNodes).masterNodeId(null).build();\n \n asyncJoinCluster();\n \n return ClusterState.builder(clusterState)\n .blocks(clusterBlocks)\n .nodes(latestDiscoNodes)\n- .routingTable(routingTable)\n- .metaData(metaData)\n .build();\n }\n \n+ private ClusterState handleAnotherMaster(ClusterState localClusterState, final DiscoveryNode otherMaster, long otherClusterStateVersion, String reason) {\n+ assert master : \"handleAnotherMaster called but current node is not a master\";\n+ if (otherClusterStateVersion > localClusterState.version()) {\n+ return rejoin(localClusterState, \"zen-disco-discovered another master with a new cluster_state [\" + otherMaster + \"][\" + reason + \"]\");\n+ } else {\n+ logger.warn(\"discovered [{}] which is also master but with an older cluster_state, telling [{}] to rejoin the cluster ([{}])\", otherMaster, otherMaster, reason);\n+ try {\n+ // make sure we're connected to this node (connect to node does nothing if we're already connected)\n+ // since the network connections are asymmetric, it may be that we received a state but have disconnected from the node\n+ // in the past (after a master failure, for example)\n+ transportService.connectToNode(otherMaster);\n+ transportService.sendRequest(otherMaster, DISCOVERY_REJOIN_ACTION_NAME, new RejoinClusterRequest(localClusterState.nodes().localNodeId()), new EmptyTransportResponseHandler(ThreadPool.Names.SAME) {\n+\n+ @Override\n+ public void handleException(TransportException exp) {\n+ logger.warn(\"failed to send rejoin request to [{}]\", exp, otherMaster);\n+ }\n+ });\n+ } catch (Exception e) {\n+ logger.warn(\"failed to send rejoin request to [{}]\", e, otherMaster);\n+ }\n+ return localClusterState;\n+ }\n+ }\n+\n private void sendInitialStateEventIfNeeded() {\n if (initialStateSent.compareAndSet(false, true)) {\n for (InitialStateDiscoveryListener listener : initialStateListeners) {\n@@ -895,12 +1061,48 @@ public void onLeave(DiscoveryNode node) {\n }\n }\n \n- private class NodeFailureListener implements NodesFaultDetection.Listener {\n+ private class NodeFaultDetectionListener extends NodesFaultDetection.Listener {\n+\n+ private final AtomicInteger pingsWhileMaster = new AtomicInteger(0);\n \n @Override\n public void onNodeFailure(DiscoveryNode node, String reason) {\n handleNodeFailure(node, reason);\n }\n+\n+ @Override\n+ public void onPingReceived(final NodesFaultDetection.PingRequest pingRequest) {\n+ // if we are master, we don't expect any fault detection from another node. 
If we get it\n+ // means we potentially have two masters in the cluster.\n+ if (!master) {\n+ pingsWhileMaster.set(0);\n+ return;\n+ }\n+\n+ // nodes pre 1.4.0 do not send this information\n+ if (pingRequest.masterNode() == null) {\n+ return;\n+ }\n+\n+ if (pingsWhileMaster.incrementAndGet() < maxPingsFromAnotherMaster) {\n+ logger.trace(\"got a ping from another master {}. current ping count: [{}]\", pingRequest.masterNode(), pingsWhileMaster.get());\n+ return;\n+ }\n+ logger.debug(\"got a ping from another master {}. resolving who should rejoin. current ping count: [{}]\", pingRequest.masterNode(), pingsWhileMaster.get());\n+ clusterService.submitStateUpdateTask(\"ping from another master\", Priority.URGENT, new ClusterStateUpdateTask() {\n+\n+ @Override\n+ public ClusterState execute(ClusterState currentState) throws Exception {\n+ pingsWhileMaster.set(0);\n+ return handleAnotherMaster(currentState, pingRequest.masterNode(), pingRequest.clusterStateVersion(), \"node fd ping\");\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Throwable t) {\n+ logger.debug(\"unexpected error during cluster state update task after pings from another master\", t);\n+ }\n+ });\n+ }\n }\n \n private class MasterNodeFailureListener implements MasterFaultDetection.Listener {\n@@ -922,6 +1124,10 @@ public void onDisconnectedFromMaster() {\n }\n }\n \n+ boolean isRejoinOnMasterGone() {\n+ return rejoinOnMasterGone;\n+ }\n+\n static class RejoinClusterRequest extends TransportRequest {\n \n private String fromNodeId;\n@@ -955,7 +1161,7 @@ public RejoinClusterRequest newInstance() {\n \n @Override\n public void messageReceived(final RejoinClusterRequest request, final TransportChannel channel) throws Exception {\n- clusterService.submitStateUpdateTask(\"received a request to rejoin the cluster from [\" + request.fromNodeId + \"]\", Priority.URGENT, new ClusterStateUpdateTask() {\n+ clusterService.submitStateUpdateTask(\"received a request to rejoin the cluster from [\" + request.fromNodeId + \"]\", Priority.URGENT, new ClusterStateNonMasterUpdateTask() {\n @Override\n public ClusterState execute(ClusterState currentState) {\n try {\n@@ -966,6 +1172,11 @@ public ClusterState execute(ClusterState currentState) {\n return rejoin(currentState, \"received a request to rejoin the cluster from [\" + request.fromNodeId + \"]\");\n }\n \n+ @Override\n+ public void onNoLongerMaster(String source) {\n+ // already logged\n+ }\n+\n @Override\n public void onFailure(String source, Throwable t) {\n logger.error(\"unexpected failure during [{}]\", t, source);\n@@ -989,6 +1200,12 @@ public void onRefreshSettings(Settings settings) {\n ZenDiscovery.this.electMaster.minimumMasterNodes(), minimumMasterNodes);\n handleMinimumMasterNodesChanged(minimumMasterNodes);\n }\n+\n+ boolean rejoinOnMasterGone = settings.getAsBoolean(SETTING_REJOIN_ON_MASTER_GONE, ZenDiscovery.this.rejoinOnMasterGone);\n+ if (rejoinOnMasterGone != ZenDiscovery.this.rejoinOnMasterGone) {\n+ logger.info(\"updating {} from [{}] to [{}]\", SETTING_REJOIN_ON_MASTER_GONE, ZenDiscovery.this.rejoinOnMasterGone, rejoinOnMasterGone);\n+ ZenDiscovery.this.rejoinOnMasterGone = rejoinOnMasterGone;\n+ }\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.common.inject.AbstractModule;\n import org.elasticsearch.common.inject.multibindings.Multibinder;\n import org.elasticsearch.discovery.Discovery;\n+import 
org.elasticsearch.discovery.zen.elect.ElectMasterService;\n import org.elasticsearch.discovery.zen.ping.ZenPingService;\n import org.elasticsearch.discovery.zen.ping.unicast.UnicastHostsProvider;\n \n@@ -44,6 +45,7 @@ public ZenDiscoveryModule addUnicastHostProvider(Class<? extends UnicastHostsPro\n \n @Override\n protected void configure() {\n+ bind(ElectMasterService.class).asEagerSingleton();\n bind(ZenPingService.class).asEagerSingleton();\n Multibinder<UnicastHostsProvider> unicastHostsProviderMultibinder = Multibinder.newSetBinder(binder(), UnicastHostsProvider.class);\n for (Class<? extends UnicastHostsProvider> unicastHostProvider : unicastHostProviders) {", "filename": "src/main/java/org/elasticsearch/discovery/zen/ZenDiscoveryModule.java", "status": "modified" }, { "diff": "@@ -24,12 +24,10 @@\n import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n \n-import java.util.Arrays;\n-import java.util.Comparator;\n-import java.util.Iterator;\n-import java.util.List;\n+import java.util.*;\n \n /**\n *\n@@ -42,6 +40,7 @@ public class ElectMasterService extends AbstractComponent {\n \n private volatile int minimumMasterNodes;\n \n+ @Inject\n public ElectMasterService(Settings settings) {\n super(settings);\n this.minimumMasterNodes = settings.getAsInt(DISCOVERY_ZEN_MINIMUM_MASTER_NODES, -1);\n@@ -69,6 +68,18 @@ public boolean hasEnoughMasterNodes(Iterable<DiscoveryNode> nodes) {\n return count >= minimumMasterNodes;\n }\n \n+ /**\n+ * Returns the given nodes sorted by likelyhood of being elected as master, most likely first.\n+ * Non-master nodes are not removed but are rather put in the end\n+ * @param nodes\n+ * @return\n+ */\n+ public List<DiscoveryNode> sortByMasterLikelihood(Iterable<DiscoveryNode> nodes) {\n+ ArrayList<DiscoveryNode> sortedNodes = Lists.newArrayList(nodes);\n+ CollectionUtil.introSort(sortedNodes, nodeComparator);\n+ return sortedNodes;\n+ }\n+\n /**\n * Returns a list of the next possible masters.\n */\n@@ -120,6 +131,12 @@ private static class NodeComparator implements Comparator<DiscoveryNode> {\n \n @Override\n public int compare(DiscoveryNode o1, DiscoveryNode o2) {\n+ if (o1.masterNode() && !o2.masterNode()) {\n+ return -1;\n+ }\n+ if (!o1.masterNode() && o2.masterNode()) {\n+ return 1;\n+ }\n return o1.id().compareTo(o2.id());\n }\n }", "filename": "src/main/java/org/elasticsearch/discovery/zen/elect/ElectMasterService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,95 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.discovery.zen.fd;\n+\n+import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportConnectionListener;\n+import org.elasticsearch.transport.TransportService;\n+\n+import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;\n+\n+/**\n+ * A base class for {@link org.elasticsearch.discovery.zen.fd.MasterFaultDetection} & {@link org.elasticsearch.discovery.zen.fd.NodesFaultDetection},\n+ * making sure both use the same setting.\n+ */\n+public abstract class FaultDetection extends AbstractComponent {\n+\n+ public static final String SETTING_CONNECT_ON_NETWORK_DISCONNECT = \"discovery.zen.fd.connect_on_network_disconnect\";\n+ public static final String SETTING_PING_INTERVAL = \"discovery.zen.fd.ping_interval\";\n+ public static final String SETTING_PING_TIMEOUT = \"discovery.zen.fd.ping_timeout\";\n+ public static final String SETTING_PING_RETRIES = \"discovery.zen.fd.ping_retries\";\n+ public static final String SETTING_REGISTER_CONNECTION_LISTENER = \"discovery.zen.fd.register_connection_listener\";\n+\n+ protected final ThreadPool threadPool;\n+ protected final ClusterName clusterName;\n+ protected final TransportService transportService;\n+\n+ // used mainly for testing, should always be true\n+ protected final boolean registerConnectionListener;\n+ protected final FDConnectionListener connectionListener;\n+ protected final boolean connectOnNetworkDisconnect;\n+\n+ protected final TimeValue pingInterval;\n+ protected final TimeValue pingRetryTimeout;\n+ protected final int pingRetryCount;\n+\n+ public FaultDetection(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterName clusterName) {\n+ super(settings);\n+ this.threadPool = threadPool;\n+ this.transportService = transportService;\n+ this.clusterName = clusterName;\n+\n+ this.connectOnNetworkDisconnect = settings.getAsBoolean(SETTING_CONNECT_ON_NETWORK_DISCONNECT, false);\n+ this.pingInterval = settings.getAsTime(SETTING_PING_INTERVAL, timeValueSeconds(1));\n+ this.pingRetryTimeout = settings.getAsTime(SETTING_PING_TIMEOUT, timeValueSeconds(30));\n+ this.pingRetryCount = settings.getAsInt(SETTING_PING_RETRIES, 3);\n+ this.registerConnectionListener = settings.getAsBoolean(SETTING_REGISTER_CONNECTION_LISTENER, true);\n+\n+ this.connectionListener = new FDConnectionListener();\n+ if (registerConnectionListener) {\n+ transportService.addConnectionListener(connectionListener);\n+ }\n+ }\n+\n+ public void close() {\n+ transportService.removeConnectionListener(connectionListener);\n+ }\n+\n+ /**\n+ * This method will be called when the {@link org.elasticsearch.transport.TransportService} raised a node disconnected event\n+ */\n+ abstract void handleTransportDisconnect(DiscoveryNode node);\n+\n+ private class FDConnectionListener implements TransportConnectionListener {\n+ @Override\n+ public void onNodeConnected(DiscoveryNode node) {\n+ }\n+\n+ @Override\n+ public void onNodeDisconnected(DiscoveryNode node) {\n+ handleTransportDisconnect(node);\n+ }\n+ }\n+\n+}", "filename": "src/main/java/org/elasticsearch/discovery/zen/fd/FaultDetection.java", "status": "added" 
}, { "diff": "@@ -20,9 +20,10 @@\n package org.elasticsearch.discovery.zen.fd;\n \n import org.elasticsearch.ElasticsearchIllegalStateException;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n-import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n@@ -35,13 +36,12 @@\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n-import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;\n import static org.elasticsearch.transport.TransportRequestOptions.options;\n \n /**\n * A fault detection that pings the master periodically to see if its alive.\n */\n-public class MasterFaultDetection extends AbstractComponent {\n+public class MasterFaultDetection extends FaultDetection {\n \n public static final String MASTER_PING_ACTION_NAME = \"internal:discovery/zen/fd/master_ping\";\n \n@@ -52,29 +52,10 @@ public static interface Listener {\n void onDisconnectedFromMaster();\n }\n \n- private final ThreadPool threadPool;\n-\n- private final TransportService transportService;\n-\n private final DiscoveryNodesProvider nodesProvider;\n \n private final CopyOnWriteArrayList<Listener> listeners = new CopyOnWriteArrayList<>();\n \n-\n- private final boolean connectOnNetworkDisconnect;\n-\n- private final TimeValue pingInterval;\n-\n- private final TimeValue pingRetryTimeout;\n-\n- private final int pingRetryCount;\n-\n- // used mainly for testing, should always be true\n- private final boolean registerConnectionListener;\n-\n-\n- private final FDConnectionListener connectionListener;\n-\n private volatile MasterPinger masterPinger;\n \n private final Object masterNodeMutex = new Object();\n@@ -85,25 +66,13 @@ public static interface Listener {\n \n private final AtomicBoolean notifiedMasterFailure = new AtomicBoolean();\n \n- public MasterFaultDetection(Settings settings, ThreadPool threadPool, TransportService transportService, DiscoveryNodesProvider nodesProvider) {\n- super(settings);\n- this.threadPool = threadPool;\n- this.transportService = transportService;\n+ public MasterFaultDetection(Settings settings, ThreadPool threadPool, TransportService transportService,\n+ DiscoveryNodesProvider nodesProvider, ClusterName clusterName) {\n+ super(settings, threadPool, transportService, clusterName);\n this.nodesProvider = nodesProvider;\n \n- this.connectOnNetworkDisconnect = componentSettings.getAsBoolean(\"connect_on_network_disconnect\", true);\n- this.pingInterval = componentSettings.getAsTime(\"ping_interval\", timeValueSeconds(1));\n- this.pingRetryTimeout = componentSettings.getAsTime(\"ping_timeout\", timeValueSeconds(30));\n- this.pingRetryCount = componentSettings.getAsInt(\"ping_retries\", 3);\n- this.registerConnectionListener = componentSettings.getAsBoolean(\"register_connection_listener\", true);\n-\n logger.debug(\"[master] uses ping_interval [{}], ping_timeout [{}], ping_retries [{}]\", pingInterval, pingRetryTimeout, pingRetryCount);\n \n- this.connectionListener = new FDConnectionListener();\n- if (registerConnectionListener) {\n- transportService.addConnectionListener(connectionListener);\n- }\n-\n transportService.registerHandler(MASTER_PING_ACTION_NAME, new MasterPingRequestHandler());\n }\n \n@@ -155,7 
+124,8 @@ private void innerStart(final DiscoveryNode masterNode) {\n masterPinger.stop();\n }\n this.masterPinger = new MasterPinger();\n- // start the ping process\n+\n+ // we start pinging slightly later to allow the chosen master to complete it's own master election\n threadPool.schedule(pingInterval, ThreadPool.Names.SAME, masterPinger);\n }\n \n@@ -181,13 +151,14 @@ private void innerStop() {\n }\n \n public void close() {\n+ super.close();\n stop(\"closing\");\n this.listeners.clear();\n- transportService.removeConnectionListener(connectionListener);\n transportService.removeHandler(MASTER_PING_ACTION_NAME);\n }\n \n- private void handleTransportDisconnect(DiscoveryNode node) {\n+ @Override\n+ protected void handleTransportDisconnect(DiscoveryNode node) {\n synchronized (masterNodeMutex) {\n if (!node.equals(this.masterNode)) {\n return;\n@@ -200,7 +171,8 @@ private void handleTransportDisconnect(DiscoveryNode node) {\n masterPinger.stop();\n }\n this.masterPinger = new MasterPinger();\n- threadPool.schedule(pingInterval, ThreadPool.Names.SAME, masterPinger);\n+ // we use schedule with a 0 time value to run the pinger on the pool as it will run on later\n+ threadPool.schedule(TimeValue.timeValueMillis(0), ThreadPool.Names.SAME, masterPinger);\n } catch (Exception e) {\n logger.trace(\"[master] [{}] transport disconnected (with verified connect)\", masterNode);\n notifyMasterFailure(masterNode, \"transport disconnected (with verified connect)\");\n@@ -237,17 +209,6 @@ public void run() {\n }\n }\n \n- private class FDConnectionListener implements TransportConnectionListener {\n- @Override\n- public void onNodeConnected(DiscoveryNode node) {\n- }\n-\n- @Override\n- public void onNodeDisconnected(DiscoveryNode node) {\n- handleTransportDisconnect(node);\n- }\n- }\n-\n private class MasterPinger implements Runnable {\n \n private volatile boolean running = true;\n@@ -268,8 +229,10 @@ public void run() {\n threadPool.schedule(pingInterval, ThreadPool.Names.SAME, MasterPinger.this);\n return;\n }\n- transportService.sendRequest(masterToPing, MASTER_PING_ACTION_NAME, new MasterPingRequest(nodesProvider.nodes().localNode().id(), masterToPing.id()), options().withType(TransportRequestOptions.Type.PING).withTimeout(pingRetryTimeout),\n- new BaseTransportResponseHandler<MasterPingResponseResponse>() {\n+ final MasterPingRequest request = new MasterPingRequest(nodesProvider.nodes().localNode().id(), masterToPing.id(), clusterName);\n+ final TransportRequestOptions options = options().withType(TransportRequestOptions.Type.PING).withTimeout(pingRetryTimeout);\n+ transportService.sendRequest(masterToPing, MASTER_PING_ACTION_NAME, request, options, new BaseTransportResponseHandler<MasterPingResponseResponse>() {\n+\n @Override\n public MasterPingResponseResponse newInstance() {\n return new MasterPingResponseResponse();\n@@ -326,7 +289,7 @@ public void handleException(TransportException exp) {\n notifyMasterFailure(masterToPing, \"failed to ping, tried [\" + pingRetryCount + \"] times, each with maximum [\" + pingRetryTimeout + \"] timeout\");\n } else {\n // resend the request, not reschedule, rely on send timeout\n- transportService.sendRequest(masterToPing, MASTER_PING_ACTION_NAME, new MasterPingRequest(nodesProvider.nodes().localNode().id(), masterToPing.id()), options().withType(TransportRequestOptions.Type.PING).withTimeout(pingRetryTimeout), this);\n+ transportService.sendRequest(masterToPing, MASTER_PING_ACTION_NAME, request, options, this);\n }\n }\n }\n@@ -349,6 +312,14 @@ public 
Throwable fillInStackTrace() {\n }\n \n static class NotMasterException extends ElasticsearchIllegalStateException {\n+\n+ NotMasterException(String msg) {\n+ super(msg);\n+ }\n+\n+ NotMasterException() {\n+ }\n+\n @Override\n public Throwable fillInStackTrace() {\n return null;\n@@ -377,6 +348,13 @@ public void messageReceived(MasterPingRequest request, TransportChannel channel)\n if (!request.masterNodeId.equals(nodes.localNodeId())) {\n throw new NotMasterException();\n }\n+\n+ // ping from nodes of version < 1.4.0 will have the clustername set to null\n+ if (request.clusterName != null && !request.clusterName.equals(clusterName)) {\n+ logger.trace(\"master fault detection ping request is targeted for a different [{}] cluster then us [{}]\", request.clusterName, clusterName);\n+ throw new NotMasterException(\"master fault detection ping request is targeted for a different [\" + request.clusterName + \"] cluster then us [\" + clusterName + \"]\");\n+ }\n+\n // if we are no longer master, fail...\n if (!nodes.localNodeMaster()) {\n throw new NoLongerMasterException();\n@@ -400,27 +378,35 @@ private static class MasterPingRequest extends TransportRequest {\n private String nodeId;\n \n private String masterNodeId;\n+ private ClusterName clusterName;\n \n private MasterPingRequest() {\n }\n \n- private MasterPingRequest(String nodeId, String masterNodeId) {\n+ private MasterPingRequest(String nodeId, String masterNodeId, ClusterName clusterName) {\n this.nodeId = nodeId;\n this.masterNodeId = masterNodeId;\n+ this.clusterName = clusterName;\n }\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n nodeId = in.readString();\n masterNodeId = in.readString();\n+ if (in.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ clusterName = ClusterName.readClusterName(in);\n+ }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeString(nodeId);\n out.writeString(masterNodeId);\n+ if (out.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ clusterName.writeTo(out);\n+ }\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/discovery/zen/fd/MasterFaultDetection.java", "status": "modified" }, { "diff": "@@ -20,9 +20,11 @@\n package org.elasticsearch.discovery.zen.fd;\n \n import org.elasticsearch.ElasticsearchIllegalStateException;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterName;\n+import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n-import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n@@ -35,68 +37,40 @@\n import java.util.concurrent.CopyOnWriteArrayList;\n \n import static org.elasticsearch.cluster.node.DiscoveryNodes.EMPTY_NODES;\n-import static org.elasticsearch.common.unit.TimeValue.timeValueSeconds;\n import static org.elasticsearch.common.util.concurrent.ConcurrentCollections.newConcurrentMap;\n import static org.elasticsearch.transport.TransportRequestOptions.options;\n \n /**\n * A fault detection of multiple nodes.\n */\n-public class NodesFaultDetection extends AbstractComponent {\n+public class NodesFaultDetection extends FaultDetection {\n \n public static final String PING_ACTION_NAME = \"internal:discovery/zen/fd/ping\";\n+ \n+ public abstract static class Listener {\n \n- public 
static interface Listener {\n+ public void onNodeFailure(DiscoveryNode node, String reason) {}\n \n- void onNodeFailure(DiscoveryNode node, String reason);\n- }\n-\n- private final ThreadPool threadPool;\n-\n- private final TransportService transportService;\n-\n-\n- private final boolean connectOnNetworkDisconnect;\n-\n- private final TimeValue pingInterval;\n-\n- private final TimeValue pingRetryTimeout;\n-\n- private final int pingRetryCount;\n-\n- // used mainly for testing, should always be true\n- private final boolean registerConnectionListener;\n+ public void onPingReceived(PingRequest pingRequest) {}\n \n+ }\n \n private final CopyOnWriteArrayList<Listener> listeners = new CopyOnWriteArrayList<>();\n \n private final ConcurrentMap<DiscoveryNode, NodeFD> nodesFD = newConcurrentMap();\n \n- private final FDConnectionListener connectionListener;\n-\n private volatile DiscoveryNodes latestNodes = EMPTY_NODES;\n \n- private volatile boolean running = false;\n+ private volatile long clusterStateVersion = ClusterState.UNKNOWN_VERSION;\n \n- public NodesFaultDetection(Settings settings, ThreadPool threadPool, TransportService transportService) {\n- super(settings);\n- this.threadPool = threadPool;\n- this.transportService = transportService;\n+ private volatile boolean running = false;\n \n- this.connectOnNetworkDisconnect = componentSettings.getAsBoolean(\"connect_on_network_disconnect\", true);\n- this.pingInterval = componentSettings.getAsTime(\"ping_interval\", timeValueSeconds(1));\n- this.pingRetryTimeout = componentSettings.getAsTime(\"ping_timeout\", timeValueSeconds(30));\n- this.pingRetryCount = componentSettings.getAsInt(\"ping_retries\", 3);\n- this.registerConnectionListener = componentSettings.getAsBoolean(\"register_connection_listener\", true);\n+ public NodesFaultDetection(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterName clusterName) {\n+ super(settings, threadPool, transportService, clusterName);\n \n logger.debug(\"[node ] uses ping_interval [{}], ping_timeout [{}], ping_retries [{}]\", pingInterval, pingRetryTimeout, pingRetryCount);\n \n transportService.registerHandler(PING_ACTION_NAME, new PingRequestHandler());\n-\n- this.connectionListener = new FDConnectionListener();\n- if (registerConnectionListener) {\n- transportService.addConnectionListener(connectionListener);\n- }\n }\n \n public void addListener(Listener listener) {\n@@ -107,9 +81,10 @@ public void removeListener(Listener listener) {\n listeners.remove(listener);\n }\n \n- public void updateNodes(DiscoveryNodes nodes) {\n+ public void updateNodes(DiscoveryNodes nodes, long clusterStateVersion) {\n DiscoveryNodes prevNodes = latestNodes;\n this.latestNodes = nodes;\n+ this.clusterStateVersion = clusterStateVersion;\n if (!running) {\n return;\n }\n@@ -121,7 +96,8 @@ public void updateNodes(DiscoveryNodes nodes) {\n }\n if (!nodesFD.containsKey(newNode)) {\n nodesFD.put(newNode, new NodeFD());\n- threadPool.schedule(pingInterval, ThreadPool.Names.SAME, new SendPingRequest(newNode));\n+ // we use schedule with a 0 time value to run the pinger on the pool as it will run on later\n+ threadPool.schedule(TimeValue.timeValueMillis(0), ThreadPool.Names.SAME, new SendPingRequest(newNode));\n }\n }\n for (DiscoveryNode removedNode : delta.removedNodes()) {\n@@ -146,12 +122,13 @@ public NodesFaultDetection stop() {\n }\n \n public void close() {\n+ super.close();\n stop();\n transportService.removeHandler(PING_ACTION_NAME);\n- 
transportService.removeConnectionListener(connectionListener);\n }\n \n- private void handleTransportDisconnect(DiscoveryNode node) {\n+ @Override\n+ protected void handleTransportDisconnect(DiscoveryNode node) {\n if (!latestNodes.nodeExists(node.id())) {\n return;\n }\n@@ -167,7 +144,8 @@ private void handleTransportDisconnect(DiscoveryNode node) {\n try {\n transportService.connectToNode(node);\n nodesFD.put(node, new NodeFD());\n- threadPool.schedule(pingInterval, ThreadPool.Names.SAME, new SendPingRequest(node));\n+ // we use schedule with a 0 time value to run the pinger on the pool as it will run on later\n+ threadPool.schedule(TimeValue.timeValueMillis(0), ThreadPool.Names.SAME, new SendPingRequest(node));\n } catch (Exception e) {\n logger.trace(\"[node ] [{}] transport disconnected (with verified connect)\", node);\n notifyNodeFailure(node, \"transport disconnected (with verified connect)\");\n@@ -189,6 +167,19 @@ public void run() {\n });\n }\n \n+ private void notifyPingReceived(final PingRequest pingRequest) {\n+ threadPool.generic().execute(new Runnable() {\n+\n+ @Override\n+ public void run() {\n+ for (Listener listener : listeners) {\n+ listener.onPingReceived(pingRequest);\n+ }\n+ }\n+\n+ });\n+ }\n+\n private class SendPingRequest implements Runnable {\n \n private final DiscoveryNode node;\n@@ -202,8 +193,9 @@ public void run() {\n if (!running) {\n return;\n }\n- transportService.sendRequest(node, PING_ACTION_NAME, new PingRequest(node.id()), options().withType(TransportRequestOptions.Type.PING).withTimeout(pingRetryTimeout),\n- new BaseTransportResponseHandler<PingResponse>() {\n+ final PingRequest pingRequest = new PingRequest(node.id(), clusterName, latestNodes.localNode(), clusterStateVersion);\n+ final TransportRequestOptions options = options().withType(TransportRequestOptions.Type.PING).withTimeout(pingRetryTimeout);\n+ transportService.sendRequest(node, PING_ACTION_NAME, pingRequest, options, new BaseTransportResponseHandler<PingResponse>() {\n @Override\n public PingResponse newInstance() {\n return new PingResponse();\n@@ -250,8 +242,7 @@ public void handleException(TransportException exp) {\n }\n } else {\n // resend the request, not reschedule, rely on send timeout\n- transportService.sendRequest(node, PING_ACTION_NAME, new PingRequest(node.id()),\n- options().withType(TransportRequestOptions.Type.PING).withTimeout(pingRetryTimeout), this);\n+ transportService.sendRequest(node, PING_ACTION_NAME, pingRequest, options, this);\n }\n }\n }\n@@ -270,18 +261,6 @@ static class NodeFD {\n volatile boolean running = true;\n }\n \n- private class FDConnectionListener implements TransportConnectionListener {\n- @Override\n- public void onNodeConnected(DiscoveryNode node) {\n- }\n-\n- @Override\n- public void onNodeDisconnected(DiscoveryNode node) {\n- handleTransportDisconnect(node);\n- }\n- }\n-\n-\n class PingRequestHandler extends BaseTransportRequestHandler<PingRequest> {\n \n @Override\n@@ -296,6 +275,15 @@ public void messageReceived(PingRequest request, TransportChannel channel) throw\n if (!latestNodes.localNodeId().equals(request.nodeId)) {\n throw new ElasticsearchIllegalStateException(\"Got pinged as node [\" + request.nodeId + \"], but I am node [\" + latestNodes.localNodeId() + \"]\");\n }\n+\n+ // PingRequest will have clusterName set to null if it came from a node of version <1.4.0\n+ if (request.clusterName != null && !request.clusterName.equals(clusterName)) {\n+ // Don't introduce new exception for bwc reasons\n+ throw new 
ElasticsearchIllegalStateException(\"Got pinged with cluster name [\" + request.clusterName + \"], but I'm part of cluster [\" + clusterName + \"]\");\n+ }\n+\n+ notifyPingReceived(request);\n+\n channel.sendResponse(new PingResponse());\n }\n \n@@ -306,28 +294,63 @@ public String executor() {\n }\n \n \n- static class PingRequest extends TransportRequest {\n+ public static class PingRequest extends TransportRequest {\n \n // the (assumed) node id we are pinging\n private String nodeId;\n \n+ private ClusterName clusterName;\n+\n+ private DiscoveryNode masterNode;\n+\n+ private long clusterStateVersion = ClusterState.UNKNOWN_VERSION;\n+\n PingRequest() {\n }\n \n- PingRequest(String nodeId) {\n+ PingRequest(String nodeId, ClusterName clusterName, DiscoveryNode masterNode, long clusterStateVersion) {\n this.nodeId = nodeId;\n+ this.clusterName = clusterName;\n+ this.masterNode = masterNode;\n+ this.clusterStateVersion = clusterStateVersion;\n+ }\n+\n+ public String nodeId() {\n+ return nodeId;\n+ }\n+\n+ public ClusterName clusterName() {\n+ return clusterName;\n+ }\n+\n+ public DiscoveryNode masterNode() {\n+ return masterNode;\n+ }\n+\n+ public long clusterStateVersion() {\n+ return clusterStateVersion;\n }\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n nodeId = in.readString();\n+ if (in.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ clusterName = ClusterName.readClusterName(in);\n+ masterNode = DiscoveryNode.readNode(in);\n+ clusterStateVersion = in.readLong();\n+ }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeString(nodeId);\n+ if (out.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ clusterName.writeTo(out);\n+ masterNode.writeTo(out);\n+ out.writeLong(clusterStateVersion);\n+ }\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/discovery/zen/fd/NodesFaultDetection.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.discovery.zen.DiscoveryNodesProvider;\n+import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n import org.elasticsearch.discovery.zen.ping.multicast.MulticastZenPing;\n import org.elasticsearch.discovery.zen.ping.unicast.UnicastHostsProvider;\n import org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing;\n@@ -55,20 +56,20 @@ public class ZenPingService extends AbstractLifecycleComponent<ZenPing> implemen\n \n // here for backward comp. 
with discovery plugins\n public ZenPingService(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterName clusterName, NetworkService networkService,\n- @Nullable Set<UnicastHostsProvider> unicastHostsProviders) {\n- this(settings, threadPool, transportService, clusterName, networkService, Version.CURRENT, unicastHostsProviders);\n+ ElectMasterService electMasterService, @Nullable Set<UnicastHostsProvider> unicastHostsProviders) {\n+ this(settings, threadPool, transportService, clusterName, networkService, Version.CURRENT, electMasterService, unicastHostsProviders);\n }\n \n @Inject\n public ZenPingService(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterName clusterName, NetworkService networkService,\n- Version version, @Nullable Set<UnicastHostsProvider> unicastHostsProviders) {\n+ Version version, ElectMasterService electMasterService, @Nullable Set<UnicastHostsProvider> unicastHostsProviders) {\n super(settings);\n ImmutableList.Builder<ZenPing> zenPingsBuilder = ImmutableList.builder();\n if (componentSettings.getAsBoolean(\"multicast.enabled\", true)) {\n zenPingsBuilder.add(new MulticastZenPing(settings, threadPool, transportService, clusterName, networkService, version));\n }\n // always add the unicast hosts, so it will be able to receive unicast requests even when working in multicast\n- zenPingsBuilder.add(new UnicastZenPing(settings, threadPool, transportService, clusterName, version, unicastHostsProviders));\n+ zenPingsBuilder.add(new UnicastZenPing(settings, threadPool, transportService, clusterName, version, electMasterService, unicastHostsProviders));\n \n this.zenPings = zenPingsBuilder.build();\n }", "filename": "src/main/java/org/elasticsearch/discovery/zen/ping/ZenPingService.java", "status": "modified" }, { "diff": "@@ -19,8 +19,12 @@\n \n package org.elasticsearch.discovery.zen.ping.unicast;\n \n+import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.google.common.collect.Lists;\n-import org.elasticsearch.*;\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.ElasticsearchIllegalStateException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n@@ -35,6 +39,7 @@\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.discovery.zen.DiscoveryNodesProvider;\n+import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n import org.elasticsearch.discovery.zen.ping.ZenPing;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.*;\n@@ -62,27 +67,30 @@ public class UnicastZenPing extends AbstractLifecycleComponent<ZenPing> implemen\n private final ThreadPool threadPool;\n private final TransportService transportService;\n private final ClusterName clusterName;\n+ private final ElectMasterService electMasterService;\n \n private final int concurrentConnects;\n \n- private final DiscoveryNode[] nodes;\n+ private final DiscoveryNode[] configuredTargetNodes;\n \n private volatile DiscoveryNodesProvider nodesProvider;\n \n private final AtomicInteger pingIdGenerator = new AtomicInteger();\n \n private final Map<Integer, ConcurrentMap<DiscoveryNode, PingResponse>> receivedResponses = newConcurrentMap();\n \n- // a list of 
temporal responses a node will return for a request (holds requests from other nodes)\n+ // a list of temporal responses a node will return for a request (holds requests from other configuredTargetNodes)\n private final Queue<PingResponse> temporalResponses = ConcurrentCollections.newQueue();\n \n private final CopyOnWriteArrayList<UnicastHostsProvider> hostsProviders = new CopyOnWriteArrayList<>();\n \n- public UnicastZenPing(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterName clusterName, Version version, @Nullable Set<UnicastHostsProvider> unicastHostsProviders) {\n+ public UnicastZenPing(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterName clusterName,\n+ Version version, ElectMasterService electMasterService, @Nullable Set<UnicastHostsProvider> unicastHostsProviders) {\n super(settings);\n this.threadPool = threadPool;\n this.transportService = transportService;\n this.clusterName = clusterName;\n+ this.electMasterService = electMasterService;\n \n if (unicastHostsProviders != null) {\n for (UnicastHostsProvider unicastHostsProvider : unicastHostsProviders) {\n@@ -99,20 +107,20 @@ public UnicastZenPing(Settings settings, ThreadPool threadPool, TransportService\n List<String> hosts = Lists.newArrayList(hostArr);\n logger.debug(\"using initial hosts {}, with concurrent_connects [{}]\", hosts, concurrentConnects);\n \n- List<DiscoveryNode> nodes = Lists.newArrayList();\n+ List<DiscoveryNode> configuredTargetNodes = Lists.newArrayList();\n int idCounter = 0;\n for (String host : hosts) {\n try {\n TransportAddress[] addresses = transportService.addressesFromString(host);\n // we only limit to 1 addresses, makes no sense to ping 100 ports\n for (int i = 0; (i < addresses.length && i < LIMIT_PORTS_COUNT); i++) {\n- nodes.add(new DiscoveryNode(\"#zen_unicast_\" + (++idCounter) + \"#\", addresses[i], version.minimumCompatibilityVersion()));\n+ configuredTargetNodes.add(new DiscoveryNode(\"#zen_unicast_\" + (++idCounter) + \"#\", addresses[i], version.minimumCompatibilityVersion()));\n }\n } catch (Exception e) {\n throw new ElasticsearchIllegalArgumentException(\"Failed to resolve address for [\" + host + \"]\", e);\n }\n }\n- this.nodes = nodes.toArray(new DiscoveryNode[nodes.size()]);\n+ this.configuredTargetNodes = configuredTargetNodes.toArray(new DiscoveryNode[configuredTargetNodes.size()]);\n \n transportService.registerHandler(ACTION_NAME, new UnicastPingRequestHandler());\n }\n@@ -143,6 +151,13 @@ public void setNodesProvider(DiscoveryNodesProvider nodesProvider) {\n this.nodesProvider = nodesProvider;\n }\n \n+ /**\n+ * Clears the list of cached ping responses.\n+ */\n+ public void clearTemporalReponses() {\n+ temporalResponses.clear();\n+ }\n+\n public PingResponse[] pingAndWait(TimeValue timeout) {\n final AtomicReference<PingResponse[]> response = new AtomicReference<>();\n final CountDownLatch latch = new CountDownLatch(1);\n@@ -237,18 +252,30 @@ void sendPings(final TimeValue timeout, @Nullable TimeValue waitTime, final Send\n DiscoveryNodes discoNodes = nodesProvider.nodes();\n pingRequest.pingResponse = new PingResponse(discoNodes.localNode(), discoNodes.masterNode(), clusterName);\n \n- HashSet<DiscoveryNode> nodesToPing = new HashSet<>(Arrays.asList(nodes));\n+ HashSet<DiscoveryNode> nodesToPingSet = new HashSet<>();\n for (PingResponse temporalResponse : temporalResponses) {\n // Only send pings to nodes that have the same cluster name.\n if (clusterName.equals(temporalResponse.clusterName())) {\n- 
nodesToPing.add(temporalResponse.target());\n+ nodesToPingSet.add(temporalResponse.target());\n }\n }\n \n for (UnicastHostsProvider provider : hostsProviders) {\n- nodesToPing.addAll(provider.buildDynamicNodes());\n+ nodesToPingSet.addAll(provider.buildDynamicNodes());\n+ }\n+\n+ // add all possible master nodes that were active in the last known cluster configuration\n+ for (ObjectCursor<DiscoveryNode> masterNode : discoNodes.getMasterNodes().values()) {\n+ nodesToPingSet.add(masterNode.value);\n }\n \n+ // sort the nodes by likelihood of being an active master\n+ List<DiscoveryNode> sortedNodesToPing = electMasterService.sortByMasterLikelihood(nodesToPingSet);\n+\n+ // new add the the unicast targets first\n+ ArrayList<DiscoveryNode> nodesToPing = Lists.newArrayList(configuredTargetNodes);\n+ nodesToPing.addAll(sortedNodesToPing);\n+\n final CountDownLatch latch = new CountDownLatch(nodesToPing.size());\n for (final DiscoveryNode node : nodesToPing) {\n // make sure we are connected", "filename": "src/main/java/org/elasticsearch/discovery/zen/ping/unicast/UnicastZenPing.java", "status": "modified" }, { "diff": "@@ -39,6 +39,7 @@\n import org.elasticsearch.transport.*;\n \n import java.util.Map;\n+import java.util.concurrent.atomic.AtomicBoolean;\n \n /**\n *\n@@ -82,12 +83,15 @@ public void publish(ClusterState clusterState, final Discovery.AckListener ackLi\n publish(clusterState, new AckClusterStatePublishResponseHandler(clusterState.nodes().size() - 1, ackListener));\n }\n \n- private void publish(ClusterState clusterState, final ClusterStatePublishResponseHandler publishResponseHandler) {\n+ private void publish(final ClusterState clusterState, final ClusterStatePublishResponseHandler publishResponseHandler) {\n \n DiscoveryNode localNode = nodesProvider.nodes().localNode();\n \n Map<Version, BytesReference> serializedStates = Maps.newHashMap();\n \n+ final AtomicBoolean timedOutWaitingForNodes = new AtomicBoolean(false);\n+ final TimeValue publishTimeout = discoverySettings.getPublishTimeout();\n+\n for (final DiscoveryNode node : clusterState.nodes()) {\n if (node.equals(localNode)) {\n continue;\n@@ -122,28 +126,30 @@ private void publish(ClusterState clusterState, final ClusterStatePublishRespons\n \n @Override\n public void handleResponse(TransportResponse.Empty response) {\n+ if (timedOutWaitingForNodes.get()) {\n+ logger.debug(\"node {} responded for cluster state [{}] (took longer than [{}])\", node, clusterState.version(), publishTimeout);\n+ }\n publishResponseHandler.onResponse(node);\n }\n \n @Override\n public void handleException(TransportException exp) {\n- logger.debug(\"failed to send cluster state to [{}]\", exp, node);\n+ logger.debug(\"failed to send cluster state to {}\", exp, node);\n publishResponseHandler.onFailure(node, exp);\n }\n });\n } catch (Throwable t) {\n- logger.debug(\"error sending cluster state to [{}]\", t, node);\n+ logger.debug(\"error sending cluster state to {}\", t, node);\n publishResponseHandler.onFailure(node, t);\n }\n }\n \n- TimeValue publishTimeout = discoverySettings.getPublishTimeout();\n if (publishTimeout.millis() > 0) {\n // only wait if the publish timeout is configured...\n try {\n- boolean awaited = publishResponseHandler.awaitAllNodes(publishTimeout);\n- if (!awaited) {\n- logger.debug(\"awaiting all nodes to process published state {} timed out, timeout {}\", clusterState.version(), publishTimeout);\n+ timedOutWaitingForNodes.set(!publishResponseHandler.awaitAllNodes(publishTimeout));\n+ if (timedOutWaitingForNodes.get()) 
{\n+ logger.debug(\"timed out waiting for all nodes to process published state [{}] (timeout [{}])\", clusterState.version(), publishTimeout);\n }\n } catch (InterruptedException e) {\n // ignore & restore interrupt", "filename": "src/main/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateAction.java", "status": "modified" }, { "diff": "@@ -134,20 +134,14 @@ public void clusterChanged(final ClusterChangedEvent event) {\n if (lifecycle.stoppedOrClosed()) {\n return;\n }\n- if (event.state().blocks().hasGlobalBlock(Discovery.NO_MASTER_BLOCK)) {\n- // we need to clear those flags, since we might need to recover again in case we disconnect\n- // from the cluster and then reconnect\n- recovered.set(false);\n- scheduledRecovery.set(false);\n- }\n if (event.localNodeMaster() && event.state().blocks().hasGlobalBlock(STATE_NOT_RECOVERED_BLOCK)) {\n checkStateMeetsSettingsAndMaybeRecover(event.state(), true);\n }\n }\n \n protected void checkStateMeetsSettingsAndMaybeRecover(ClusterState state, boolean asyncRecovery) {\n DiscoveryNodes nodes = state.nodes();\n- if (state.blocks().hasGlobalBlock(Discovery.NO_MASTER_BLOCK)) {\n+ if (state.blocks().hasGlobalBlock(discoveryService.getNoMasterBlock())) {\n logger.debug(\"not recovering from gateway, no master elected yet\");\n } else if (recoverAfterNodes != -1 && (nodes.masterAndDataNodes().size()) < recoverAfterNodes) {\n logger.debug(\"not recovering from gateway, nodes_size (data+master) [\" + nodes.masterAndDataNodes().size() + \"] < recover_after_nodes [\" + recoverAfterNodes + \"]\");", "filename": "src/main/java/org/elasticsearch/gateway/GatewayService.java", "status": "modified" } ] }
{ "body": "Fix for #6133, added the ability to send empty arrays as part of an index mapping json. For single and multi field properties objects\n\nSorry for the import changes, IntelliJ doesn't like eclipse imports and fights it like the devil.\n\nIf the test are in a non standard format please advise. I just find them easier to read like this.\n", "comments": [ { "body": "@cfontes I left two comments on the PR but otherwise it looks good, especially the tests.\n\nI would be nice to fix these imports. I'm quite surprised that you mention that you use Intellij since I thought we were using Intellij's defaults (we even have an eclipse configuration to make sure we are using Intellij's import style). Or maybe you have a non-default configuration of imports in Intellij?\n\nCould you please also sign [our contributor license agreement](http://www.elasticsearch.org/contributor-agreement/) so that we can eventually merge this pull request?\n\nThanks!\n", "created_at": "2014-08-26T09:07:59Z" }, { "body": "> Could you please also sign our contributor license agreement so that we can eventually merge this pull request?\n\nI just learned that it is in, so it is allright for the CLA.\n", "created_at": "2014-08-26T09:25:40Z" }, { "body": "Thanks for the review.\n\nI will look into fixing those and push it asap.\n\nAbout the imports, you are right... it's my bad, one of my other projects have a very specific rule for imports and it got into my ES project by mistake. Will fix that too.\n\nCheers!\n", "created_at": "2014-08-27T00:53:04Z" }, { "body": "@jpountz\n\nAdded back the IntelliJ default import settings.\nAdded some code to validate the scenarios explained.\nAdded some tests to validate those.\n\nSince there is nothing really to do in `parseProperties` when the parameter is an empty List it just skip the parsing and proceed without any exception since now it's a valid input that does nothing. If it's anything else (besides a map) it throws an exception. Please take a look to see if it's OK like that for you.\n\nThanks!\n", "created_at": "2014-08-27T06:48:38Z" }, { "body": "Merged, thanks!\n", "created_at": "2014-08-27T08:20:51Z" } ], "number": 7271, "title": "Added support for empty field arrays in mappings" }
{ "body": "While working on #7271 I found some methods that were not being used (I searched for the names too, to see if I could find any reflective calls) and some method parameters too.\n\nTo make it easier to merge #7271 I am submitting this as a side Pull request.\n\nI've ran all tests and they OK!\n\nThanks\n", "number": 7474, "review_comments": [], "title": "Removing useless methods and method parameters from ObjectMapper.java and TypeParsers.java" }
{ "commits": [ { "message": "Removing useless parameters" } ], "files": [ { "diff": "@@ -170,7 +170,7 @@ public static class TypeParser implements Mapper.TypeParser {\n } else if (Fields.MAX_INPUT_LENGTH.match(fieldName)) {\n builder.maxInputLength(Integer.parseInt(fieldNode.toString()));\n } else if (\"fields\".equals(fieldName) || \"path\".equals(fieldName)) {\n- parseMultiField(builder, name, node, parserContext, fieldName, fieldNode);\n+ parseMultiField(builder, name, parserContext, fieldName, fieldNode);\n } else if (fieldName.equals(Fields.CONTEXT)) {\n builder.contextMapping(ContextBuilder.loadMappings(fieldNode));\n } else {", "filename": "src/main/java/org/elasticsearch/index/mapper/core/CompletionFieldMapper.java", "status": "modified" }, { "diff": "@@ -153,7 +153,7 @@ public static class TypeParser implements Mapper.TypeParser {\n if (propName.equals(\"null_value\")) {\n builder.nullValue(propNode.toString());\n } else if (propName.equals(\"format\")) {\n- builder.dateTimeFormatter(parseDateTimeFormatter(propName, propNode));\n+ builder.dateTimeFormatter(parseDateTimeFormatter(propNode));\n } else if (propName.equals(\"numeric_resolution\")) {\n builder.timeUnit(TimeUnit.valueOf(propNode.toString().toUpperCase(Locale.ROOT)));\n } else if (propName.equals(\"locale\")) {", "filename": "src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -178,7 +178,7 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n } else if (propName.equals(\"ignore_above\")) {\n builder.ignoreAbove(XContentMapValues.nodeIntegerValue(propNode, -1));\n } else {\n- parseMultiField(builder, name, node, parserContext, propName, propNode);\n+ parseMultiField(builder, name, parserContext, propName, propNode);\n }\n }\n return builder;", "filename": "src/main/java/org/elasticsearch/index/mapper/core/StringFieldMapper.java", "status": "modified" }, { "diff": "@@ -154,7 +154,7 @@ public static void parseNumberField(NumberFieldMapper.Builder builder, String na\n } else if (propName.equals(\"similarity\")) {\n builder.similarity(parserContext.similarityLookupService().similarity(propNode.toString()));\n } else {\n- parseMultiField(builder, name, numberNode, parserContext, propName, propNode);\n+ parseMultiField(builder, name, parserContext, propName, propNode);\n }\n }\n }\n@@ -245,7 +245,7 @@ public static void parseField(AbstractFieldMapper.Builder builder, String name,\n }\n }\n \n- public static void parseMultiField(AbstractFieldMapper.Builder builder, String name, Map<String, Object> node, Mapper.TypeParser.ParserContext parserContext, String propName, Object propNode) {\n+ public static void parseMultiField(AbstractFieldMapper.Builder builder, String name, Mapper.TypeParser.ParserContext parserContext, String propName, Object propNode) {\n if (propName.equals(\"path\")) {\n builder.multiFieldPathType(parsePathType(name, propNode.toString()));\n } else if (propName.equals(\"fields\")) {\n@@ -291,7 +291,7 @@ private static IndexOptions nodeIndexOptionValue(final Object propNode) {\n }\n }\n \n- public static FormatDateTimeFormatter parseDateTimeFormatter(String fieldName, Object node) {\n+ public static FormatDateTimeFormatter parseDateTimeFormatter(Object node) {\n return Joda.forPattern(node.toString());\n }\n \n@@ -335,16 +335,6 @@ public static void parseIndex(String fieldName, String index, AbstractFieldMappe\n }\n }\n \n- public static boolean parseDocValues(String docValues) {\n- if (\"no\".equals(docValues)) {\n- 
return false;\n- } else if (\"yes\".equals(docValues)) {\n- return true;\n- } else {\n- return nodeBooleanValue(docValues);\n- }\n- }\n-\n public static boolean parseStore(String fieldName, String store) throws MapperParsingException {\n if (\"no\".equals(store)) {\n return false;", "filename": "src/main/java/org/elasticsearch/index/mapper/core/TypeParsers.java", "status": "modified" }, { "diff": "@@ -245,7 +245,7 @@ public static class TypeParser implements Mapper.TypeParser {\n } else if (fieldName.equals(\"normalize_lon\")) {\n builder.normalizeLon = XContentMapValues.nodeBooleanValue(fieldNode);\n } else {\n- parseMultiField(builder, name, node, parserContext, fieldName, fieldNode);\n+ parseMultiField(builder, name, parserContext, fieldName, fieldNode);\n }\n }\n return builder;", "filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java", "status": "modified" }, { "diff": "@@ -130,7 +130,7 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n } else if (fieldName.equals(\"path\")) {\n builder.path(fieldNode.toString());\n } else if (fieldName.equals(\"format\")) {\n- builder.dateTimeFormatter(parseDateTimeFormatter(builder.name(), fieldNode.toString()));\n+ builder.dateTimeFormatter(parseDateTimeFormatter(fieldNode.toString()));\n } else if (fieldName.equals(\"default\")) {\n builder.defaultTimestamp(fieldNode == null ? null : fieldNode.toString());\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java", "status": "modified" }, { "diff": "@@ -415,10 +415,6 @@ public String fullPath() {\n return this.fullPath;\n }\n \n- public BytesRef nestedTypePathAsBytes() {\n- return nestedTypePathAsBytes;\n- }\n-\n public String nestedTypePathAsString() {\n return nestedTypePathAsString;\n }\n@@ -791,21 +787,6 @@ public void parseDynamicValue(final ParseContext context, String currentFieldNam\n }\n }\n }\n- // DON'T do automatic ip detection logic, since it messes up with docs that have hosts and ips\n- // check if its an ip\n-// if (!resolved && text.indexOf('.') != -1) {\n-// try {\n-// IpFieldMapper.ipToLong(text);\n-// XContentMapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"ip\");\n-// if (builder == null) {\n-// builder = ipField(currentFieldName);\n-// }\n-// mapper = builder.build(builderContext);\n-// resolved = true;\n-// } catch (Exception e) {\n-// // failure to parse, not ip...\n-// }\n-// }\n if (!resolved) {\n Mapper.Builder builder = context.root().findTemplateBuilder(context, currentFieldName, \"string\");\n if (builder == null) {", "filename": "src/main/java/org/elasticsearch/index/mapper/object/ObjectMapper.java", "status": "modified" }, { "diff": "@@ -146,12 +146,12 @@ protected boolean processField(ObjectMapper.Builder builder, String fieldName, O\n List<FormatDateTimeFormatter> dateTimeFormatters = newArrayList();\n if (fieldNode instanceof List) {\n for (Object node1 : (List) fieldNode) {\n- dateTimeFormatters.add(parseDateTimeFormatter(fieldName, node1));\n+ dateTimeFormatters.add(parseDateTimeFormatter(node1));\n }\n } else if (\"none\".equals(fieldNode.toString())) {\n dateTimeFormatters = null;\n } else {\n- dateTimeFormatters.add(parseDateTimeFormatter(fieldName, fieldNode));\n+ dateTimeFormatters.add(parseDateTimeFormatter(fieldNode));\n }\n if (dateTimeFormatters == null) {\n ((Builder) builder).noDynamicDateTimeFormatter();", "filename": "src/main/java/org/elasticsearch/index/mapper/object/RootObjectMapper.java", "status": 
"modified" }, { "diff": "@@ -131,7 +131,7 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String propName = Strings.toUnderscoreCase(entry.getKey());\n Object propNode = entry.getValue();\n \n- parseMultiField(builder, name, node, parserContext, propName, propNode);\n+ parseMultiField(builder, name, parserContext, propName, propNode);\n }\n \n return builder;", "filename": "src/test/java/org/elasticsearch/index/mapper/externalvalues/ExternalMapper.java", "status": "modified" } ] }
{ "body": "The changes for Issue 6962 is present in 1.3.2, but there are still uses of Unsafe methods in other classes, apart from UnsafeUtils.\n\njprante signalled one occurrence on the site below:\nhttp://www.snip2code.com/Snippet/140415/Solaris-SPARC-JVM-64bit-crash-with-Java-\n\nbut I could not find a reference to it here at ElasticSearch.\n\nThese classes are involved.\n\nUnsafeChunkDecoder.class\nUnsafeChunkEncoder.class\nUnsafeChunkEncoderBE.class\nUnsafeChunkEncoderLE.class\nUnsafeChunkEncoders.class\nUnsafeDynamicChannelBuffer.class\n", "comments": [ { "body": "This was fixed in #7468 but was not backported to 1.3.\n", "created_at": "2014-10-14T23:30:24Z" }, { "body": "@rjernst should we backport to 1.3 - it's a bugfix from a sparc user perspective?\n", "created_at": "2014-10-15T07:46:08Z" }, { "body": "Agreed \n", "created_at": "2014-10-16T12:39:06Z" }, { "body": "I backported this to `1.3.5`\n", "created_at": "2014-10-16T13:39:14Z" } ], "number": 8078, "title": "Still use of unsafe methods in 1.3.4 - causing crashes on SPARC" }
{ "body": "The \"optimized\" encoders/decoders have been unreliable and error prone.\nAlso, fix LZFCompressor.compress to use LZFEncoder.safeEncode, which\ncreates a new safe encoder, instead of using a shared encoder (which\nis not threadsafe).\n\nCloses #8078\n", "number": 7468, "review_comments": [], "title": "Add all unsafe variants of LZF compress library functions to forbidden APIs." }
{ "commits": [ { "message": "Add all unsafe variants of LZF compress library functions to forbidden\nAPIs.\n\nThe \"optimized\" encoders/decoders have been unreliable and error prone.\nAlso, fix LZFCompressor.compress to use LZFEncoder.safeEncode, which\ncreates a new safe encoder, instead of using a shared encoder (which\nis not threadsafe)." } ], "files": [ { "diff": "@@ -69,3 +69,45 @@ java.nio.channels.FileChannel#read(java.nio.ByteBuffer, long)\n \n @defaultMessage Use Lucene.parseLenient instead it strips off minor version\n org.apache.lucene.util.Version#parseLeniently(java.lang.String)\n+\n+@defaultMessage unsafe encoders/decoders have problems in the lzf compress library. Use variants of encode/decode functions which take Encoder/Decoder.\n+com.ning.compress.lzf.impl.UnsafeChunkEncoders#createEncoder(int)\n+com.ning.compress.lzf.impl.UnsafeChunkEncoders#createNonAllocatingEncoder(int)\n+com.ning.compress.lzf.impl.UnsafeChunkEncoders#createEncoder(int, com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.impl.UnsafeChunkEncoders#createNonAllocatingEncoder(int, com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.impl.UnsafeChunkDecoder#<init>()\n+com.ning.compress.lzf.parallel.CompressTask\n+com.ning.compress.lzf.util.ChunkEncoderFactory#optimalInstance()\n+com.ning.compress.lzf.util.ChunkEncoderFactory#optimalInstance(int)\n+com.ning.compress.lzf.util.ChunkEncoderFactory#optimalNonAllocatingInstance(int)\n+com.ning.compress.lzf.util.ChunkEncoderFactory#optimalInstance(com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.util.ChunkEncoderFactory#optimalInstance(int, com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.util.ChunkEncoderFactory#optimalNonAllocatingInstance(int, com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.util.ChunkDecoderFactory#optimalInstance()\n+com.ning.compress.lzf.util.LZFFileInputStream#<init>(java.io.File)\n+com.ning.compress.lzf.util.LZFFileInputStream#<init>(java.io.FileDescriptor)\n+com.ning.compress.lzf.util.LZFFileInputStream#<init>(java.lang.String)\n+com.ning.compress.lzf.util.LZFFileOutputStream#<init>(java.io.File)\n+com.ning.compress.lzf.util.LZFFileOutputStream#<init>(java.io.File, boolean)\n+com.ning.compress.lzf.util.LZFFileOutputStream#<init>(java.io.FileDescriptor)\n+com.ning.compress.lzf.util.LZFFileOutputStream#<init>(java.lang.String)\n+com.ning.compress.lzf.util.LZFFileOutputStream#<init>(java.lang.String, boolean)\n+com.ning.compress.lzf.LZFEncoder#encode(byte[])\n+com.ning.compress.lzf.LZFEncoder#encode(byte[], int, int)\n+com.ning.compress.lzf.LZFEncoder#encode(byte[], int, int, com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.LZFEncoder#appendEncoded(byte[], int, int, byte[], int)\n+com.ning.compress.lzf.LZFEncoder#appendEncoded(byte[], int, int, byte[], int, com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.LZFCompressingInputStream#<init>(java.io.InputStream)\n+com.ning.compress.lzf.LZFDecoder#fastDecoder()\n+com.ning.compress.lzf.LZFDecoder#decode(byte[])\n+com.ning.compress.lzf.LZFDecoder#decode(byte[], int, int)\n+com.ning.compress.lzf.LZFDecoder#decode(byte[], byte[])\n+com.ning.compress.lzf.LZFDecoder#decode(byte[], int, int, byte[])\n+com.ning.compress.lzf.LZFInputStream#<init>(java.io.InputStream)\n+com.ning.compress.lzf.LZFInputStream#<init>(java.io.InputStream, boolean)\n+com.ning.compress.lzf.LZFInputStream#<init>(java.io.InputStream, com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.LZFInputStream#<init>(java.io.InputStream, com.ning.compress.BufferRecycler, 
boolean)\n+com.ning.compress.lzf.LZFOutputStream#<init>(java.io.OutputStream)\n+com.ning.compress.lzf.LZFOutputStream#<init>(java.io.OutputStream, com.ning.compress.BufferRecycler)\n+com.ning.compress.lzf.LZFUncompressor#<init>(com.ning.compress.DataHandler)\n+com.ning.compress.lzf.LZFUncompressor#<init>(com.ning.compress.DataHandler, com.ning.compress.BufferRecycler)", "filename": "core-signatures.txt", "status": "modified" }, { "diff": "@@ -48,15 +48,11 @@ public class LZFCompressor implements Compressor {\n \n public static final String TYPE = \"lzf\";\n \n- private ChunkEncoder encoder;\n-\n private ChunkDecoder decoder;\n \n public LZFCompressor() {\n- this.encoder = ChunkEncoderFactory.safeInstance();\n this.decoder = ChunkDecoderFactory.safeInstance();\n Loggers.getLogger(LZFCompressor.class).debug(\"using encoder [{}] and decoder[{}] \",\n- this.encoder.getClass().getSimpleName(),\n this.decoder.getClass().getSimpleName());\n }\n \n@@ -117,7 +113,7 @@ public byte[] uncompress(byte[] data, int offset, int length) throws IOException\n \n @Override\n public byte[] compress(byte[] data, int offset, int length) throws IOException {\n- return LZFEncoder.encode(encoder, data, offset, length);\n+ return LZFEncoder.safeEncode(data, offset, length);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/common/compress/lzf/LZFCompressor.java", "status": "modified" } ] }
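The forbidden-API signatures above push callers toward the lzf-compress variants that avoid the Unsafe-based fast paths. Below is a hedged sketch of the allowed path; only `LZFEncoder.safeEncode` and `ChunkDecoderFactory.safeInstance` are taken from the change itself, the payload and printout are illustrative, and decode calls are left out because they vary across compress-lzf versions.

```java
import java.nio.charset.StandardCharsets;

import com.ning.compress.lzf.ChunkDecoder;
import com.ning.compress.lzf.LZFEncoder;
import com.ning.compress.lzf.util.ChunkDecoderFactory;

public class SafeLzfExample {

    public static void main(String[] args) {
        byte[] data = "some payload to compress".getBytes(StandardCharsets.UTF_8);

        // safeEncode creates a fresh, pure-Java encoder per call, so it is
        // thread-safe and never touches the Unsafe-based implementations.
        byte[] compressed = LZFEncoder.safeEncode(data, 0, data.length);

        // The safe factory method returns a pure-Java decoder implementation.
        ChunkDecoder decoder = ChunkDecoderFactory.safeInstance();

        System.out.println("compressed " + data.length + " bytes into " + compressed.length);
    }
}
```

Because `safeEncode` builds a new encoder per call, it also sidesteps the shared-encoder thread-safety problem called out in the PR description above.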
{ "body": "This exception happened after a node restart and a delete-by-query hits the node immediately. Happened on 1.3.1\n\n```\n[2014-08-25 16:34:50,926][ERROR][index.engine.internal ] [node_name] [2013_09][3] failed to acquire searcher, source delete_by_query\njava.lang.NullPointerException\n at org.elasticsearch.index.engine.internal.InternalEngine.acquireSearcher(InternalEngine.java:694)\n at org.elasticsearch.index.shard.service.InternalIndexShard.acquireSearcher(InternalIndexShard.java:653)\n at org.elasticsearch.action.deletebyquery.TransportShardDeleteByQueryAction.shardOperationOnReplica(TransportShardDeleteByQueryAction.java:139)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicaOperationTransportHandler.messageReceived(TransportShardReplicationOperationAction.java:242)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicaOperationTransportHandler.messageReceived(TransportShardReplicationOperationAction.java:221)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\n[2014-08-25 16:34:50,927][WARN ][index.engine.internal ] [node_name] [2013_09][3] failed engine [deleteByQuery/shard failed on replica]\n[2014-08-25 16:35:13,744][WARN ][cluster.action.shard ] [node_name] [2013_09][3] sending failed shard for [2013_09][3], node[0Y7oOI64Qea6GCaSh3OtLw], [R], s[INITIALIZING], indexUUID [_na_], reason [engine failure, message [deleteByQuery/shard failed on replica][EngineException[[2013_09][3] failed to acquire searcher, source delete_by_query]; nested: NullPointerException; ]]\n```\n", "comments": [], "number": 7455, "title": "Internal: Wait until engine has started up when acquiring searcher" }
{ "body": "Today we have a small window where a searcher can be acquired but the\nengine is in the state of starting up. This causes a NPE triggering a\nshard failure if we are fast enough. This commit fixes this situation\ngracefully.\n\nCloses #7455\n", "number": 7456, "review_comments": [], "title": "Wait until engine is started up when acquiring searcher" }
{ "commits": [ { "message": "[ENGINE] Wait until engine is started up when acquireing searcher\n\nToday we have a small window where a searcher can be acquired but the\nengine is in the state of starting up. This causes a NPE triggering a\nshard failure if we are fast enough. This commit fixes this situation\ngracefully.\n\nCloses #7455" } ], "files": [ { "diff": "@@ -689,12 +689,23 @@ public void delete(DeleteByQuery delete) throws EngineException {\n @Override\n public final Searcher acquireSearcher(String source) throws EngineException {\n boolean success = false;\n+ /* Acquire order here is store -> manager since we need\n+ * to make sure that the store is not closed before\n+ * the searcher is acquired. */\n+ store.incRef();\n try {\n- /* Acquire order here is store -> manager since we need\n- * to make sure that the store is not closed before\n- * the searcher is acquired. */\n- store.incRef();\n- final SearcherManager manager = this.searcherManager;\n+ SearcherManager manager = this.searcherManager;\n+ if (manager == null) {\n+ ensureOpen();\n+ try (InternalLock _ = this.readLock.acquire()) {\n+ // we might start up right now and the searcherManager is not initialized\n+ // we take the read lock and retry again since write lock is taken\n+ // while start() is called and otherwise the ensureOpen() call will\n+ // barf.\n+ manager = this.searcherManager;\n+ assert manager != null : \"SearcherManager is null but shouldn't\";\n+ }\n+ }\n /* This might throw NPE but that's fine we will run ensureOpen()\n * in the catch block and throw the right exception */\n final IndexSearcher searcher = manager.acquire();\n@@ -707,6 +718,8 @@ public final Searcher acquireSearcher(String source) throws EngineException {\n manager.release(searcher);\n }\n }\n+ } catch (EngineClosedException ex) {\n+ throw ex;\n } catch (Throwable ex) {\n ensureOpen(); // throw EngineCloseException here if we are already closed\n logger.error(\"failed to acquire searcher, source {}\", ex, source);", "filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import org.apache.lucene.index.IndexDeletionPolicy;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.TermQuery;\n+import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.bytes.BytesArray;\n@@ -80,6 +81,7 @@\n import java.util.Arrays;\n import java.util.List;\n import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicReference;\n \n import static org.elasticsearch.common.settings.ImmutableSettings.Builder.EMPTY_SETTINGS;\n@@ -321,6 +323,33 @@ public void testSegments() throws Exception {\n assertThat(segments.get(2).isCompound(), equalTo(true));\n }\n \n+ public void testStartAndAcquireConcurrently() {\n+ ConcurrentMergeSchedulerProvider mergeSchedulerProvider = new ConcurrentMergeSchedulerProvider(shardId, EMPTY_SETTINGS, threadPool, new IndexSettingsService(shardId.index(), EMPTY_SETTINGS));\n+ final Engine engine = createEngine(engineSettingsService, store, createTranslog(), mergeSchedulerProvider);\n+ final AtomicBoolean startPending = new AtomicBoolean(true);\n+ Thread thread = new Thread() {\n+ public void run() {\n+ try {\n+ Thread.yield();\n+ engine.start();\n+ } finally {\n+ startPending.set(false);\n+ }\n+\n+ }\n+ };\n+ thread.start();\n+ 
while(startPending.get()) {\n+ try {\n+ engine.acquireSearcher(\"foobar\").close();\n+ break;\n+ } catch (EngineClosedException ex) {\n+ // all good\n+ }\n+ }\n+ engine.close();\n+ }\n+\n \n @Test\n public void testSegmentsWithMergeFlag() throws Exception {", "filename": "src/test/java/org/elasticsearch/index/engine/internal/InternalEngineTests.java", "status": "modified" } ] }
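The test added above loops on `acquireSearcher` while a second thread starts the engine, treating `EngineClosedException` as "not ready yet". Pulled out of the test, that calling pattern looks roughly like the hedged sketch below (the helper class and method names are made up):

```java
import org.elasticsearch.index.engine.Engine;
import org.elasticsearch.index.engine.EngineClosedException;

public class AcquireSearcherRetryExample {

    // Mirrors the pattern in the test above: while the engine may still be starting,
    // an acquire can fail with EngineClosedException and should be retried rather
    // than treated as a shard failure.
    public static Engine.Searcher acquireWhenStarted(Engine engine, String source) {
        while (true) {
            try {
                return engine.acquireSearcher(source);
            } catch (EngineClosedException e) {
                // Engine not started yet (or genuinely closed) - in a real caller a
                // retry limit or shutdown check would be needed; this sketch just yields.
                Thread.yield();
            }
        }
    }
}
```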
{ "body": "Tried using discovery.id.seed and it didn't work as expected.\n", "comments": [], "number": 7437, "title": "discovery.id.seed doesn't look like its working" }
{ "body": "Closes #7437\n", "number": 7439, "review_comments": [], "title": "Fix discovery.id.seed" }
{ "commits": [ { "message": "Fix discovery.id.seed\n\nCloses #7437" } ], "files": [ { "diff": "@@ -132,7 +132,7 @@ public void publish(ClusterState clusterState, Discovery.AckListener ackListener\n public static String generateNodeId(Settings settings) {\n String seed = settings.get(\"discovery.id.seed\");\n if (seed != null) {\n- Strings.randomBase64UUID(new Random(Long.parseLong(seed)));\n+ return Strings.randomBase64UUID(new Random(Long.parseLong(seed)));\n }\n return Strings.randomBase64UUID();\n }", "filename": "src/main/java/org/elasticsearch/discovery/DiscoveryService.java", "status": "modified" } ] }
{ "body": "This commit changes the way how files are selected for retransmission\non recovery / restore. Today this happens on a per-file basis where the\nrather weak checksum and the file length in bytes is compared to check if\na file is identical. This is prone to fail in the case of a checksum collision\nwhich can happen under certain circumstances.\nThe changes in this commit move the identity comparison to a per-commit / per-segment\nlevel where files are only treated as identical iff all the other files in the\ncommit / segment are the same. This `all or nothing` strategy is reducing the chance for\na collision dramatically since we also use a strong hash to identify commits / segments\nbased on the content of the `.si` / `segments.N` file.\n", "comments": [ { "body": "@imotov @rmuir can you guys do a review here? I am not sure about the XContent changes in Backup/Restore would be good to get some ideas here...\n", "created_at": "2014-08-20T14:46:28Z" }, { "body": "The diffing logic here etc looks great to me.\n", "created_at": "2014-08-20T15:15:30Z" }, { "body": "I left a couple of minor comments. Otherwise, looks good to me.\n", "created_at": "2014-08-20T17:34:09Z" }, { "body": "@imotov I pushed a new commit including a test for the `FileInfo` serialization\n", "created_at": "2014-08-21T07:47:54Z" }, { "body": "LGTM\n", "created_at": "2014-08-21T14:58:17Z" }, { "body": "I think we have a small regression here for snapshot and restore since we don't have the hash for the segments in the already existing snapshot. I think we can read the hashes for those where we calculated them from the snapshot on the fly if necessary. I will open a followup for this as I already discussed this with @imotov \n", "created_at": "2014-08-21T15:25:19Z" } ], "number": 7351, "title": "Improve recovery / snapshot restoring file identity handling" }
{ "body": "Due to additional safety added in #7351 we compute now a strong hash for\n.si and segments_N files which are compared during snapshot / restore.\nOld snapshots don't have this hash which can cause unnecessary copying\nof large amount of data. This commit adds the ability to fetch this\nhash from the blob store if needed.\n\nCloses #7434\n", "number": 7436, "review_comments": [ { "body": "Not needed\n", "created_at": "2014-08-26T00:06:09Z" }, { "body": "Debug leftovers?\n", "created_at": "2014-08-26T00:08:35Z" }, { "body": "The (int) conversion is not needed here.\n", "created_at": "2014-08-26T00:15:28Z" }, { "body": "The copyBytes method is going to copy the entire source regardless of the safety limit that was set above. \n", "created_at": "2014-08-26T00:18:27Z" } ], "title": "Add BWC layer to .si / segments_N hashing to identify segments accurately" }
{ "commits": [ { "message": "[SNAPSHOT] Add BWC layer to .si / segments_N hashing\n\nDue to additional safety added in #7351 we compute now a strong hash for\n.si and segments_N files which are compared during snapshot / restore.\nOld snapshots don't have this hash which can cause unnecessary copying\nof large amount of data. This commit adds the ability to fetch this\nhash from the blob store if needed.\n\nCloses #7434" } ], "files": [ { "diff": "@@ -24,6 +24,7 @@\n import com.google.common.collect.Lists;\n import org.apache.lucene.index.CorruptIndexException;\n import org.apache.lucene.store.*;\n+import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.cluster.metadata.SnapshotId;\n@@ -422,6 +423,7 @@ public void snapshot(SnapshotIndexCommit snapshotIndexCommit) {\n long indexTotalFilesSize = 0;\n ArrayList<FileInfo> filesToSnapshot = newArrayList();\n final Store.MetadataSnapshot metadata;\n+ // TODO apparently we don't use the MetadataSnapshot#.recoveryDiff(...) here but we should\n try {\n metadata = store.getMetadata(snapshotIndexCommit);\n } catch (IOException e) {\n@@ -436,7 +438,15 @@ public void snapshot(SnapshotIndexCommit snapshotIndexCommit) {\n final StoreFileMetaData md = metadata.get(fileName);\n boolean snapshotRequired = false;\n BlobStoreIndexShardSnapshot.FileInfo fileInfo = snapshots.findPhysicalIndexFile(fileName);\n-\n+ try {\n+ // in 1.4.0 we added additional hashes for .si / segments_N files\n+ // to ensure we don't double the space in the repo since old snapshots\n+ // don't have this hash we try to read that hash from the blob store\n+ // in a bwc compatible way.\n+ maybeRecalculateMetadataHash(blobContainer, fileInfo, metadata);\n+ } catch (Throwable e) {\n+ logger.warn(\"{} Can't calculate hash from blob for file [{}] [{}]\", e, shardId, fileInfo.physicalName(), fileInfo.metadata());\n+ }\n if (fileInfo == null || !fileInfo.isSame(md) || !snapshotFileExistsInBlobs(fileInfo, blobs)) {\n // commit point file does not exists in any commit point, or has different length, or does not fully exists in the listed blobs\n snapshotRequired = true;\n@@ -677,6 +687,25 @@ private void checkAborted() {\n }\n }\n \n+ /**\n+ * This is a BWC layer to ensure we update the snapshots metdata with the corresponding hashes before we compare them.\n+ * The new logic for StoreFileMetaData reads the entire <tt>.si</tt> and <tt>segments.n</tt> files to strengthen the\n+ * comparison of the files on a per-segment / per-commit level.\n+ */\n+ private static final void maybeRecalculateMetadataHash(ImmutableBlobContainer blobContainer, FileInfo fileInfo, Store.MetadataSnapshot snapshot) throws IOException {\n+ final StoreFileMetaData metadata;\n+ if (fileInfo != null && (metadata = snapshot.get(fileInfo.name())) != null) {\n+ if (metadata.hash().length > 0 && fileInfo.metadata().hash().length == 0) {\n+ // we have a hash - check if our repo has a hash too otherwise we have\n+ // to calculate it.\n+ byte[] bytes = blobContainer.readBlobFully(fileInfo.physicalName());\n+ final BytesRef spare = new BytesRef(bytes);\n+ Store.MetadataSnapshot.hashFile(fileInfo.metadata().hash(), spare);\n+ }\n+ }\n+\n+ }\n+\n /**\n * Context for restore operations\n */\n@@ -728,8 +757,17 @@ public void restore() {\n final List<FileInfo> filesToRecover = Lists.newArrayList();\n final Map<String, StoreFileMetaData> snapshotMetaData = new HashMap<>();\n final Map<String, FileInfo> fileInfos = new HashMap<>();\n-\n for 
(final FileInfo fileInfo : snapshot.indexFiles()) {\n+ try {\n+ // in 1.4.0 we added additional hashes for .si / segments_N files\n+ // to ensure we don't double the space in the repo since old snapshots\n+ // don't have this hash we try to read that hash from the blob store\n+ // in a bwc compatible way.\n+ maybeRecalculateMetadataHash(blobContainer, fileInfo, recoveryTargetMetadata);\n+ } catch (Throwable e) {\n+ // if the index is broken we might not be able to read it\n+ logger.warn(\"{} Can't calculate hash from blog for file [{}] [{}]\", e, shardId, fileInfo.physicalName(), fileInfo.metadata());\n+ }\n snapshotMetaData.put(fileInfo.metadata().name(), fileInfo.metadata());\n fileInfos.put(fileInfo.metadata().name(), fileInfo);\n }", "filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java", "status": "modified" }, { "diff": "@@ -550,18 +550,15 @@ static Map<String, String> readLegacyChecksums(Directory directory) throws IOExc\n \n private static void checksumFromLuceneFile(Directory directory, String file, ImmutableMap.Builder<String, StoreFileMetaData> builder, ESLogger logger, Version version, boolean readFileAsHash) throws IOException {\n final String checksum;\n- BytesRef fileHash = new BytesRef();\n+ final BytesRef fileHash = new BytesRef();\n try (IndexInput in = directory.openInput(file, IOContext.READONCE)) {\n try {\n if (in.length() < CodecUtil.footerLength()) {\n // truncated files trigger IAE if we seek negative... these files are really corrupted though\n throw new CorruptIndexException(\"Can't retrieve checksum from file: \" + file + \" file length must be >= \" + CodecUtil.footerLength() + \" but was: \" + in.length());\n }\n if (readFileAsHash) {\n- final int len = (int)Math.min(1024 * 1024, in.length()); // for safety we limit this to 1MB\n- fileHash.bytes = new byte[len];\n- in.readBytes(fileHash.bytes, 0, len);\n- fileHash.length = len;\n+ hashFile(fileHash, in);\n }\n checksum = digestToString(CodecUtil.retrieveChecksum(in));\n \n@@ -573,6 +570,27 @@ private static void checksumFromLuceneFile(Directory directory, String file, Imm\n }\n }\n \n+ /**\n+ * Computes a strong hash value for small files. Note that this method should only be used for files < 1MB\n+ */\n+ public static void hashFile(BytesRef fileHash, IndexInput in) throws IOException {\n+ final int len = (int)Math.min(1024 * 1024, in.length()); // for safety we limit this to 1MB\n+ fileHash.offset = 0;\n+ fileHash.grow(len);\n+ fileHash.length = len;\n+ in.readBytes(fileHash.bytes, 0, len);\n+ }\n+\n+ /**\n+ * Computes a strong hash value for small files. 
Note that this method should only be used for files < 1MB\n+ */\n+ public static void hashFile(BytesRef fileHash, BytesRef source) throws IOException {\n+ final int len = Math.min(1024 * 1024, source.length); // for safety we limit this to 1MB\n+ fileHash.offset = 0;\n+ fileHash.grow(len);\n+ fileHash.length = len;\n+ System.arraycopy(source.bytes, source.offset, fileHash.bytes, 0, len);\n+ }\n \n @Override\n public Iterator<StoreFileMetaData> iterator() {", "filename": "src/main/java/org/elasticsearch/index/store/Store.java", "status": "modified" }, { "diff": "@@ -450,114 +450,6 @@ public Version getMasterVersion() {\n return client().admin().cluster().prepareState().get().getState().nodes().masterNode().getVersion();\n }\n \n- @Test\n- @TestLogging(\"index.snapshots:TRACE,index.shard.service:TRACE\")\n- public void testSnapshotAndRestore() throws ExecutionException, InterruptedException, IOException {\n- logger.info(\"--> creating repository\");\n- assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n- .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n- .put(\"location\", newTempDir(LifecycleScope.SUITE).getAbsolutePath())\n- .put(\"compress\", randomBoolean())\n- .put(\"chunk_size\", randomIntBetween(100, 1000))));\n- String[] indicesBefore = new String[randomIntBetween(2,5)];\n- String[] indicesAfter = new String[randomIntBetween(2,5)];\n- for (int i = 0; i < indicesBefore.length; i++) {\n- indicesBefore[i] = \"index_before_\" + i;\n- createIndex(indicesBefore[i]);\n- }\n- for (int i = 0; i < indicesAfter.length; i++) {\n- indicesAfter[i] = \"index_after_\" + i;\n- createIndex(indicesAfter[i]);\n- }\n- String[] indices = new String[indicesBefore.length + indicesAfter.length];\n- System.arraycopy(indicesBefore, 0, indices, 0, indicesBefore.length);\n- System.arraycopy(indicesAfter, 0, indices, indicesBefore.length, indicesAfter.length);\n- ensureYellow();\n- logger.info(\"--> indexing some data\");\n- IndexRequestBuilder[] buildersBefore = new IndexRequestBuilder[randomIntBetween(10, 200)];\n- for (int i = 0; i < buildersBefore.length; i++) {\n- buildersBefore[i] = client().prepareIndex(RandomPicks.randomFrom(getRandom(), indicesBefore), \"foo\", Integer.toString(i)).setSource(\"{ \\\"foo\\\" : \\\"bar\\\" } \");\n- }\n- IndexRequestBuilder[] buildersAfter = new IndexRequestBuilder[randomIntBetween(10, 200)];\n- for (int i = 0; i < buildersAfter.length; i++) {\n- buildersAfter[i] = client().prepareIndex(RandomPicks.randomFrom(getRandom(), indicesBefore), \"bar\", Integer.toString(i)).setSource(\"{ \\\"foo\\\" : \\\"bar\\\" } \");\n- }\n- indexRandom(true, buildersBefore);\n- indexRandom(true, buildersAfter);\n- assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n- long[] counts = new long[indices.length];\n- for (int i = 0; i < indices.length; i++) {\n- counts[i] = client().prepareCount(indices[i]).get().getCount();\n- }\n-\n- logger.info(\"--> snapshot subset of indices before upgrage\");\n- CreateSnapshotResponse createSnapshotResponse = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"index_before_*\").get();\n- assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n- assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n-\n- 
assertThat(client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap-1\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n-\n- logger.info(\"--> delete some data from indices that were already snapshotted\");\n- int howMany = randomIntBetween(1, buildersBefore.length);\n- \n- for (int i = 0; i < howMany; i++) {\n- IndexRequestBuilder indexRequestBuilder = RandomPicks.randomFrom(getRandom(), buildersBefore);\n- IndexRequest request = indexRequestBuilder.request();\n- client().prepareDelete(request.index(), request.type(), request.id()).get();\n- }\n- refresh();\n- final long numDocs = client().prepareCount(indices).get().getCount();\n- assertThat(client().prepareCount(indices).get().getCount(), lessThan((long) (buildersBefore.length + buildersAfter.length)));\n-\n-\n- client().admin().indices().prepareUpdateSettings(indices).setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"none\")).get();\n- backwardsCluster().allowOnAllNodes(indices);\n- logClusterState();\n- boolean upgraded;\n- do {\n- logClusterState();\n- CountResponse countResponse = client().prepareCount().get();\n- assertHitCount(countResponse, numDocs);\n- upgraded = backwardsCluster().upgradeOneNode();\n- ensureYellow();\n- countResponse = client().prepareCount().get();\n- assertHitCount(countResponse, numDocs);\n- } while (upgraded);\n- client().admin().indices().prepareUpdateSettings(indices).setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"all\")).get();\n-\n- logger.info(\"--> close indices\");\n-\n- client().admin().indices().prepareClose(\"index_before_*\").get();\n-\n- logger.info(\"--> restore all indices from the snapshot\");\n- RestoreSnapshotResponse restoreSnapshotResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).execute().actionGet();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n-\n- ensureYellow();\n- assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n- for (int i = 0; i < indices.length; i++) {\n- assertThat(counts[i], equalTo(client().prepareCount(indices[i]).get().getCount()));\n- }\n-\n- logger.info(\"--> snapshot subset of indices after upgrade\");\n- createSnapshotResponse = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-2\").setWaitForCompletion(true).setIndices(\"index_*\").get();\n- assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n- assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n-\n- // Test restore after index deletion\n- logger.info(\"--> delete indices\");\n- String index = RandomPicks.randomFrom(getRandom(), indices);\n- cluster().wipeIndices(index);\n- logger.info(\"--> restore one index after deletion\");\n- restoreSnapshotResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap-2\").setWaitForCompletion(true).setIndices(index).execute().actionGet();\n- assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n- ensureYellow();\n- assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n- for (int i = 0; i < indices.length; i++) {\n- assertThat(counts[i], 
equalTo(client().prepareCount(indices[i]).get().getCount()));\n- }\n- }\n-\n @Test\n public void testDeleteByQuery() throws ExecutionException, InterruptedException {\n createIndex(\"test\");", "filename": "src/test/java/org/elasticsearch/bwcompat/BasicBackwardsCompatibilityTest.java", "status": "modified" }, { "diff": "@@ -37,8 +37,10 @@\n import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse;\n import org.elasticsearch.action.count.CountResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.SnapshotMetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n@@ -55,6 +57,8 @@\n import java.io.File;\n import java.util.ArrayList;\n import java.util.Arrays;\n+import java.util.List;\n+import java.util.concurrent.ExecutionException;\n import java.util.concurrent.TimeUnit;\n \n import static com.google.common.collect.Lists.newArrayList;\n@@ -1255,6 +1259,68 @@ public void snapshotRelocatingPrimary() throws Exception {\n logger.info(\"--> done\");\n }\n \n+ public void testSnapshotMoreThanOnce() throws ExecutionException, InterruptedException {\n+ Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE))\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ // only one shard\n+ assertAcked(prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)));\n+ ensureGreen();\n+ logger.info(\"--> indexing\");\n+\n+ final int numdocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numdocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test\", \"doc\", Integer.toString(i)).setSource(\"foo\", \"bar\" + i);\n+ }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+ assertNoFailures(client().admin().indices().prepareOptimize(\"test\").setForce(true).setFlush(true).setWaitForMerge(true).setMaxNumSegments(1).get());\n+\n+ CreateSnapshotResponse createSnapshotResponseFirst = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseFirst.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), greaterThan(1));\n+ }\n+ }\n+\n+ CreateSnapshotResponse 
createSnapshotResponseSecond = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-1\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseSecond.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseSecond.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseSecond.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-1\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-1\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), equalTo(1)); // we flush before the snapshot such that we have to process the segments_N files\n+ }\n+ }\n+\n+ client().prepareDelete(\"test\", \"doc\", \"1\").get();\n+ CreateSnapshotResponse createSnapshotResponseThird = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-2\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseThird.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseThird.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseThird.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-2\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-2\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), equalTo(2)); // we flush before the snapshot such that we have to process the segments_N files plus the .del file\n+ }\n+ }\n+ }\n+\n private boolean waitForIndex(final String index, TimeValue timeout) throws InterruptedException {\n return awaitBusy(new Predicate<Object>() {\n @Override", "filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,242 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.snapshots;\n+\n+import com.carrotsearch.randomizedtesting.LifecycleScope;\n+import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n+import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotIndexShardStatus;\n+import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotStatus;\n+import org.elasticsearch.action.count.CountResponse;\n+import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.routing.allocation.decider.EnableAllocationDecider;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.test.ElasticsearchBackwardsCompatIntegrationTest;\n+import org.junit.Ignore;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.List;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.lessThan;\n+\n+public class SnapshotBackwardsCompatibilityTest extends ElasticsearchBackwardsCompatIntegrationTest {\n+\n+ @Test\n+ public void testSnapshotAndRestore() throws ExecutionException, InterruptedException, IOException {\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE).getAbsolutePath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+ String[] indicesBefore = new String[randomIntBetween(2,5)];\n+ String[] indicesAfter = new String[randomIntBetween(2,5)];\n+ for (int i = 0; i < indicesBefore.length; i++) {\n+ indicesBefore[i] = \"index_before_\" + i;\n+ createIndex(indicesBefore[i]);\n+ }\n+ for (int i = 0; i < indicesAfter.length; i++) {\n+ indicesAfter[i] = \"index_after_\" + i;\n+ createIndex(indicesAfter[i]);\n+ }\n+ String[] indices = new String[indicesBefore.length + indicesAfter.length];\n+ System.arraycopy(indicesBefore, 0, indices, 0, indicesBefore.length);\n+ System.arraycopy(indicesAfter, 0, indices, indicesBefore.length, indicesAfter.length);\n+ ensureYellow();\n+ logger.info(\"--> indexing some data\");\n+ IndexRequestBuilder[] buildersBefore = new IndexRequestBuilder[randomIntBetween(10, 200)];\n+ for (int i = 0; i < buildersBefore.length; i++) {\n+ buildersBefore[i] = client().prepareIndex(RandomPicks.randomFrom(getRandom(), indicesBefore), \"foo\", Integer.toString(i)).setSource(\"{ \\\"foo\\\" : \\\"bar\\\" } \");\n+ }\n+ IndexRequestBuilder[] buildersAfter = new IndexRequestBuilder[randomIntBetween(10, 200)];\n+ for (int i = 0; i < buildersAfter.length; i++) {\n+ buildersAfter[i] = client().prepareIndex(RandomPicks.randomFrom(getRandom(), indicesBefore), 
\"bar\", Integer.toString(i)).setSource(\"{ \\\"foo\\\" : \\\"bar\\\" } \");\n+ }\n+ indexRandom(true, buildersBefore);\n+ indexRandom(true, buildersAfter);\n+ assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n+ long[] counts = new long[indices.length];\n+ for (int i = 0; i < indices.length; i++) {\n+ counts[i] = client().prepareCount(indices[i]).get().getCount();\n+ }\n+\n+ logger.info(\"--> snapshot subset of indices before upgrage\");\n+ CreateSnapshotResponse createSnapshotResponse = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"index_before_*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ assertThat(client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap-1\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+\n+ logger.info(\"--> delete some data from indices that were already snapshotted\");\n+ int howMany = randomIntBetween(1, buildersBefore.length);\n+\n+ for (int i = 0; i < howMany; i++) {\n+ IndexRequestBuilder indexRequestBuilder = RandomPicks.randomFrom(getRandom(), buildersBefore);\n+ IndexRequest request = indexRequestBuilder.request();\n+ client().prepareDelete(request.index(), request.type(), request.id()).get();\n+ }\n+ refresh();\n+ final long numDocs = client().prepareCount(indices).get().getCount();\n+ assertThat(client().prepareCount(indices).get().getCount(), lessThan((long) (buildersBefore.length + buildersAfter.length)));\n+\n+\n+ client().admin().indices().prepareUpdateSettings(indices).setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"none\")).get();\n+ backwardsCluster().allowOnAllNodes(indices);\n+ logClusterState();\n+ boolean upgraded;\n+ do {\n+ logClusterState();\n+ CountResponse countResponse = client().prepareCount().get();\n+ assertHitCount(countResponse, numDocs);\n+ upgraded = backwardsCluster().upgradeOneNode();\n+ ensureYellow();\n+ countResponse = client().prepareCount().get();\n+ assertHitCount(countResponse, numDocs);\n+ } while (upgraded);\n+ client().admin().indices().prepareUpdateSettings(indices).setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"all\")).get();\n+\n+ logger.info(\"--> close indices\");\n+\n+ client().admin().indices().prepareClose(\"index_before_*\").get();\n+\n+ logger.info(\"--> restore all indices from the snapshot\");\n+ RestoreSnapshotResponse restoreSnapshotResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+\n+ ensureYellow();\n+ assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n+ for (int i = 0; i < indices.length; i++) {\n+ assertThat(counts[i], equalTo(client().prepareCount(indices[i]).get().getCount()));\n+ }\n+\n+ logger.info(\"--> snapshot subset of indices after upgrade\");\n+ createSnapshotResponse = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-2\").setWaitForCompletion(true).setIndices(\"index_*\").get();\n+ 
assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ // Test restore after index deletion\n+ logger.info(\"--> delete indices\");\n+ String index = RandomPicks.randomFrom(getRandom(), indices);\n+ cluster().wipeIndices(index);\n+ logger.info(\"--> restore one index after deletion\");\n+ restoreSnapshotResponse = client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap-2\").setWaitForCompletion(true).setIndices(index).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+ ensureYellow();\n+ assertThat(client().prepareCount(indices).get().getCount(), equalTo((long) (buildersBefore.length + buildersAfter.length)));\n+ for (int i = 0; i < indices.length; i++) {\n+ assertThat(counts[i], equalTo(client().prepareCount(indices[i]).get().getCount()));\n+ }\n+ }\n+\n+ public void testSnapshotMoreThanOnce() throws ExecutionException, InterruptedException, IOException {\n+ Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE).getAbsoluteFile())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ // only one shard\n+ assertAcked(prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1)\n+ .put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0)\n+ ));\n+ ensureYellow();\n+ logger.info(\"--> indexing\");\n+\n+ final int numDocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numDocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test\", \"doc\", Integer.toString(i)).setSource(\"foo\", \"bar\" + i);\n+ }\n+ indexRandom(true, builders);\n+ flushAndRefresh();\n+ assertNoFailures(client().admin().indices().prepareOptimize(\"test\").setForce(true).setFlush(true).setWaitForMerge(true).setMaxNumSegments(1).get());\n+\n+ CreateSnapshotResponse createSnapshotResponseFirst = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseFirst.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseFirst.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), greaterThan(1));\n+ }\n+ }\n+ if (frequently()) {\n+ logger.info(\"--> upgrade\");\n+ client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"none\")).get();\n+ backwardsCluster().allowOnAllNodes(\"test\");\n+ logClusterState();\n+ 
boolean upgraded;\n+ do {\n+ logClusterState();\n+ CountResponse countResponse = client().prepareCount().get();\n+ assertHitCount(countResponse, numDocs);\n+ upgraded = backwardsCluster().upgradeOneNode();\n+ ensureYellow();\n+ countResponse = client().prepareCount().get();\n+ assertHitCount(countResponse, numDocs);\n+ } while (upgraded);\n+ client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(EnableAllocationDecider.INDEX_ROUTING_ALLOCATION_ENABLE, \"all\")).get();\n+ }\n+ if (randomBoolean()) {\n+ client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, randomIntBetween(1,2))).get();\n+ }\n+\n+ CreateSnapshotResponse createSnapshotResponseSecond = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-1\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseSecond.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseSecond.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseSecond.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-1\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-1\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), equalTo(1)); // we flush before the snapshot such that we have to process the segments_N files\n+ }\n+ }\n+\n+ client().prepareDelete(\"test\", \"doc\", \"1\").get();\n+ CreateSnapshotResponse createSnapshotResponseThird = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-2\").setWaitForCompletion(true).setIndices(\"test\").get();\n+ assertThat(createSnapshotResponseThird.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponseThird.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponseThird.getSnapshotInfo().totalShards()));\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-2\").get().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n+ {\n+ SnapshotStatus snapshotStatus = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").setSnapshots(\"test-2\").get().getSnapshots().get(0);\n+ List<SnapshotIndexShardStatus> shards = snapshotStatus.getShards();\n+ for (SnapshotIndexShardStatus status : shards) {\n+ assertThat(status.getStats().getProcessedFiles(), equalTo(2)); // we flush before the snapshot such that we have to process the segments_N files plus the .del file\n+ }\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/snapshots/SnapshotBackwardsCompatibilityTest.java", "status": "added" } ] }
{ "body": "Upgrade caused shard data to stay on nodes even after it isn't useful any more.\n\nThis comes from https://groups.google.com/forum/#!topic/elasticsearch/Mn1N0xmjsL8\n\nWhat I did:\nStarted upgrading from Elasticsearch 1.2.1 to Elasticsearch 1.3.2. For each of the 6 nodes I updated:\n- Set allocation to primaries only\n- Sync new plugins into place\n- Update deb package\n- Restart Elasticsearch\n- Wait for Elasticsearch to respond on the local host\n- Set allocation to all\n- Wait for Elasticsearch to report GREEN\n- Sleep for half an hour so the cluster can rebalance itself a bit\n\nWhat happened:\nThe new version of Elasticsearch came up but didn't remove all the shard data it can't use. This picture from Whatson shows the problem pretty well:\nhttps://wikitech.wikimedia.org/wiki/File:Whatson_out_of_disk.png\nThe nodes on the left were upgraded and blue means disk usage by Elasticsearch and brown is \"other\" disk usage.\n\nWhen I dig around on the filesystem all the space usage is in the shard storage directory (/var/lib/elasticsearch/production-search-eqiad/nodes/0/indices) but when I compare the list of open files to the list of files on the file system [with this](https://gist.github.com/nik9000/d2dba49c156a5259a7d6) I see that whole directories are just sitting around, unused. Hitting the `/_cat/shards/<directory_name>` corroborates that the shard in the directory isn't on the node. Oddly, if we keep poking around we find open files in directories representing shards that we don't expect to be on the node either....\n\nWhat we're doing now:\nWe're going to try restarting the upgrade and blasting the data directory on the node as we upgrade it.\n\nReproduction steps:\nNo idea. And I'm a bit afraid to keep pushing things on our cluster with it in the state that it is in.\n", "comments": [ { "body": "could this be related to #6692 did you upgrade all nodes to 1.3 or do you still have nodes < 1.3.0 in the cluster?\n", "created_at": "2014-08-21T19:09:42Z" }, { "body": "Only about 1/3 of the nodes before we got warnings about disk space.\n", "created_at": "2014-08-21T19:16:30Z" }, { "body": "I guess it's not freeing the space unless an upgraded node holds a copy of the shard. That is new in 1.3 and I still try to remember what the background was. Can you check if that assumption is true, are the shards that are not delete allocated on old nodes? \n", "created_at": "2014-08-21T19:30:02Z" }, { "body": "Well, this is almost certainly the cause:\n\n``` java\n // If all nodes have been upgraded to >= 1.3.0 at some point we get back here and have the chance to\n // run this api. (when cluster state is then updated)\n if (node.getVersion().before(Version.V_1_3_0)) {\n logger.debug(\"Skip deleting deleting shard instance [{}], a node holding a shard instance is < 1.3.0\", shardRouting);\n return false;\n }\n```\n\n1.3 won't delete stuff from the disks until the whole cluster is 1.3. That's ugly. I run with disks 50% full and the upgrade process almost filled them just with shuffling.\n\nSide note: if the shards are still in the routing table it'd be nice to see them. Right now they seem to be invisble to he _cat api.\n", "created_at": "2014-08-21T19:32:12Z" }, { "body": "@nik9000 this was a temporary thing to add extra safety. It will get lower the more nodes you upgrade. I agree we could expose some more infos here if stuff is still on disk. \n", "created_at": "2014-08-21T19:36:34Z" }, { "body": "This gave me quite a scare! 
I was running this upgrade over night with a script with extra sleeping to keep the cluster balanced. It woke me up with 99% disk utilization on one of the nodes. I'll keep pushing the upgrade through carefully.\n", "created_at": "2014-08-21T19:44:11Z" }, { "body": "For posterity: if you nuke the contents of your node's disk after stopping Elasticsearch 1.2 but before starting Elasticsearch 1.3 then you won't end up with too much data that can't be cleared. The more nodes you upgrade the more shards you'll be able to delete any way - like @s1monw said.\n", "created_at": "2014-08-21T19:49:17Z" }, { "body": "just to clarify a bit more we added some safety in 1.3 that required a new API and we can only call this API if we know that we are allocated on another 1.3 or newer node that is why we keep the data around longer. thanks for opening this nik!\n", "created_at": "2014-08-21T20:11:30Z" }, { "body": "So far we haven't seen any cleanup of old shards and we've just restarted the last node to pick up 1.3.2.\n![whatson_not_yet_cleaning](https://cloud.githubusercontent.com/assets/215970/4014563/c678a198-2a21-11e4-88e2-f5a00fe5c987.png)\nDeleting the contents of the node slowed down the upgrade but allowed us to continue the process without space being taken up by indexes we couldn't remove.\n", "created_at": "2014-08-22T17:30:28Z" }, { "body": "The unused shard copies only get deleted if all its active copies can be verified. Maybe shard to be cleaned up had copies on this not yet upgraded node?\n\nUnused shard copies should get cleaned up now, if that isn't the case then that is bad.\n\nIf you enable trace logging for the `indices.store` category then we can get a peek in ES' decision making.\n", "created_at": "2014-08-22T17:48:10Z" }, { "body": "@martijnvg - I'll see what happens once all the cluster goes green after the last upgrade - that'll be in under an hour.\n\nDid we do anything to allow changing log levels on the fly? I remember seeing something about it but #6416 is still open.\n", "created_at": "2014-08-22T18:19:47Z" }, { "body": "And by we I mean you, I guess :)\n", "created_at": "2014-08-22T18:20:04Z" }, { "body": ":) Well this has been in for a while: #2517\n\nWhich allows to change the log settings via the cluster update api.\n", "created_at": "2014-08-22T18:24:30Z" }, { "body": "OK! Here is something: https://gist.github.com/nik9000/89013550ec78da5808e4\n", "created_at": "2014-08-22T18:31:10Z" }, { "body": "That is getting spit out constantly.\n", "created_at": "2014-08-22T18:36:43Z" }, { "body": "Looks like it is on every node as well.\n", "created_at": "2014-08-22T18:48:15Z" }, { "body": "Cluster is now green and lots of old data still sitting around.\n", "created_at": "2014-08-22T18:50:08Z" }, { "body": "@nik9000 this is very odd. The line points at a null clusterName . All the nodes are continuously logging this? Can I ask you to enable debug logging for the root logger and share the log? I hope to get more context into when this can happen.\n", "created_at": "2014-08-22T19:06:59Z" }, { "body": "I see that cluster name is something that as introduced in 1.1.1. Maybe a coincidence - but I haven't performed a full cluster restart since upgrading to 1.1.0.\n", "created_at": "2014-08-22T19:08:18Z" }, { "body": "Let me see about that debug logging - seems like that'll be a ton of data. Also - looks like this is the only thing that doesn't check if the cluster name is non null. 
Probably just a coincidence because it supposed to be non-null since 1.1.1 I guess.....\n", "created_at": "2014-08-22T19:09:26Z" }, { "body": "@nik9000 I'm not sure I follow what you mean by \n\n> looks like this is the only thing that doesn't check if the cluster name is non null. \n\nI was referring to this line: https://github.com/elasticsearch/elasticsearch/blob/v1.3.2/src/main/java/org/elasticsearch/indices/store/IndicesStore.java#L418\n", "created_at": "2014-08-22T19:13:20Z" }, { "body": "@bleskes - sorry, yeah. I was looking at other code that looked at the cluster name and its pretty careful around the cluster name potentially being null. Like \nhttps://github.com/elasticsearch/elasticsearch/blob/v1.3.2/src/main/java/org/elasticsearch/cluster/ClusterState.java#L577 and https://github.com/elasticsearch/elasticsearch/blob/v1.3.2/src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java#L551 .\n\nI guess what I'm saying is that if the cluster state never picked up the name somehow this looks like the only thing that would break.\n", "created_at": "2014-08-22T19:20:54Z" }, { "body": "Tried setting logger to debug and didn't get anything super interesting. Here is some of it: https://gist.github.com/nik9000/b9c40805abb4bcbb5b61\n", "created_at": "2014-08-22T19:25:33Z" }, { "body": "Thx Nik. I have a theory. Indeed the cluster name as part of the _cluster state_ was introduced in 1.1.1 . When a node of version >=1.1.1 reads the cluster state from an older node, that field will be populated with null. During the upgrade from 1.1.0 this happened and the cluster state in memory has it's name set to null. Since you never restarted the complete cluster since then, all nodes have kept communicating it keep it alive. This trips this new code. A full cluster restart should fix it but that's obviously totally not desirable. I'm still trying to come up with a potential work around... \n", "created_at": "2014-08-22T19:37:40Z" }, { "body": "@nik9000 do you use dedicated master nodes? it doesn't look so from the logs but I want to double check\n", "created_at": "2014-08-22T19:40:12Z" }, { "body": "@bleskes no dedicated master nodes.\n", "created_at": "2014-08-22T19:42:46Z" }, { "body": "@bleskes that's what I was thinking - I was digging through places where the cluster state is built from name and they are pretty rare. Still, it'd take me some time to validate that they never get saved.\n", "created_at": "2014-08-22T19:44:02Z" }, { "body": "More posterity: this broke for me because when I started the cluster I was using 1.1.0 and I haven't done a full restart since - only rolling restarts. If you are in that boat - do not upgrade to 1.3 until 1.3.3 is released.\n", "created_at": "2014-08-22T21:36:41Z" }, { "body": "I'm going to close this as it is fixed by the change my in #7414\n", "created_at": "2014-08-27T19:42:30Z" }, { "body": "Thanks!\n", "created_at": "2014-08-27T20:11:17Z" } ], "number": 7386, "title": "Internal: Upgrade caused shard data to stay on nodes" }
{ "body": "The ClusterState has a reference to the cluster name since version 1.1.0 (df7474b9fcf849bbfea4222c1d2aa58b6669e52a) . However, if the state was sent from a master of an older version, this name can be set to null. This is unexpected and can cause bugs. The bad part is that it will never correct itself until a full cluster restart where the cluster state is rebuilt using the code of the latest version.\n\nThis commit changes the default to the node's cluster name.\n\n Relates to #7386\n", "number": 7414, "review_comments": [], "title": "Use node's cluster name as a default for an incoming cluster state who misses it" }
{ "commits": [ { "message": "[Internal] user node's cluster name as a default for an incoming cluster state who misses it\n\nClusterState has a reference to the cluster name since version 1.1.0 (df7474b9fcf849bbfea4222c1d2aa58b6669e52a) . However, if the state was sent from a master of an older version, this name can be set to null. This is an unexpected and can cause bugs. The bad part is that it will never correct it self until a full cluster restart where the cluster state is rebuilt using the code of the latest version.\n\n This commit changes the default to the node's cluster name.\n\n Relates to #7386" } ], "files": [ { "diff": "@@ -23,7 +23,6 @@\n import org.elasticsearch.action.support.master.AcknowledgedResponse;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.routing.allocation.RoutingExplanations;\n-import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n \n@@ -61,7 +60,7 @@ public RoutingExplanations getExplanations() {\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n- state = ClusterState.Builder.readFrom(in, null);\n+ state = ClusterState.Builder.readFrom(in, null, null);\n readAcknowledged(in);\n if (in.getVersion().onOrAfter(Version.V_1_1_0)) {\n explanations = RoutingExplanations.readFrom(in);", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/reroute/ClusterRerouteResponse.java", "status": "modified" }, { "diff": "@@ -55,7 +55,7 @@ public ClusterName getClusterName() {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n clusterName = ClusterName.readClusterName(in);\n- clusterState = ClusterState.Builder.readFrom(in, null);\n+ clusterState = ClusterState.Builder.readFrom(in, null, clusterName);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java", "status": "modified" }, { "diff": "@@ -562,8 +562,14 @@ public static byte[] toBytes(ClusterState state) throws IOException {\n return os.bytes().toBytes();\n }\n \n- public static ClusterState fromBytes(byte[] data, DiscoveryNode localNode) throws IOException {\n- return readFrom(new BytesStreamInput(data, false), localNode);\n+ /**\n+ * @param data input bytes\n+ * @param localNode used to set the local node in the cluster state.\n+ * @param defaultClusterName this cluster name will be used of if the deserialized cluster state does not have a name set\n+ * (which is only introduced in version 1.1.1)\n+ */\n+ public static ClusterState fromBytes(byte[] data, DiscoveryNode localNode, ClusterName defaultClusterName) throws IOException {\n+ return readFrom(new BytesStreamInput(data, false), localNode, defaultClusterName);\n }\n \n public static void writeTo(ClusterState state, StreamOutput out) throws IOException {\n@@ -589,8 +595,14 @@ public static void writeTo(ClusterState state, StreamOutput out) throws IOExcept\n }\n }\n \n- public static ClusterState readFrom(StreamInput in, @Nullable DiscoveryNode localNode) throws IOException {\n- ClusterName clusterName = null;\n+ /**\n+ * @param in input stream\n+ * @param localNode used to set the local node in the cluster state. 
can be null.\n+ * @param defaultClusterName this cluster name will be used of receiving a cluster state from a node on version older than 1.1.1\n+ * or if the sending node did not set a cluster name\n+ */\n+ public static ClusterState readFrom(StreamInput in, @Nullable DiscoveryNode localNode, @Nullable ClusterName defaultClusterName) throws IOException {\n+ ClusterName clusterName = defaultClusterName;\n if (in.getVersion().onOrAfter(Version.V_1_1_1)) {\n // it might be null even if it comes from a >= 1.1.1 node since it's origin might be an older node\n if (in.readBoolean()) {", "filename": "src/main/java/org/elasticsearch/cluster/ClusterState.java", "status": "modified" }, { "diff": "@@ -301,7 +301,7 @@ private void publish(LocalDiscovery[] members, ClusterState clusterState, final\n if (discovery.master) {\n continue;\n }\n- final ClusterState nodeSpecificClusterState = ClusterState.Builder.fromBytes(clusterStateBytes, discovery.localNode);\n+ final ClusterState nodeSpecificClusterState = ClusterState.Builder.fromBytes(clusterStateBytes, discovery.localNode, clusterName);\n nodeSpecificClusterState.status(ClusterState.ClusterStateStatus.RECEIVED);\n // ignore cluster state messages that do not include \"me\", not in the game yet...\n if (nodeSpecificClusterState.nodes().localNode() != null) {", "filename": "src/main/java/org/elasticsearch/discovery/local/LocalDiscovery.java", "status": "modified" }, { "diff": "@@ -154,7 +154,7 @@ public ZenDiscovery(Settings settings, ClusterName clusterName, ThreadPool threa\n this.nodesFD = new NodesFaultDetection(settings, threadPool, transportService);\n this.nodesFD.addListener(new NodeFailureListener());\n \n- this.publishClusterState = new PublishClusterStateAction(settings, transportService, this, new NewClusterStateListener(), discoverySettings);\n+ this.publishClusterState = new PublishClusterStateAction(settings, transportService, this, new NewClusterStateListener(), discoverySettings, clusterName);\n this.pingService.setNodesProvider(this);\n this.membership = new MembershipAction(settings, clusterService, transportService, this, new MembershipListener());\n ", "filename": "src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java", "status": "modified" }, { "diff": "@@ -159,7 +159,8 @@ class JoinResponse extends TransportResponse {\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n- clusterState = ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode());\n+ // we don't care about cluster name. 
This cluster state is never used.\n+ clusterState = ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode(), null);\n }\n \n @Override\n@@ -219,7 +220,8 @@ class ValidateJoinRequest extends TransportRequest {\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n if (in.getVersion().before(Version.V_1_4_0)) {\n- ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode());\n+ // cluster name doesn't matter...\n+ ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode(), null);\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/discovery/zen/membership/MembershipAction.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.Maps;\n import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -63,14 +64,16 @@ static interface NewStateProcessed {\n private final DiscoveryNodesProvider nodesProvider;\n private final NewClusterStateListener listener;\n private final DiscoverySettings discoverySettings;\n+ private final ClusterName clusterName;\n \n public PublishClusterStateAction(Settings settings, TransportService transportService, DiscoveryNodesProvider nodesProvider,\n- NewClusterStateListener listener, DiscoverySettings discoverySettings) {\n+ NewClusterStateListener listener, DiscoverySettings discoverySettings, ClusterName clusterName) {\n super(settings);\n this.transportService = transportService;\n this.nodesProvider = nodesProvider;\n this.listener = listener;\n this.discoverySettings = discoverySettings;\n+ this.clusterName = clusterName;\n transportService.registerHandler(ACTION_NAME, new PublishClusterStateRequestHandler());\n }\n \n@@ -169,7 +172,7 @@ public void messageReceived(BytesTransportRequest request, final TransportChanne\n in = CachedStreamInput.cachedHandles(request.bytes().streamInput());\n }\n in.setVersion(request.version());\n- ClusterState clusterState = ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode());\n+ ClusterState clusterState = ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode(), clusterName);\n clusterState.status(ClusterState.ClusterStateStatus.RECEIVED);\n logger.debug(\"received cluster state version {}\", clusterState.version());\n listener.onNewClusterState(clusterState, new NewClusterStateListener.NewStateProcessed() {", "filename": "src/main/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateAction.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cluster.serialization;\n \n+import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -49,13 +50,14 @@ public void testClusterStateSerialization() throws Exception {\n \n DiscoveryNodes nodes = DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).localNodeId(\"node1\").masterNodeId(\"node2\").build();\n \n- ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).nodes(nodes).metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(new ClusterName(\"clusterName1\")).nodes(nodes).metaData(metaData).routingTable(routingTable).build();\n \n 
AllocationService strategy = createAllocationService();\n clusterState = ClusterState.builder(clusterState).routingTable(strategy.reroute(clusterState).routingTable()).build();\n \n- ClusterState serializedClusterState = ClusterState.Builder.fromBytes(ClusterState.Builder.toBytes(clusterState), newNode(\"node1\"));\n+ ClusterState serializedClusterState = ClusterState.Builder.fromBytes(ClusterState.Builder.toBytes(clusterState), newNode(\"node1\"), new ClusterName(\"clusterName2\"));\n \n+ assertThat(serializedClusterState.getClusterName().value(), equalTo(clusterState.getClusterName().value()));\n assertThat(serializedClusterState.routingTable().prettyPrint(), equalTo(clusterState.routingTable().prettyPrint()));\n }\n ", "filename": "src/test/java/org/elasticsearch/cluster/serialization/ClusterSerializationTests.java", "status": "modified" } ] }
{ "body": "we have a force option to upgrade indices if there is even only a single segment. This setting is never passed on to the shardrequest.\n", "comments": [], "number": 7404, "title": "Internal: Optimize#force() is not passed on to the shardrequest" }
{ "body": "The force flag to trigger optimiz calls of a single segment for upgrading\netc. was never passed on to the shard request.\n\nCloses #7404\n", "number": 7405, "review_comments": [], "title": "Force optimize was not passed to shard request" }
{ "commits": [ { "message": "[ENGINE] Force optimize was not passed to shard request\n\nThe force flag to trigger optimiz calls of a single segment for upgrading\netc. was never passed on to the shard request.\n\nCloses #7404" } ], "files": [ { "diff": "@@ -48,6 +48,7 @@ class ShardOptimizeRequest extends BroadcastShardOperationRequest {\n maxNumSegments = request.maxNumSegments();\n onlyExpungeDeletes = request.onlyExpungeDeletes();\n flush = request.flush();\n+ force = request.force();\n }\n \n boolean waitForMerge() {", "filename": "src/main/java/org/elasticsearch/action/admin/indices/optimize/ShardOptimizeRequest.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.action.admin.indices.segments.IndicesSegmentResponse;\n import org.elasticsearch.action.admin.indices.segments.ShardSegments;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.util.BloomFilter;\n import org.elasticsearch.index.codec.CodecService;\n@@ -35,6 +36,9 @@\n import org.junit.Test;\n \n import java.util.Collection;\n+import java.util.HashSet;\n+import java.util.Set;\n+import java.util.concurrent.ExecutionException;\n \n public class InternalEngineIntegrationTest extends ElasticsearchIntegrationTest {\n \n@@ -130,6 +134,33 @@ public void testSetIndexCompoundOnFlush() {\n assertTotalCompoundSegments(2, 3, \"test\");\n }\n \n+ public void testForceOptimize() throws ExecutionException, InterruptedException {\n+ client().admin().indices().prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(\"number_of_replicas\", 0).put(\"number_of_shards\", 1)).get();\n+ final int numDocs = randomIntBetween(10, 100);\n+ IndexRequestBuilder[] builders = new IndexRequestBuilder[numDocs];\n+ for (int i = 0; i < builders.length; i++) {\n+ builders[i] = client().prepareIndex(\"test\", \"type\").setSource(\"field\", \"value\");\n+ }\n+ indexRandom(true, builders);\n+ ensureGreen();\n+ flushAndRefresh();\n+ client().admin().indices().prepareOptimize(\"test\").setMaxNumSegments(1).setWaitForMerge(true).get();\n+ IndexSegments firstSegments = client().admin().indices().prepareSegments(\"test\").get().getIndices().get(\"test\");\n+ client().admin().indices().prepareOptimize(\"test\").setMaxNumSegments(1).setWaitForMerge(true).get();\n+ IndexSegments secondsSegments = client().admin().indices().prepareSegments(\"test\").get().getIndices().get(\"test\");\n+\n+ assertThat(segments(firstSegments), Matchers.containsInAnyOrder(segments(secondsSegments).toArray()));\n+ assertThat(segments(firstSegments).size(), Matchers.equalTo(1));\n+ assertThat(segments(secondsSegments), Matchers.containsInAnyOrder(segments(firstSegments).toArray()));\n+ assertThat(segments(secondsSegments).size(), Matchers.equalTo(1));\n+ client().admin().indices().prepareOptimize(\"test\").setMaxNumSegments(1).setWaitForMerge(true).setForce(true).get();\n+ IndexSegments thirdSegments = client().admin().indices().prepareSegments(\"test\").get().getIndices().get(\"test\");\n+ assertThat(segments(firstSegments).size(), Matchers.equalTo(1));\n+ assertThat(segments(thirdSegments).size(), Matchers.equalTo(1));\n+ assertThat(segments(firstSegments), Matchers.not(Matchers.containsInAnyOrder(segments(thirdSegments).toArray())));\n+ assertThat(segments(thirdSegments), Matchers.not(Matchers.containsInAnyOrder(segments(firstSegments).toArray())));\n+ }\n+\n private 
void assertTotalCompoundSegments(int i, int t, String index) {\n IndicesSegmentResponse indicesSegmentResponse = client().admin().indices().prepareSegments(index).get();\n IndexSegments indexSegments = indicesSegmentResponse.getIndices().get(index);\n@@ -150,7 +181,15 @@ private void assertTotalCompoundSegments(int i, int t, String index) {\n }\n assertThat(compounds, Matchers.equalTo(i));\n assertThat(total, Matchers.equalTo(t));\n-\n }\n \n+ private Set<Segment> segments(IndexSegments segments) {\n+ Set<Segment> segmentSet = new HashSet<>();\n+ for (IndexShardSegments s : segments) {\n+ for (ShardSegments shardSegments : s) {\n+ segmentSet.addAll(shardSegments.getSegments());\n+ }\n+ }\n+ return segmentSet;\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/engine/internal/InternalEngineIntegrationTest.java", "status": "modified" } ] }
{ "body": "An encoding issue in GeolocationContextMapping means that certain precision levels are being skipped and consequently cannot be queried.\nPartial YAML test here: \n\n```\nsetup:\n - do:\n indices.create:\n index: test\n body:\n mappings:\n test:\n \"properties\":\n \"suggest_geo_multi_level\":\n \"type\" : \"completion\"\n \"context\":\n \"location\":\n \"type\" : \"geo\"\n \"precision\" : [1,2,3,4,5,6,7,8,9,10,11,12]\n - do:\n index:\n index: test\n type: test\n id: 1\n body:\n suggest_geo_multi_level:\n input: \"Hotel Marriot in Amsterdam\"\n context:\n location:\n lat : 52.22\n lon : 4.53\n```\n\nThis call works:\n\n```\n - do:\n suggest:\n index: test\n body:\n result:\n text: \"hote\"\n completion:\n field: suggest_geo_multi_level\n context:\n location:\n lat : 52.22\n lon : 4.53\n precision : 3 \n - length: { result: 1 }\n```\n\nbut a precision length of 4 does not. In fact precisions 1,2,3 and 12 work and all others fail.\nSo there are gaps in the encoding of the data.\nThe reason is that the encoding logic is given precisions in this order: [12, 3, 10, 6, 2, 1, 7, 11, 9, 5, 4, 8]\nand the encoding logic mistakenly truncates the \"geohash\" string while in this loop:\n\n```\n for (String geohash : geohashes) {\n for (int p : mapping.precision) {\n int precision = Math.min(p, geohash.length());\n geohash = geohash.substring(0, precision);\n if(mapping.neighbors) {\n GeoHashUtils.addNeighbors(geohash, precision, locations);\n }\n locations.add(geohash);\n }\n }\n```\n\nThe required fix is to not change the \"geohash\" string value in the inner loop which ensures all precisions are then encoded correctly.\n", "comments": [], "number": 7368, "title": "Suggester: No results returned for certain geo precisions" }
{ "body": "1) One issue reported by a user is due to the truncation of the geohash string. Added YAML test for this scenario\n2) Another suspect piece of code was the “toAutomaton” method that only merged the first of possibly many precisions into the result.\n\nCloses #7368\n", "number": 7369, "review_comments": [ { "body": "can we have a java test for this instead? The REST test are not here to test functionality :)\n", "created_at": "2014-08-21T11:13:38Z" }, { "body": "I started down that route but the Java API looked to be missing the \"context\" part of the suggest API - I can roll a change for that into this PR if you want.\n", "created_at": "2014-08-21T11:19:13Z" }, { "body": "odd isn't `ContextSuggestSearchTests` using the API?\n", "created_at": "2014-08-21T12:04:22Z" }, { "body": "I was looking in the wrong place, thanks\n", "created_at": "2014-08-21T13:00:25Z" } ], "title": "Bugs with encoding multiple levels of geo precision" }
{ "commits": [ { "message": "Suggest API - bugs with encoding multiple levels of geo precision.\n1) One issue reported by a user is due to the truncation of the geohash string. Added Junit test for this scenario\n2) Another suspect piece of code was the “toAutomaton” method that only merged the first of possibly many precisions into the result.\n\nCloses #7368\n\nAdded Java test" } ], "files": [ { "diff": "@@ -650,11 +650,11 @@ protected TokenStream wrapTokenStream(Document doc, TokenStream stream) {\n for (String geohash : geohashes) {\n for (int p : mapping.precision) {\n int precision = Math.min(p, geohash.length());\n- geohash = geohash.substring(0, precision);\n+ String truncatedGeohash = geohash.substring(0, precision);\n if(mapping.neighbors) {\n- GeoHashUtils.addNeighbors(geohash, precision, locations);\n+ GeoHashUtils.addNeighbors(truncatedGeohash, precision, locations);\n }\n- locations.add(geohash);\n+ locations.add(truncatedGeohash);\n }\n }\n \n@@ -692,7 +692,7 @@ public Automaton toAutomaton() {\n } else {\n automaton = BasicAutomata.makeString(location.substring(0, Math.max(1, Math.min(location.length(), precisions[0]))));\n for (int i = 1; i < precisions.length; i++) {\n- final String cell = location.substring(0, Math.max(1, Math.min(location.length(), precisions[0])));\n+ final String cell = location.substring(0, Math.max(1, Math.min(location.length(), precisions[i])));\n automaton = BasicOperations.union(automaton, BasicAutomata.makeString(cell));\n }\n }", "filename": "src/main/java/org/elasticsearch/search/suggest/context/GeolocationContextMapping.java", "status": "modified" }, { "diff": "@@ -27,9 +27,8 @@\n import org.elasticsearch.common.geo.GeoHashUtils;\n import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.unit.Fuzziness;\n-import org.elasticsearch.common.xcontent.*;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.mapper.MapperParsingException;\n-import org.elasticsearch.search.aggregations.support.format.ValueFormatter;\n import org.elasticsearch.search.suggest.Suggest.Suggestion;\n import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry;\n import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry.Option;\n@@ -112,6 +111,49 @@ public void testBasicGeo() throws Exception {\n assertEquals(suggestResponse.getSuggest().size(), 1);\n assertEquals(\"Hotel Amsterdam in Berlin\", suggestResponse.getSuggest().getSuggestion(suggestionName).iterator().next().getOptions().iterator().next().getText().string());\n }\n+ \n+ @Test\n+ public void testMultiLevelGeo() throws Exception {\n+ assertAcked(prepareCreate(INDEX).addMapping(TYPE, createMapping(TYPE, ContextBuilder.location(\"st\")\n+ .precision(1)\n+ .precision(2)\n+ .precision(3)\n+ .precision(4)\n+ .precision(5)\n+ .precision(6)\n+ .precision(7)\n+ .precision(8)\n+ .precision(9)\n+ .precision(10)\n+ .precision(11)\n+ .precision(12)\n+ .neighbors(true))));\n+ ensureYellow();\n+\n+ XContentBuilder source1 = jsonBuilder()\n+ .startObject()\n+ .startObject(FIELD)\n+ .array(\"input\", \"Hotel Amsterdam\", \"Amsterdam\")\n+ .field(\"output\", \"Hotel Amsterdam in Berlin\")\n+ .startObject(\"context\").latlon(\"st\", 52.529172, 13.407333).endObject()\n+ .endObject()\n+ .endObject();\n+ client().prepareIndex(INDEX, TYPE, \"1\").setSource(source1).execute().actionGet();\n+\n+ client().admin().indices().prepareRefresh(INDEX).get();\n+ \n+ for (int precision = 1; precision <= 12; precision++) {\n+ String suggestionName = 
randomAsciiOfLength(10);\n+ CompletionSuggestionBuilder context = new CompletionSuggestionBuilder(suggestionName).field(FIELD).text(\"h\").size(10)\n+ .addGeoLocation(\"st\", 52.529172, 13.407333, precision);\n+\n+ SuggestRequestBuilder suggestionRequest = client().prepareSuggest(INDEX).addSuggestion(context);\n+ SuggestResponse suggestResponse = suggestionRequest.execute().actionGet();\n+ assertEquals(suggestResponse.getSuggest().size(), 1);\n+ assertEquals(\"Hotel Amsterdam in Berlin\", suggestResponse.getSuggest().getSuggestion(suggestionName).iterator().next()\n+ .getOptions().iterator().next().getText().string());\n+ }\n+ } \n \n @Test\n public void testGeoField() throws Exception {", "filename": "src/test/java/org/elasticsearch/search/suggest/ContextSuggestSearchTests.java", "status": "modified" } ] }
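The second fix in this diff is easy to miss: the union loop in `toAutomaton` recomputed the prefix for `precisions[0]` on every iteration, so only the first precision ever reached the automaton. A standalone check of just that prefix logic is sketched below, with strings standing in for the per-cell automata that get unioned.

```
import java.util.LinkedHashSet;
import java.util.Set;

public class PrecisionUnionDemo {

    // Mirrors the prefix computation from toAutomaton(); the boolean toggles the
    // one-index difference between the old and the fixed code.
    static Set<String> cells(String location, int[] precisions, boolean fixed) {
        Set<String> cells = new LinkedHashSet<>();
        cells.add(location.substring(0, Math.max(1, Math.min(location.length(), precisions[0]))));
        for (int i = 1; i < precisions.length; i++) {
            int p = fixed ? precisions[i] : precisions[0]; // the old code re-used index 0
            cells.add(location.substring(0, Math.max(1, Math.min(location.length(), p))));
        }
        return cells;
    }

    public static void main(String[] args) {
        int[] precisions = {2, 4, 6};
        System.out.println(cells("u173zqkmed1t", precisions, false)); // [u1]  -- only one cell
        System.out.println(cells("u173zqkmed1t", precisions, true));  // [u1, u173, u173zq]
    }
}
```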
{ "body": "Hi, \nI met a strange problem with the latest version of Elasticsearch (1.3.2) - strage as always when NPE occurs :-)\nNoticed that Elasticsearch 0.90 did not have such an issue.\n\nHaving single incorrect type which references to inexisting parent results in NPE while executing hasParent query/filter on another - correct - type.\n\nTo reproduce the issue please refer to description below.\n\nCreate index:\n\n```\nPOST /test\n```\n\nCorrect mapping:\n\n```\nPUT /test/children/_mapping\n{\n \"children\": {\n \"_parent\": {\n \"type\": \"parents\"\n }\n }\n}\n```\n\nMapping for type with missing parent type:\n\n```\nPUT /test/children2/_mapping\n{\n \"children2\": {\n \"_parent\": {\n \"type\": \"parents2\"\n }\n }\n}\n```\n\nAdd something to parents (corrent one) to create mapping:\n\n```\nPOST /test/parents\n{\n \"someField\" : \"someValue\"\n}\n```\n\n```\nPOST /test/children/_search\n{\n \"filter\": {\n \"has_parent\": {\n \"type\": \"parents\",\n \"query\": {\n \"query_string\": {\n \"query\": \"*\"\n } \n }\n }\n }\n}\n```\n\nAbove query is gonna fail with NullPointerException without possiblity to catch a real problem (debug helps here :)),\n\n```\norg.elasticsearch.search.SearchParseException: [test][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\n \"filter\": {\n \"has_parent\": {\n \"type\": \"parents\",\n \"query\": {\n \"query_string\": {\n \"query\": \"*\"\n }\n }\n }\n }\n}\n]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:664)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:515)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:487)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:256)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.query.HasParentFilterParser.parse(HasParentFilterParser.java:158)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:290)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:271)\n at org.elasticsearch.index.query.IndexQueryParserService.parseInnerFilter(IndexQueryParserService.java:282)\n at org.elasticsearch.search.query.PostFilterParseElement.parse(PostFilterParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:648)\n ... 9 more\n```\n\nBtw. Query on incorrect type (children2) fails fine since it throws:\n\n```\n[test] [has_parent] filter configured 'parent_type' [parents2] is not a valid type];\n```\n", "comments": [ { "body": "@scoro Thanks for reporting this! I opened #7362 for this bug.\n", "created_at": "2014-08-21T07:36:52Z" } ], "number": 7349, "title": "[ES 1.3.2] NullPointerException while parsing hasParent query/filter" }
{ "body": "PR for #7349\n", "number": 7362, "review_comments": [], "title": "If _parent field points to a non existing parent type, then skip the has_parent query/filter" }
{ "commits": [ { "message": "Parent/child: If _parent field points to a non existing parent type, then skip the has_parent query/filter\n\nCloses #7362\nCloses #7349" } ], "files": [ { "diff": "@@ -18,26 +18,17 @@\n */\n package org.elasticsearch.index.query;\n \n-import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.lucene.search.NotFilter;\n-import org.elasticsearch.common.lucene.search.XBooleanFilter;\n-import org.elasticsearch.common.lucene.search.XFilteredQuery;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n import org.elasticsearch.index.query.support.XContentStructure;\n import org.elasticsearch.index.search.child.CustomQueryWrappingFilter;\n-import org.elasticsearch.index.search.child.ParentConstantScoreQuery;\n \n import java.io.IOException;\n-import java.util.HashSet;\n-import java.util.Set;\n \n+import static org.elasticsearch.index.query.HasParentQueryParser.createParentQuery;\n import static org.elasticsearch.index.query.QueryParserUtils.ensureNotDeleteByQuery;\n \n /**\n@@ -119,52 +110,14 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n return null;\n }\n \n- DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType);\n- if (parentDocMapper == null) {\n- throw new QueryParsingException(parseContext.index(), \"[has_parent] filter configured 'parent_type' [\" + parentType + \"] is not a valid type\");\n- }\n-\n- // wrap the query with type query\n- query = new XFilteredQuery(query, parseContext.cacheFilter(parentDocMapper.typeFilter(), null));\n-\n- Set<String> parentTypes = new HashSet<>(5);\n- parentTypes.add(parentType);\n- ParentChildIndexFieldData parentChildIndexFieldData = null;\n- for (DocumentMapper documentMapper : parseContext.mapperService().docMappers(false)) {\n- ParentFieldMapper parentFieldMapper = documentMapper.parentFieldMapper();\n- if (parentFieldMapper.active()) {\n- DocumentMapper parentTypeDocumentMapper = parseContext.mapperService().documentMapper(parentFieldMapper.type());\n- parentChildIndexFieldData = parseContext.getForField(parentFieldMapper);\n- if (parentTypeDocumentMapper == null) {\n- // Only add this, if this parentFieldMapper (also a parent) isn't a child of another parent.\n- parentTypes.add(parentFieldMapper.type());\n- }\n- }\n- }\n- if (parentChildIndexFieldData == null) {\n- throw new QueryParsingException(parseContext.index(), \"[has_parent] no _parent field configured\");\n- }\n-\n- Filter parentFilter;\n- if (parentTypes.size() == 1) {\n- DocumentMapper documentMapper = parseContext.mapperService().documentMapper(parentTypes.iterator().next());\n- parentFilter = parseContext.cacheFilter(documentMapper.typeFilter(), null);\n- } else {\n- XBooleanFilter parentsFilter = new XBooleanFilter();\n- for (String parentTypeStr : parentTypes) {\n- DocumentMapper documentMapper = parseContext.mapperService().documentMapper(parentTypeStr);\n- Filter filter = parseContext.cacheFilter(documentMapper.typeFilter(), null);\n- parentsFilter.add(filter, BooleanClause.Occur.SHOULD);\n- }\n- parentFilter = parentsFilter;\n+ Query parentQuery = createParentQuery(query, parentType, 
false, parseContext);\n+ if (parentQuery == null) {\n+ return null;\n }\n- Filter childrenFilter = parseContext.cacheFilter(new NotFilter(parentFilter), null);\n- Query parentConstantScoreQuery = new ParentConstantScoreQuery(parentChildIndexFieldData, query, parentType, childrenFilter);\n-\n if (filterName != null) {\n- parseContext.addNamedFilter(filterName, new CustomQueryWrappingFilter(parentConstantScoreQuery));\n+ parseContext.addNamedFilter(filterName, new CustomQueryWrappingFilter(parentQuery));\n }\n- return new CustomQueryWrappingFilter(parentConstantScoreQuery);\n+ return new CustomQueryWrappingFilter(parentQuery);\n }\n \n }\n\\ No newline at end of file", "filename": "src/main/java/org/elasticsearch/index/query/HasParentFilterParser.java", "status": "modified" }, { "diff": "@@ -129,15 +129,27 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n innerQuery.setBoost(boost);\n // wrap the query with type query\n innerQuery = new XFilteredQuery(innerQuery, parseContext.cacheFilter(parentDocMapper.typeFilter(), null));\n+ Query query = createParentQuery(innerQuery, parentType, score, parseContext);\n+ if (query == null) {\n+ return null;\n+ }\n \n- ParentChildIndexFieldData parentChildIndexFieldData = null;\n+ query.setBoost(boost);\n+ if (queryName != null) {\n+ parseContext.addNamedFilter(queryName, new CustomQueryWrappingFilter(query));\n+ }\n+ return query;\n+ }\n+\n+ static Query createParentQuery(Query innerQuery, String parentType, boolean score, QueryParseContext parseContext) {\n Set<String> parentTypes = new HashSet<>(5);\n parentTypes.add(parentType);\n+ ParentChildIndexFieldData parentChildIndexFieldData = null;\n for (DocumentMapper documentMapper : parseContext.mapperService().docMappers(false)) {\n ParentFieldMapper parentFieldMapper = documentMapper.parentFieldMapper();\n if (parentFieldMapper.active()) {\n- parentChildIndexFieldData = parseContext.getForField(parentFieldMapper);\n DocumentMapper parentTypeDocumentMapper = parseContext.mapperService().documentMapper(parentFieldMapper.type());\n+ parentChildIndexFieldData = parseContext.getForField(parentFieldMapper);\n if (parentTypeDocumentMapper == null) {\n // Only add this, if this parentFieldMapper (also a parent) isn't a child of another parent.\n parentTypes.add(parentFieldMapper.type());\n@@ -148,32 +160,34 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n throw new QueryParsingException(parseContext.index(), \"[has_parent] no _parent field configured\");\n }\n \n- Filter parentFilter;\n+ Filter parentFilter = null;\n if (parentTypes.size() == 1) {\n DocumentMapper documentMapper = parseContext.mapperService().documentMapper(parentTypes.iterator().next());\n- parentFilter = parseContext.cacheFilter(documentMapper.typeFilter(), null);\n+ if (documentMapper != null) {\n+ parentFilter = parseContext.cacheFilter(documentMapper.typeFilter(), null);\n+ }\n } else {\n XBooleanFilter parentsFilter = new XBooleanFilter();\n for (String parentTypeStr : parentTypes) {\n DocumentMapper documentMapper = parseContext.mapperService().documentMapper(parentTypeStr);\n- Filter filter = parseContext.cacheFilter(documentMapper.typeFilter(), null);\n- parentsFilter.add(filter, BooleanClause.Occur.SHOULD);\n+ if (documentMapper != null) {\n+ Filter filter = parseContext.cacheFilter(documentMapper.typeFilter(), null);\n+ parentsFilter.add(filter, BooleanClause.Occur.SHOULD);\n+ }\n }\n parentFilter = parentsFilter;\n }\n- Filter childrenFilter = 
parseContext.cacheFilter(new NotFilter(parentFilter), null);\n \n- Query query;\n+ if (parentFilter == null) {\n+ return null;\n+ }\n+\n+ Filter childrenFilter = parseContext.cacheFilter(new NotFilter(parentFilter), null);\n if (score) {\n- query = new ParentQuery(parentChildIndexFieldData, innerQuery, parentType, childrenFilter);\n+ return new ParentQuery(parentChildIndexFieldData, innerQuery, parentType, childrenFilter);\n } else {\n- query = new ParentConstantScoreQuery(parentChildIndexFieldData, innerQuery, parentType, childrenFilter);\n- }\n- query.setBoost(boost);\n- if (queryName != null) {\n- parseContext.addNamedFilter(queryName, new CustomQueryWrappingFilter(query));\n+ return new ParentConstantScoreQuery(parentChildIndexFieldData, innerQuery, parentType, childrenFilter);\n }\n- return query;\n }\n \n }\n\\ No newline at end of file", "filename": "src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java", "status": "modified" }, { "diff": "@@ -2499,6 +2499,39 @@ public void testMinMaxChildren() throws Exception {\n \n }\n \n+ @Test\n+ public void testParentFieldToNonExistingType() {\n+ assertAcked(prepareCreate(\"test\").addMapping(\"parent\").addMapping(\"child\", \"_parent\", \"type=parent2\"));\n+ client().prepareIndex(\"test\", \"parent\", \"1\").setSource(\"{}\").get();\n+ client().prepareIndex(\"test\", \"child\", \"1\").setParent(\"1\").setSource(\"{}\").get();\n+ refresh();\n+\n+ try {\n+ client().prepareSearch(\"test\")\n+ .setQuery(QueryBuilders.hasChildQuery(\"child\", matchAllQuery()))\n+ .get();\n+ fail();\n+ } catch (SearchPhaseExecutionException e) {\n+ }\n+\n+ SearchResponse response = client().prepareSearch(\"test\")\n+ .setQuery(QueryBuilders.hasParentQuery(\"parent\", matchAllQuery()))\n+ .get();\n+ assertHitCount(response, 0);\n+\n+ try {\n+ client().prepareSearch(\"test\")\n+ .setQuery(QueryBuilders.constantScoreQuery(FilterBuilders.hasChildFilter(\"child\", matchAllQuery())))\n+ .get();\n+ fail();\n+ } catch (SearchPhaseExecutionException e) {\n+ }\n+\n+ response = client().prepareSearch(\"test\")\n+ .setQuery(QueryBuilders.constantScoreQuery(FilterBuilders.hasParentFilter(\"parent\", matchAllQuery())))\n+ .get();\n+ assertHitCount(response, 0);\n+ }\n \n private static HasChildFilterBuilder hasChildFilter(String type, QueryBuilder queryBuilder) {\n HasChildFilterBuilder hasChildFilterBuilder = FilterBuilders.hasChildFilter(type, queryBuilder);", "filename": "src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java", "status": "modified" } ] }
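The shape of the fix above is that the shared `createParentQuery` now skips parent types whose mapping does not exist and returns null when nothing remains, and both parsers treat that null as "drop the has_parent clause" (zero hits) instead of dereferencing a null mapper. The sketch below is a hypothetical, heavily simplified stand-in for that logic: plain strings play the role of type filters, and a null return means the clause is skipped.

```
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

public class MissingParentTypeDemo {

    static String createParentFilter(Set<String> parentTypes, Map<String, String> typeFilters) {
        StringBuilder union = new StringBuilder();
        for (String type : parentTypes) {
            String filter = typeFilters.get(type); // null when the type has no mapping
            if (filter == null) {
                continue;                          // old code used it unchecked -> NPE
            }
            if (union.length() > 0) {
                union.append(" OR ");
            }
            union.append(filter);
        }
        return union.length() == 0 ? null : union.toString(); // null => skip the whole clause
    }

    public static void main(String[] args) {
        // 'children2' declares parent type 'parents2', but no 'parents2' mapping exists.
        Map<String, String> mappedTypeFilters = Map.of("parents", "_type:parents");

        Set<String> candidates = new LinkedHashSet<>();
        candidates.add("parents");
        candidates.add("parents2");
        System.out.println(createParentFilter(candidates, mappedTypeFilters));   // _type:parents

        Set<String> onlyMissing = new LinkedHashSet<>();
        onlyMissing.add("parents2");
        System.out.println(createParentFilter(onlyMissing, mappedTypeFilters));  // null -> zero hits
    }
}
```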
{ "body": "NPE stacktrace from ShardStats class, after executing a Node Stats API:\n\n![image](https://cloud.githubusercontent.com/assets/1224228/3986214/2eac2788-2899-11e4-9ce8-126bf0c59238.png)\n\n(Source: https://twitter.com/bobpoekert/status/502132888066727936)\n", "comments": [], "number": 7356, "title": "Stats: Prevent NullPointerException in ShardStats" }
{ "body": "closes #7356\n", "number": 7358, "review_comments": [], "title": "NPE in ShardStats when routing entry is not set yet on IndexShard" }
{ "commits": [ { "message": "NPE in ShardStats when routing entry is not set yet on IndexShard\ncloses #7356" } ], "files": [ { "diff": "@@ -117,12 +117,11 @@ protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeReq\n List<ShardStats> shardsStats = new ArrayList<>();\n for (IndexService indexService : indicesService.indices().values()) {\n for (IndexShard indexShard : indexService) {\n- if (indexShard.routingEntry().active()) {\n+ if (indexShard.routingEntry() != null && indexShard.routingEntry().active()) {\n // only report on fully started shards\n- shardsStats.add(new ShardStats(indexShard, SHARD_STATS_FLAGS));\n+ shardsStats.add(new ShardStats(indexShard, indexShard.routingEntry(), SHARD_STATS_FLAGS));\n }\n }\n-\n }\n \n ClusterHealthStatus clusterStatus = null;", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java", "status": "modified" }, { "diff": "@@ -43,9 +43,9 @@ public class ShardStats extends BroadcastShardOperationResponse implements ToXCo\n ShardStats() {\n }\n \n- public ShardStats(IndexShard indexShard, CommonStatsFlags flags) {\n- super(indexShard.routingEntry().shardId());\n- this.shardRouting = indexShard.routingEntry();\n+ public ShardStats(IndexShard indexShard, ShardRouting shardRouting, CommonStatsFlags flags) {\n+ super(indexShard.shardId());\n+ this.shardRouting = shardRouting;\n this.stats = new CommonStats(indexShard, flags);\n }\n ", "filename": "src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java", "status": "modified" }, { "diff": "@@ -37,6 +37,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.IndexShardMissingException;\n import org.elasticsearch.index.service.InternalIndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n@@ -135,6 +136,10 @@ protected ShardStats newShardResponse() {\n protected ShardStats shardOperation(IndexShardStatsRequest request) throws ElasticsearchException {\n InternalIndexService indexService = (InternalIndexService) indicesService.indexServiceSafe(request.shardId().getIndex());\n InternalIndexShard indexShard = (InternalIndexShard) indexService.shardSafe(request.shardId().id());\n+ // if we don't have the routing entry yet, we need it stats wise, we treat it as if the shard is not ready yet\n+ if (indexShard.routingEntry() == null) {\n+ throw new IndexShardMissingException(indexShard.shardId());\n+ }\n \n CommonStatsFlags flags = new CommonStatsFlags().clear();\n \n@@ -197,7 +202,7 @@ protected ShardStats shardOperation(IndexShardStatsRequest request) throws Elast\n flags.set(CommonStatsFlags.Flag.QueryCache);\n }\n \n- return new ShardStats(indexShard, flags);\n+ return new ShardStats(indexShard, indexShard.routingEntry(), flags);\n }\n \n static class IndexShardStatsRequest extends BroadcastShardOperationRequest {", "filename": "src/main/java/org/elasticsearch/action/admin/indices/stats/TransportIndicesStatsAction.java", "status": "modified" }, { "diff": "@@ -82,6 +82,11 @@ public interface IndexShard extends IndexShardComponent {\n \n ShardFieldData fieldData();\n \n+ /**\n+ * Returns the latest cluster routing entry received with this shard. 
Might be null if the\n+ * shard was just created.\n+ */\n+ @Nullable\n ShardRouting routingEntry();\n \n DocsStats docStats();", "filename": "src/main/java/org/elasticsearch/index/shard/service/IndexShard.java", "status": "modified" }, { "diff": "@@ -206,7 +206,10 @@ public NodeIndicesStats stats(boolean includePrevious, CommonStatsFlags flags) {\n for (IndexService indexService : indices.values()) {\n for (IndexShard indexShard : indexService) {\n try {\n- IndexShardStats indexShardStats = new IndexShardStats(indexShard.shardId(), new ShardStats[] { new ShardStats(indexShard, flags) });\n+ if (indexShard.routingEntry() == null) {\n+ continue;\n+ }\n+ IndexShardStats indexShardStats = new IndexShardStats(indexShard.shardId(), new ShardStats[] { new ShardStats(indexShard, indexShard.routingEntry(), flags) });\n if (!statsByShard.containsKey(indexService.index())) {\n statsByShard.put(indexService.index(), Lists.<IndexShardStats>newArrayList(indexShardStats));\n } else {", "filename": "src/main/java/org/elasticsearch/indices/InternalIndicesService.java", "status": "modified" } ] }
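The design choice in this diff is that `ShardStats` no longer re-reads `indexShard.routingEntry()` itself; each caller checks for null first and passes the entry in. Below is a simplified sketch of that shape using hypothetical types (it also reads the field once into a local, a slight tightening that the actual change does not strictly require).

```
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// The routing entry is published asynchronously, so stats collection reads it,
// checks it, and hands the checked value on instead of letting the stats object
// re-read a field that may still be null.
public class RoutingGuardDemo {

    static class Routing {
        final boolean active;
        Routing(boolean active) { this.active = active; }
    }

    static class Shard {
        final AtomicReference<Routing> routing = new AtomicReference<>(); // null until assigned
    }

    static List<String> collectStats(List<Shard> shards) {
        List<String> stats = new ArrayList<>();
        for (Shard shard : shards) {
            Routing routing = shard.routing.get();   // single read
            if (routing != null && routing.active) { // skip shards that are still initializing
                stats.add("stats(active=" + routing.active + ")");
            }
        }
        return stats;
    }

    public static void main(String[] args) {
        Shard started = new Shard();
        started.routing.set(new Routing(true));
        Shard justCreated = new Shard();             // routing entry not set yet

        List<Shard> shards = new ArrayList<>();
        shards.add(started);
        shards.add(justCreated);
        System.out.println(collectStats(shards).size()); // 1 -- the unassigned shard is skipped
    }
}
```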
{ "body": "This is somewhat related to the following request:\nhttps://github.com/elasticsearch/elasticsearch/issues/6722\n\n6722 actually causes a NPE when the clauses within the bool are null:\n\n```\n\"bool\" : {\n \"must\": [],\n \"must_not\": [],\n \"should\": []\n }\n```\n\nFor this ticket, there are use cases when Kibana is generating requests like the following:\n\n```\n \"facet_filter\": {\n \"fquery\": {\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"bool\": {\n }\n },\n \"filter\": {\n \"fquery\": {\n \"query\": {\n \"query_string\": {\n \"query\": \"_type:apache\"\n }\n }\n }\n }\n }\n }\n }\n }\n```\n\nThe above query ignores the facet_filter's filter clause when it should really be returning a match_all plus the filter applied.\n\nWhen a query with just a bool {} is run on its own, the empty bool clause in this case does not throw a NPE and is treated as a valid query, except that it returns no documents (when it should really be returning a match_all):\n\n```\n \"query\": {\n \"bool\": {\n }\n }\n```\n", "comments": [ { "body": "An empty `bool` filter or query should be treated as a `match_all`.\n", "created_at": "2014-08-12T17:21:24Z" }, { "body": "Fixed, but I did not push to 1.2 because this relies on a change that is also not on 1.2 (d414d89c6281f99c). Let me know if you need it on 1.2 as well.\n", "created_at": "2014-08-27T12:10:35Z" } ], "number": 7240, "title": "Query DSL: Empty bool {} should return match_all" }
{ "body": "This also fixes has_parent filters with a nested empty bool filter\n(see test SimpleChildQuerySearchTests#test6722, the test should actually expect\neither 0 results when searching for has_parent \"test\" or one result when\nsearch for has_parent \"foo\")\n\ncloses #7240\n", "number": 7347, "review_comments": [ { "body": "I don't think an empty javadoc comment is useful here?\n", "created_at": "2014-08-21T07:25:56Z" }, { "body": "Seems like test data like this can just be inlined in the test case? I would do that for most of these actually, just my opinion (it seems a little weird to me to have test data pulled in as resources for such small data).\n", "created_at": "2014-08-21T07:30:01Z" }, { "body": "can you maybe find a better name for this?\n", "created_at": "2014-08-21T08:06:44Z" }, { "body": "Maybe call this \"testEmptyBoolSubclausesMatchAll()\"? Sorry if I misunderstood what the test is doing, I just think having a github issue number in the name is unhelpful to someone if they see a failure.\n", "created_at": "2014-08-22T05:53:16Z" }, { "body": "Same as above. I would not put an issue number in the name. I would also suggest starting the function name with \"test\".\n", "created_at": "2014-08-22T05:54:01Z" } ], "title": "Empty bool {} should return match_all" }
{ "commits": [ { "message": "bool query: parser should return match_all in case there are no clauses\n\nThis also fixes has_parent filters with a nested empty bool filter\n(see test SimpleChildQuerySearchTests#test6722, the test should actually expect\neither 0 results when searching for has_parent \"test\" or one result when\nsearch for has_parent \"foo\")\n\ncloses #7240" }, { "message": "remove empty comment" }, { "message": "inline tiny query" }, { "message": "rename test class StringTermsWithoutClusterScopeTests -> DedicatedAggregationTests" }, { "message": "better test naming and issue number in comment" }, { "message": "fix test - this only bubbled up after d414d89c6281f9\n\nThe query had the wrong parent type (which is not really wrong, just\nconfusing) and the type filter inside the parent_filter\nis redundant." } ], "files": [ { "diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanQuery;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -131,7 +132,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n \n if (clauses.isEmpty()) {\n- return null;\n+ return new MatchAllDocsQuery();\n }\n \n BooleanQuery booleanQuery = new BooleanQuery(disableCoord);", "filename": "src/main/java/org/elasticsearch/index/query/BoolQueryParser.java", "status": "modified" }, { "diff": "@@ -45,15 +45,20 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.unit.Fuzziness;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.cache.filter.support.CacheKeyFilter;\n import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n import org.elasticsearch.index.search.NumericRangeFieldDataFilter;\n+import org.elasticsearch.index.search.child.CustomQueryWrappingFilter;\n+import org.elasticsearch.index.search.child.ParentConstantScoreQuery;\n import org.elasticsearch.index.search.geo.GeoDistanceFilter;\n import org.elasticsearch.index.search.geo.GeoPolygonFilter;\n import org.elasticsearch.index.search.geo.InMemoryGeoBoundingBoxFilter;\n import org.elasticsearch.index.search.morelikethis.MoreLikeThisFetchService;\n import org.elasticsearch.index.service.IndexService;\n+import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.hamcrest.Matchers;\n import org.junit.Before;\n@@ -67,6 +72,7 @@\n \n import static org.elasticsearch.common.io.Streams.copyToBytesFromClasspath;\n import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.FilterBuilders.*;\n import static org.elasticsearch.index.query.QueryBuilders.*;\n import static org.elasticsearch.index.query.RegexpFlag.*;\n@@ -2289,5 +2295,42 @@ public void testMatchWithoutFuzzyTranspositions() throws Exception {\n assertThat( ((FuzzyQuery) parsedQuery).getTranspositions(), equalTo(false));\n }\n \n+ // https://github.com/elasticsearch/elasticsearch/issues/7240\n+ @Test\n+ public void testEmptyBooleanQuery() throws Exception {\n+ IndexQueryParserService queryParser 
= queryParser();\n+ String query = jsonBuilder().startObject().startObject(\"bool\").endObject().endObject().string();\n+ Query parsedQuery = queryParser.parse(query).query();\n+ assertThat(parsedQuery, instanceOf(MatchAllDocsQuery.class));\n+ }\n+\n+ // https://github.com/elasticsearch/elasticsearch/issues/7240\n+ @Test\n+ public void testEmptyBooleanQueryInsideFQuery() throws Exception {\n+ IndexQueryParserService queryParser = queryParser();\n+ String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/fquery-with-empty-bool-query.json\");\n+ XContentParser parser = XContentHelper.createParser(new BytesArray(query));\n+ ParsedFilter parsedQuery = queryParser.parseInnerFilter(parser);\n+ assertThat(parsedQuery.filter(), instanceOf(QueryWrapperFilter.class));\n+ assertThat(((QueryWrapperFilter) parsedQuery.filter()).getQuery(), instanceOf(XFilteredQuery.class));\n+ assertThat(((XFilteredQuery) ((QueryWrapperFilter) parsedQuery.filter()).getQuery()).getFilter(), instanceOf(TermFilter.class));\n+ TermFilter filter = (TermFilter) ((XFilteredQuery) ((QueryWrapperFilter) parsedQuery.filter()).getQuery()).getFilter();\n+ assertThat(filter.getTerm().toString(), equalTo(\"text:apache\"));\n+ }\n \n+ // https://github.com/elasticsearch/elasticsearch/issues/6722\n+ public void testEmptyBoolSubClausesIsMatchAll() throws ElasticsearchException, IOException {\n+ String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/bool-query-with-empty-clauses-for-parsing.json\");\n+ IndexService indexService = createIndex(\"testidx\", client().admin().indices().prepareCreate(\"testidx\")\n+ .addMapping(\"foo\")\n+ .addMapping(\"test\", \"_parent\", \"type=foo\"));\n+ SearchContext.setCurrent(createSearchContext(indexService));\n+ IndexQueryParserService queryParser = indexService.queryParserService();\n+ Query parsedQuery = queryParser.parse(query).query();\n+ assertThat(parsedQuery, instanceOf(XConstantScoreQuery.class));\n+ assertThat(((XConstantScoreQuery) parsedQuery).getFilter(), instanceOf(CustomQueryWrappingFilter.class));\n+ assertThat(((CustomQueryWrappingFilter) ((XConstantScoreQuery) parsedQuery).getFilter()).getQuery(), instanceOf(ParentConstantScoreQuery.class));\n+ assertThat(((CustomQueryWrappingFilter) ((XConstantScoreQuery) parsedQuery).getFilter()).getQuery().toString(), equalTo(\"parent_filter[foo](*:*)\"));\n+ SearchContext.removeCurrent();\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,17 @@\n+{\n+ \"filtered\": {\n+ \"filter\": {\n+ \"has_parent\": {\n+ \"type\": \"foo\",\n+ \"query\": {\n+ \"bool\": {\n+ \"must\": [],\n+ \"must_not\": [],\n+ \"should\": []\n+ }\n+ }\n+ },\n+ \"query\": []\n+ }\n+ }\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/index/query/bool-query-with-empty-clauses-for-parsing.json", "status": "added" }, { "diff": "@@ -0,0 +1,16 @@\n+{\n+ \"fquery\": {\n+ \"query\": {\n+ \"filtered\": {\n+ \"query\": {\n+ \"bool\": {}\n+ },\n+ \"filter\": {\n+ \"term\": {\n+ \"text\": \"apache\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/index/query/fquery-with-empty-bool-query.json", "status": "added" }, { "diff": "@@ -0,0 +1,56 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search.aggregations.bucket;\n+\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n+import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n+import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.hamcrest.CoreMatchers.equalTo;\n+import static org.hamcrest.CoreMatchers.instanceOf;\n+\n+\n+public class DedicatedAggregationTests extends ElasticsearchIntegrationTest {\n+\n+ // https://github.com/elasticsearch/elasticsearch/issues/7240\n+ @Test\n+ public void testEmptyBoolIsMatchAll() throws IOException {\n+ String query = copyToStringFromClasspath(\"/org/elasticsearch/search/aggregations/bucket/agg-filter-with-empty-bool.json\");\n+ createIndex(\"testidx\");\n+ index(\"testidx\", \"apache\", \"1\", \"field\", \"text\");\n+ index(\"testidx\", \"nginx\", \"2\", \"field\", \"text\");\n+ refresh();\n+ ensureGreen(\"testidx\");\n+ SearchResponse searchResponse = client().prepareSearch(\"testidx\").setQuery(matchAllQuery()).get();\n+ assertThat(searchResponse.getHits().getTotalHits(), equalTo(2l));\n+ searchResponse = client().prepareSearch(\"testidx\").setSource(query).get();\n+ assertSearchResponse(searchResponse);\n+ assertThat(searchResponse.getAggregations().getAsMap().get(\"issue7240\"), instanceOf(Filter.class));\n+ Filter filterAgg = (Filter) searchResponse.getAggregations().getAsMap().get(\"issue7240\");\n+ assertThat(filterAgg.getAggregations().getAsMap().get(\"terms\"), instanceOf(StringTerms.class));\n+ assertThat(((StringTerms) filterAgg.getAggregations().getAsMap().get(\"terms\")).getBuckets().get(0).getDocCount(), equalTo(1l));\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DedicatedAggregationTests.java", "status": "added" }, { "diff": "@@ -0,0 +1,33 @@\n+{\n+ \"aggs\": {\n+ \"issue7240\": {\n+ \"aggs\": {\n+ \"terms\": {\n+ \"terms\": {\n+ \"field\": \"field\"\n+ }\n+ }\n+ },\n+ \"filter\": {\n+ \"fquery\": {\n+ \"query\": {\n+ \"filtered\": {\n+ \"query\": {\n+ \"bool\": {}\n+ },\n+ \"filter\": {\n+ \"fquery\": {\n+ \"query\": {\n+ \"query_string\": {\n+ \"query\": \"_type:apache\"\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/agg-filter-with-empty-bool.json", "status": "added" }, { "diff": "@@ -59,6 +59,7 @@\n \n import static com.google.common.collect.Maps.newHashMap;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n+import static 
org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n import static org.elasticsearch.common.settings.ImmutableSettings.builder;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n@@ -138,12 +139,13 @@ public void test6722() throws ElasticsearchException, IOException {\n \n // index simple data\n client().prepareIndex(\"test\", \"foo\", \"1\").setSource(\"foo\", 1).get();\n- client().prepareIndex(\"test\", \"test\").setSource(\"foo\", 1).setParent(\"1\").get();\n+ client().prepareIndex(\"test\", \"test\", \"2\").setSource(\"foo\", 1).setParent(\"1\").get();\n refresh();\n-\n- SearchResponse searchResponse = client().prepareSearch(\"test\").setSource(\"{\\\"query\\\":{\\\"filtered\\\":{\\\"filter\\\":{\\\"has_parent\\\":{\\\"type\\\":\\\"test\\\",\\\"query\\\":{\\\"bool\\\":{\\\"must\\\":[],\\\"must_not\\\":[],\\\"should\\\":[]}}},\\\"query\\\":[]}}}}\").get();\n+ String query = copyToStringFromClasspath(\"/org/elasticsearch/search/child/bool-query-with-empty-clauses.json\");\n+ SearchResponse searchResponse = client().prepareSearch(\"test\").setSource(query).get();\n assertNoFailures(searchResponse);\n- assertThat(searchResponse.getHits().totalHits(), equalTo(2l));\n+ assertThat(searchResponse.getHits().totalHits(), equalTo(1l));\n+ assertThat(searchResponse.getHits().getAt(0).getId(), equalTo(\"2\"));\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,19 @@\n+{\n+\"query\": {\n+ \"filtered\": {\n+ \"filter\": {\n+ \"has_parent\": {\n+ \"type\": \"foo\",\n+ \"query\": {\n+ \"bool\": {\n+ \"must\": [],\n+ \"must_not\": [],\n+ \"should\": []\n+ }\n+ }\n+ },\n+ \"query\": []\n+ }\n+ }\n+}\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/search/child/bool-query-with-empty-clauses.json", "status": "added" }, { "diff": "@@ -160,7 +160,7 @@ protected static IndexService createIndex(String index, Settings settings, Strin\n return createIndex(index, createIndexRequestBuilder);\n }\n \n- private static IndexService createIndex(String index, CreateIndexRequestBuilder createIndexRequestBuilder) {\n+ protected static IndexService createIndex(String index, CreateIndexRequestBuilder createIndexRequestBuilder) {\n assertAcked(createIndexRequestBuilder.get());\n // Wait for the index to be allocated so that cluster state updates don't override\n // changes that would have been done locally", "filename": "src/test/java/org/elasticsearch/test/ElasticsearchSingleNodeTest.java", "status": "modified" } ] }
{ "body": "```\nPUT /attractions\n{\n \"mappings\": {\n \"landmark\": {\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"location\": {\n \"type\": \"geo_shape\"\n }\n }\n }\n }\n}\n\nPUT /attractions/landmark/dam_square\n{\n \"name\" : \"Dam Square, Amsterdam\",\n \"location\" : {\n \"type\" : \"polygon\", \n \"coordinates\" : [[ \n [ 4.89218, 52.37356 ], \n [ 4.89205, 52.37276 ], \n [ 4.89301, 52.37274 ], \n [ 4.89392, 52.37250 ], \n [ 4.89431, 52.37287 ], \n [ 4.89331, 52.37346 ], \n [ 4.89305, 52.37326 ], \n [ 4.89218, 52.37356 ]\n ]]\n }\n}\n```\n\nThis point is less than 700m from the above shape, but the search only matches if you set the radius to 1.4km, ie twice the distance:\n\n```\nGET /attractions/landmark/_search\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"circle\",\n \"coordinates\": [\n 4.89994,\n 52.37815\n ],\n \"radius\": \"1.4km\"\n }\n }\n }\n }\n}\n```\n\nI've tried the same thing at much bigger distances and it exhibits the same problem. The radius needs to be double the distance in order to overlap, which makes me think that it is being used as a diameter instead.\n", "comments": [ { "body": "Is this an elasticsearch bug or a spatial4j bug ? The code and comments for SpatialContext.makeCircle() , which is being used , names the parameter distance instead of radius, which seems a bit ambiguous to me \n", "created_at": "2014-08-18T11:37:21Z" } ], "number": 7301, "title": "Geo: Geo-shape circles using `radius` as diameter" }
{ "body": "This change fixes the creation circle shapes o it calculates it correctly instead of essentially using the diameter as the radius. The radius has to be converted into degrees but calculating the ratio of the desired radius to the circumference of the earth and then multiplying it by 360 (number of degrees around the earths circumference). This issue here was that it was only multiplied by 180 making the result out by a factor of 2. Also made the test for circles actually check to make sure it has the correct centre and radius.\n\nCloses #7301\n", "number": 7338, "review_comments": [ { "body": "maybe reuse GeoUtils.EARTH_EQUATOR?\n", "created_at": "2014-08-20T13:46:35Z" } ], "title": "Fix circle radius calculation" }
{ "commits": [ { "message": "Geo: fixes circle radius calculation\n\nThis change fixes the creation circle shapes o it calculates it correctly instead of essentially using the diameter as the radius. The radius has to be converted into degrees but calculating the ratio of the desired radius to the circumference of the earth and then multiplying it by 360 (number of degrees around the earths circumference). This issue here was that it was only multiplied by 180 making the result out by a factor of 2. Also made the test for circles actually check to make sure it has the correct centre and radius.\n\nCloses #7301" } ], "files": [ { "diff": "@@ -19,12 +19,12 @@\n \n package org.elasticsearch.common.geo.builders;\n \n+import com.spatial4j.core.shape.Circle;\n+import com.vividsolutions.jts.geom.Coordinate;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n-import com.spatial4j.core.shape.Circle;\n-import com.vividsolutions.jts.geom.Coordinate;\n import java.io.IOException;\n \n public class CircleBuilder extends ShapeBuilder {\n@@ -109,7 +109,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n @Override\n public Circle build() {\n- return SPATIAL_CONTEXT.makeCircle(center.x, center.y, 180 * radius / unit.getEarthCircumference());\n+ return SPATIAL_CONTEXT.makeCircle(center.x, center.y, 360 * radius / unit.getEarthCircumference());\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/common/geo/builders/CircleBuilder.java", "status": "modified" }, { "diff": "@@ -19,9 +19,11 @@\n \n package org.elasticsearch.common.geo;\n \n+import com.spatial4j.core.shape.Circle;\n import com.spatial4j.core.shape.Point;\n import com.spatial4j.core.shape.Rectangle;\n import com.spatial4j.core.shape.Shape;\n+import com.spatial4j.core.shape.impl.PointImpl;\n import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.LineString;\n import com.vividsolutions.jts.geom.Polygon;\n@@ -161,11 +163,28 @@ public void testPolygonSelfIntersection() {\n \n @Test\n public void testGeoCircle() {\n- ShapeBuilder.newCircleBuilder().center(0, 0).radius(\"100m\").build();\n- ShapeBuilder.newCircleBuilder().center(+180, 0).radius(\"100m\").build();\n- ShapeBuilder.newCircleBuilder().center(-180, 0).radius(\"100m\").build();\n- ShapeBuilder.newCircleBuilder().center(0, 90).radius(\"100m\").build();\n- ShapeBuilder.newCircleBuilder().center(0, -90).radius(\"100m\").build();\n+ double earthCircumference = 40075016.69;\n+ Circle circle = ShapeBuilder.newCircleBuilder().center(0, 0).radius(\"100m\").build();\n+ assertEquals((360 * 100) / earthCircumference, circle.getRadius(), 0.00000001);\n+ assertEquals((Point) new PointImpl(0, 0, ShapeBuilder.SPATIAL_CONTEXT), circle.getCenter());\n+ circle = ShapeBuilder.newCircleBuilder().center(+180, 0).radius(\"100m\").build();\n+ assertEquals((360 * 100) / earthCircumference, circle.getRadius(), 0.00000001);\n+ assertEquals((Point) new PointImpl(180, 0, ShapeBuilder.SPATIAL_CONTEXT), circle.getCenter());\n+ circle = ShapeBuilder.newCircleBuilder().center(-180, 0).radius(\"100m\").build();\n+ assertEquals((360 * 100) / earthCircumference, circle.getRadius(), 0.00000001);\n+ assertEquals((Point) new PointImpl(-180, 0, ShapeBuilder.SPATIAL_CONTEXT), circle.getCenter());\n+ circle = ShapeBuilder.newCircleBuilder().center(0, 90).radius(\"100m\").build();\n+ assertEquals((360 * 100) / 
earthCircumference, circle.getRadius(), 0.00000001);\n+ assertEquals((Point) new PointImpl(0, 90, ShapeBuilder.SPATIAL_CONTEXT), circle.getCenter());\n+ circle = ShapeBuilder.newCircleBuilder().center(0, -90).radius(\"100m\").build();\n+ assertEquals((360 * 100) / earthCircumference, circle.getRadius(), 0.00000001);\n+ assertEquals((Point) new PointImpl(0, -90, ShapeBuilder.SPATIAL_CONTEXT), circle.getCenter());\n+ double randomLat = (randomDouble() * 180) - 90;\n+ double randomLon = (randomDouble() * 360) - 180;\n+ double randomRadius = randomIntBetween(1, (int) earthCircumference / 4);\n+ circle = ShapeBuilder.newCircleBuilder().center(randomLon, randomLat).radius(randomRadius + \"m\").build();\n+ assertEquals((360 * randomRadius) / earthCircumference, circle.getRadius(), 0.00000001);\n+ assertEquals((Point) new PointImpl(randomLon, randomLat, ShapeBuilder.SPATIAL_CONTEXT), circle.getCenter());\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java", "status": "modified" } ] }
{ "body": "Using the Java API, If one sets the content of a search request through `SearchRequestBuilder#setSource` methods and then calls `toString` to see the result, not only the content of the request is not returned as it wasn't set through `sourceBuilder()`, the content of the request gets also reset due to the `internalBuilder()` call in `toString`.\n\nHere is a small failing test that demontrates it:\n\n```\nSearchRequestBuilder searchRequestBuilder = new SearchRequestBuilder(client()).setSource(\"{\\n\" +\n \" \\\"query\\\" : {\\n\" +\n \" \\\"match\\\" : {\\n\" +\n \" \\\"field\\\" : {\\n\" +\n \" \\\"query\\\" : \\\"value\\\"\" +\n \" }\\n\" +\n \" }\\n\" +\n \" }\\n\" +\n \" }\");\nString preToString = searchRequestBuilder.request().source().toUtf8();\nsearchRequestBuilder.toString();\nString postToString = searchRequestBuilder.request().source().toUtf8();\nassertThat(preToString, equalTo(postToString));\n```\n", "comments": [ { "body": "From a user perspective it's pretty clear what the `toString` method should do, just print the request in json format. The problem is how, as properties can be set in so many ways that can override each other...which is why I guess the current implementation is half broken. I would consider even removing the current `toString` as it has this bad side effect. Curious on what people think about this.\n", "created_at": "2014-03-27T12:37:42Z" }, { "body": "good catch!\n", "created_at": "2014-03-27T12:48:42Z" }, { "body": "@javanna do you think you can work on this this week?\n", "created_at": "2014-04-02T14:54:04Z" }, { "body": "I think I'll get to this soon, I'd appreciate comments on how to fix it though ;)\n", "created_at": "2014-04-02T14:55:49Z" }, { "body": "Hey @GaelTadh I remember reviewing a PR from you for this issue, did you get it in after all?\n", "created_at": "2014-10-10T08:12:30Z" }, { "body": "Nevermind I found it and linked this issue, I see it's not in yet, assigned this issue to you @GaelTadh \n", "created_at": "2014-10-10T08:16:10Z" }, { "body": "Yeah I'll get it in ASAP, it got a little neglected.\n", "created_at": "2014-10-10T09:04:05Z" }, { "body": "thanks @GaelTadh !\n", "created_at": "2014-10-10T09:06:47Z" } ], "number": 5576, "title": "SearchRequestBuilder#toString causes the content of the request to change" }
{ "body": "This fixes #7317.\nSearchRequestBuilder.toString now attempts to render the request's source before falling back to using the internal builder.\n\nCloses #5576 \nCloses #5555\n", "number": 7334, "review_comments": [ { "body": "I'd prefer not to see the catch NPE here. Can't we just return `request().source().toUtf8()` as is? or do something similar to what `SearchSourceBuilder#toString` does?\n", "created_at": "2014-08-20T10:39:51Z" }, { "body": "maybe you can create a `Client` manually instead and make this extend `ElasticsearchTestCase`? at the end of the day you don't really issue the request...\n", "created_at": "2014-08-20T10:41:46Z" }, { "body": "Can we test also the case where we don't set anything? and the case where we use a `SearchSourceBuilder`?\n", "created_at": "2014-08-20T10:43:05Z" }, { "body": "you don't need to rethrow, I'd just make the method throw it as is\n", "created_at": "2014-08-20T10:43:48Z" }, { "body": "maybe you can just call `toString` from a loop and make sure that the output is always the same?\n", "created_at": "2014-08-20T10:47:05Z" } ], "title": "SearchRequestBuilder.toString modifies the SearchRequestBuilder wiping any source set." }
{ "commits": [ { "message": "[FIX] : If request.source is set in SearchRequestBuilder use that on toString.\n\nIf toString() is called on SearchRequestBuilder it will erase any source that has been\nset to the request(). This means that toString() is destructive and changes the behavior of\nSearchRequestBuilder. This commit checks to see if request has a source and uses that instead.\n\nSee #7317" }, { "message": "[FIX][TEST] Add test for SearchRequestBuilder.toString\n\nTest that asserts that SearchRequestBuilder.toString doesn't mutate the search request.\n\nSee #7317" }, { "message": "[Fix] Add CountRequestBuilder.toString and fix SearchRequestBuilder.toString\n\nThis commit fixes the SearchRequestBuilder.toString to be idempotent and adds an\nidempotent CountRequestBuilder.toString method.\nAlso add tests for both these methods to prove the aren't mutating the underlying search request.\n\nSee #5555 #5576 #7334" } ], "files": [ { "diff": "@@ -24,8 +24,11 @@\n import org.elasticsearch.action.support.broadcast.BroadcastOperationRequestBuilder;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.index.query.QueryBuilder;\n \n+import java.io.IOException;\n+\n /**\n * A count action request builder.\n */\n@@ -143,4 +146,22 @@ private QuerySourceBuilder sourceBuilder() {\n }\n return sourceBuilder;\n }\n+\n+ @Override\n+ public String toString() {\n+ if (request.source() != null) {\n+ try {\n+ return XContentHelper.convertToJson(request.source(), false, false);\n+ } catch (IOException |NullPointerException e) {\n+ return request().source().toUtf8();\n+ }\n+ } else {\n+ if (sourceBuilder != null){\n+ return sourceBuilder().toString();\n+ } else {\n+ return \"{}\"; //Nothing has been set return the empty query\n+ }\n+ }\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/action/count/CountRequestBuilder.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n import org.elasticsearch.index.query.FilterBuilder;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.script.ScriptService;\n@@ -41,6 +42,7 @@\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.search.suggest.SuggestBuilder;\n \n+import java.io.IOException;\n import java.util.Map;\n \n /**\n@@ -1097,7 +1099,19 @@ public SearchSourceBuilder internalBuilder() {\n \n @Override\n public String toString() {\n- return internalBuilder().toString();\n+ if (request.source() != null) {\n+ try {\n+ return XContentHelper.convertToJson(request.source(), false, false);\n+ } catch (IOException|NullPointerException e) {\n+ return request().source().toUtf8();\n+ }\n+ } else {\n+ if (sourceBuilder != null) {\n+ return sourceBuilder.toString();\n+ } else {\n+ return \"{}\"; //Nothing has been set return the empty query\n+ }\n+ }\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java", "status": "modified" }, { "diff": "@@ -0,0 +1,76 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.count;\n+\n+\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+public class CountRequestBuilderTests extends ElasticsearchIntegrationTest {\n+ //This would work better as a TestCase but CountRequestBuilder construction\n+ //requires a client\n+\n+ @Test\n+ public void testSearchRequestBuilderToString(){\n+ try\n+ {\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client());\n+ XContentBuilder contentBuilder = XContentFactory.jsonBuilder();\n+ contentBuilder.startObject()\n+ .field(\"query\")\n+ .startObject()\n+ .field(\"match_all\")\n+ .startObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ countRequestBuilder.setSource(contentBuilder.bytes());\n+ assertEquals(countRequestBuilder.toString(), XContentHelper.convertToJson(contentBuilder.bytes(), false, false));\n+ } catch (IOException ie) {\n+ throw new ElasticsearchException(\"Unable to create content builder\", ie);\n+ }\n+ try\n+ {\n+ CountRequestBuilder countRequestBuilder = new CountRequestBuilder(client());\n+ XContentBuilder contentBuilder = XContentFactory.jsonBuilder();\n+ logger.debug(contentBuilder.toString()); //This should not affect things\n+ contentBuilder.startObject()\n+ .field(\"query\")\n+ .startObject()\n+ .field(\"match_all\")\n+ .startObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ countRequestBuilder.setSource(contentBuilder.bytes());\n+ assertEquals(countRequestBuilder.toString(), XContentHelper.convertToJson(contentBuilder.bytes(), false, false));\n+\n+ } catch (IOException ie) {\n+ throw new ElasticsearchException(\"Unable to create content builder\", ie);\n+ }\n+ }\n+\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/action/count/CountRequestBuilderTests.java", "status": "added" }, { "diff": "@@ -0,0 +1,49 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.search;\n+\n+\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+public class SearchRequestBuilderTests extends ElasticsearchIntegrationTest {\n+ //This would work better as a TestCase but SearchRequestBuilder construction\n+ //requires a client\n+\n+ @Test\n+ public void testSearchRequestBuilderToString(){\n+ {\n+ SearchRequestBuilder srb = new SearchRequestBuilder(client());\n+ String querySource = \"{\\\"query\\\":{\\\"match_all\\\":{}}}\";\n+ srb.setSource(\"{\\\"query\\\":{\\\"match_all\\\":{}}}\");\n+ assertEquals(srb.toString(), querySource);\n+ }\n+ {\n+ SearchRequestBuilder srb = new SearchRequestBuilder(client());\n+ logger.debug(srb.toString()); //This really shouldn't do anything\n+ String querySource = \"{\\\"query\\\":{\\\"match_all\\\":{}}}\";\n+ srb.setSource(\"{\\\"query\\\":{\\\"match_all\\\":{}}}\");\n+ assertEquals(srb.toString(), querySource);\n+\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/action/search/SearchRequestBuilderTests.java", "status": "added" } ] }
{ "body": "I noticed some hot threads doing this while computing node stats:\n\n```\njava.io.UnixFileSystem.getSpace(Native Method)\n java.io.File.getUsableSpace(File.java:1862)\n org.elasticsearch.index.store.distributor.AbstractDistributor.getUsableSpace(AbstractDistributor.java:60)\n org.elasticsearch.index.store.distributor.LeastUsedDistributor.doAny(LeastUsedDistributor.java:45)\n org.elasticsearch.index.store.distributor.AbstractDistributor.any(AbstractDistributor.java:52)\n org.elasticsearch.index.store.DistributorDirectory.getDirectory(DistributorDirectory.java:176)\n org.elasticsearch.index.store.DistributorDirectory.getDirectory(DistributorDirectory.java:144)\n org.elasticsearch.index.store.DistributorDirectory.fileLength(DistributorDirectory.java:113)\n org.apache.lucene.store.FilterDirectory.fileLength(FilterDirectory.java:63)\n org.elasticsearch.common.lucene.Directories.estimateSize(Directories.java:43)\n org.elasticsearch.index.store.Store.stats(Store.java:174)\n org.elasticsearch.index.shard.service.InternalIndexShard.storeStats(InternalIndexShard.java:524)\n org.elasticsearch.action.admin.indices.stats.CommonStats.<init>(CommonStats.java:130)\n org.elasticsearch.action.admin.indices.stats.ShardStats.<init>(ShardStats.java:49)\n org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:195)\n org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:53)\n org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$ShardTransportHandler.messageReceived(TransportBroadcastOperationAction.java:338)\n org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$ShardTransportHandler.messageReceived(TransportBroadcastOperationAction.java:324)\n org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)\n java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n java.lang.Thread.run(Thread.java:745)\n```\n\nWhich is odd because why would we invoke the least_used distributor when checking fileLength (or opening for read, in other cases) an already-existing file? Seems like we should only check this when writing a new file.\n\nLooking at line 176 of 1.x of DistributorDirectory.java, it looks like we do this to simplify concurrency (so we can use CHM.putIfAbsent), but I think we should fix this code to only invoke the distributor when it's writing a new file?\n", "comments": [ { "body": "FYI - I relabelled this since it's a bug\n", "created_at": "2014-09-30T08:51:25Z" } ], "number": 7306, "title": "Internal: DistributorDirectory should not invoke distributor when reading an existing file" }
{ "body": "I just changed the logic in the private getDirectory() to only call distributor.any() if there wasn't already a binding for the requested file name.\n\nCloses #7306\n", "number": 7323, "review_comments": [], "title": "DistributorDirectory shouldn't search for directory when reading existing file" }
{ "commits": [ { "message": "don't invoke Distributor.any if we already know which directory the name is in" } ], "files": [ { "diff": "@@ -159,11 +159,14 @@ private Directory getDirectory(String name, boolean failIfNotAssociated, boolean\n if (usePrimary(name)) {\n return distributor.primary();\n }\n- if (!nameDirMapping.containsKey(name)) {\n- if (iterate) { // in order to get stuff like \"write.lock\" that might not be written though this directory\n+ Directory directory = nameDirMapping.get(name);\n+ if (directory == null) {\n+ // name is not yet bound to a directory:\n+\n+ if (iterate) { // in order to get stuff like \"write.lock\" that might not be written through this directory\n for (Directory dir : distributor.all()) {\n if (dir.fileExists(name)) {\n- final Directory directory = nameDirMapping.putIfAbsent(name, dir);\n+ directory = nameDirMapping.putIfAbsent(name, dir);\n return directory == null ? dir : directory;\n }\n }\n@@ -172,10 +175,17 @@ private Directory getDirectory(String name, boolean failIfNotAssociated, boolean\n if (failIfNotAssociated) {\n throw new FileNotFoundException(\"No such file [\" + name + \"]\");\n }\n+\n+ // Pick a directory and associate this new file with it:\n+ final Directory dir = distributor.any();\n+ directory = nameDirMapping.putIfAbsent(name, dir);\n+ if (directory == null) {\n+ // putIfAbsent did in fact put dir:\n+ directory = dir;\n+ }\n }\n- final Directory dir = distributor.any();\n- final Directory directory = nameDirMapping.putIfAbsent(name, dir);\n- return directory == null ? dir : directory;\n+ \n+ return directory;\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/store/DistributorDirectory.java", "status": "modified" }, { "diff": "@@ -18,19 +18,21 @@\n */\n package org.elasticsearch.index.store;\n \n-import com.carrotsearch.randomizedtesting.annotations.Listeners;\n-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;\n-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope;\n-import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;\n+import java.io.File;\n+import java.io.IOException;\n+\n import org.apache.lucene.store.BaseDirectoryTestCase;\n import org.apache.lucene.store.Directory;\n+import org.apache.lucene.store.IOContext;\n import org.apache.lucene.util.LuceneTestCase;\n import org.apache.lucene.util.TimeUnits;\n+import org.elasticsearch.index.store.distributor.Distributor;\n import org.elasticsearch.test.ElasticsearchThreadFilter;\n import org.elasticsearch.test.junit.listeners.LoggingListener;\n-\n-import java.io.File;\n-import java.io.IOException;\n+import com.carrotsearch.randomizedtesting.annotations.Listeners;\n+import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;\n+import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope;\n+import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;\n \n @ThreadLeakFilters(defaultFilters = true, filters = {ElasticsearchThreadFilter.class})\n @ThreadLeakScope(ThreadLeakScope.Scope.NONE)\n@@ -48,4 +50,40 @@ protected Directory getDirectory(File path) throws IOException {\n return new DistributorDirectory(directories);\n }\n \n+ // #7306: don't invoke the distributor when we are opening an already existing file\n+ public void testDoNotCallDistributorOnRead() throws Exception { \n+ Directory dir = newDirectory();\n+ dir.createOutput(\"one.txt\", IOContext.DEFAULT).close();\n+\n+ final Directory[] dirs = new Directory[] {dir};\n+\n+ Distributor distrib = new Distributor() {\n+\n+ 
@Override\n+ public Directory primary() {\n+ return dirs[0];\n+ }\n+\n+ @Override\n+ public Directory[] all() {\n+ return dirs;\n+ }\n+\n+ @Override\n+ public synchronized Directory any() {\n+ throw new IllegalStateException(\"any should not be called\");\n+ }\n+ };\n+\n+ Directory dd = new DistributorDirectory(distrib);\n+ assertEquals(0, dd.fileLength(\"one.txt\"));\n+ dd.openInput(\"one.txt\", IOContext.DEFAULT).close();\n+ try {\n+ dd.createOutput(\"three.txt\", IOContext.DEFAULT).close();\n+ fail(\"didn't hit expected exception\");\n+ } catch (IllegalStateException ise) {\n+ // expected\n+ }\n+ dd.close();\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/store/DistributorDirectoryTest.java", "status": "modified" } ] }
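The fix above narrows when the distributor is consulted: an existing file name is resolved purely from the name-to-directory map, and `Distributor.any()` runs only when a brand-new name has to be bound, with `ConcurrentHashMap.putIfAbsent` resolving races. Below is a self-contained sketch of that pattern using plain JDK types instead of Lucene's `Directory`; `FileRouter` and its method names are illustrative only:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Supplier;

class FileRouter {
    private final ConcurrentMap<String, String> nameToDir = new ConcurrentHashMap<>();
    private final Supplier<String> distributor;   // expensive choice, e.g. "least used" disk

    FileRouter(Supplier<String> distributor) {
        this.distributor = distributor;
    }

    // Resolving an existing file never consults the distributor.
    String directoryFor(String name) {
        String dir = nameToDir.get(name);
        if (dir == null) {
            throw new IllegalStateException("No such file [" + name + "]");
        }
        return dir;
    }

    // The distributor is consulted only when a new name has to be bound.
    String bindNewFile(String name) {
        String existing = nameToDir.get(name);
        if (existing != null) {
            return existing;                       // already bound: skip the expensive call
        }
        String candidate = distributor.get();      // only new files pay this cost
        String raced = nameToDir.putIfAbsent(name, candidate);
        return raced == null ? candidate : raced;  // another thread may have bound it first
    }
}
```

The design point matches the regression test added in the PR: reads (`fileLength`, `openInput`) go through `directoryFor`-style lookup and never hit the distributor, while creating a genuinely new file still does.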
{ "body": "the process has to be killed forecefully to come out of this state.. here is the stack trace - \n\n[2014-06-16 07:41:36,568][WARN ][snapshots ] [Quentin Quire] Fail\ned to update snapshot state\njava.lang.NullPointerException\n at org.elasticsearch.snapshots.SnapshotsService.processIndexShardSnapsho\nts(SnapshotsService.java:644)\n at org.elasticsearch.snapshots.SnapshotsService.clusterChanged(Snapshots\nService.java:508)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:430)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:134)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)\n at java.lang.Thread.run(Thread.java:636)\n", "comments": [ { "body": "Which version of elasticsearch was it? Did you shutdown entire cluster or just the master node? \n", "created_at": "2014-06-23T09:13:44Z" }, { "body": "1.1.0\n\ni was shutting down the cluster node by node.. on couple of the nodes, the\nshutdown was graceful. and then i hit this exception on 2 other nodes.\n\nthanks\n\nOn Mon, Jun 23, 2014 at 2:14 AM, Igor Motov notifications@github.com\nwrote:\n\n> Which version of elasticsearch was it? Did you shutdown entire cluster or\n> just the master node?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/6506#issuecomment-46821721\n> .\n", "created_at": "2014-06-23T20:04:40Z" } ], "number": 6506, "title": "Snapshot/Restore: NPE in ES when shutdown happens in the middle of snapshotting" }
{ "body": "Fixes #6506\n", "number": 7322, "review_comments": [], "title": "Fix NPE in SnapshotsService on node shutdown" }
{ "commits": [ { "message": "Fix NPE in SnapshotsService on node shutdown\n\nFixes #6506" } ], "files": [ { "diff": "@@ -762,42 +762,44 @@ private void processIndexShardSnapshots(SnapshotMetaData snapshotMetaData) {\n Map<SnapshotId, Map<ShardId, IndexShardSnapshotStatus>> newSnapshots = newHashMap();\n // Now go through all snapshots and update existing or create missing\n final String localNodeId = clusterService.localNode().id();\n- for (SnapshotMetaData.Entry entry : snapshotMetaData.entries()) {\n- if (entry.state() == State.STARTED) {\n- Map<ShardId, IndexShardSnapshotStatus> startedShards = newHashMap();\n- SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n- for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n- // Add all new shards to start processing on\n- if (localNodeId.equals(shard.getValue().nodeId())) {\n- if (shard.getValue().state() == State.INIT && (snapshotShards == null || !snapshotShards.shards.containsKey(shard.getKey()))) {\n- logger.trace(\"[{}] - Adding shard to the queue\", shard.getKey());\n- startedShards.put(shard.getKey(), new IndexShardSnapshotStatus());\n+ if (snapshotMetaData != null) {\n+ for (SnapshotMetaData.Entry entry : snapshotMetaData.entries()) {\n+ if (entry.state() == State.STARTED) {\n+ Map<ShardId, IndexShardSnapshotStatus> startedShards = newHashMap();\n+ SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n+ for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n+ // Add all new shards to start processing on\n+ if (localNodeId.equals(shard.getValue().nodeId())) {\n+ if (shard.getValue().state() == State.INIT && (snapshotShards == null || !snapshotShards.shards.containsKey(shard.getKey()))) {\n+ logger.trace(\"[{}] - Adding shard to the queue\", shard.getKey());\n+ startedShards.put(shard.getKey(), new IndexShardSnapshotStatus());\n+ }\n }\n }\n- }\n- if (!startedShards.isEmpty()) {\n- newSnapshots.put(entry.snapshotId(), startedShards);\n- if (snapshotShards != null) {\n- // We already saw this snapshot but we need to add more started shards\n- ImmutableMap.Builder<ShardId, IndexShardSnapshotStatus> shards = ImmutableMap.builder();\n- // Put all shards that were already running on this node\n- shards.putAll(snapshotShards.shards);\n- // Put all newly started shards\n- shards.putAll(startedShards);\n- survivors.put(entry.snapshotId(), new SnapshotShards(shards.build()));\n- } else {\n- // Brand new snapshot that we haven't seen before\n- survivors.put(entry.snapshotId(), new SnapshotShards(ImmutableMap.copyOf(startedShards)));\n+ if (!startedShards.isEmpty()) {\n+ newSnapshots.put(entry.snapshotId(), startedShards);\n+ if (snapshotShards != null) {\n+ // We already saw this snapshot but we need to add more started shards\n+ ImmutableMap.Builder<ShardId, IndexShardSnapshotStatus> shards = ImmutableMap.builder();\n+ // Put all shards that were already running on this node\n+ shards.putAll(snapshotShards.shards);\n+ // Put all newly started shards\n+ shards.putAll(startedShards);\n+ survivors.put(entry.snapshotId(), new SnapshotShards(shards.build()));\n+ } else {\n+ // Brand new snapshot that we haven't seen before\n+ survivors.put(entry.snapshotId(), new SnapshotShards(ImmutableMap.copyOf(startedShards)));\n+ }\n }\n- }\n- } else if (entry.state() == State.ABORTED) {\n- // Abort all running shards for this snapshot\n- SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n- if (snapshotShards != 
null) {\n- for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n- IndexShardSnapshotStatus snapshotStatus = snapshotShards.shards.get(shard.getKey());\n- if (snapshotStatus != null) {\n- snapshotStatus.abort();\n+ } else if (entry.state() == State.ABORTED) {\n+ // Abort all running shards for this snapshot\n+ SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n+ if (snapshotShards != null) {\n+ for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n+ IndexShardSnapshotStatus snapshotStatus = snapshotShards.shards.get(shard.getKey());\n+ if (snapshotStatus != null) {\n+ snapshotStatus.abort();\n+ }\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" } ] }
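The change above boils down to tolerating an absent snapshot section in the cluster state: if the metadata is null (as it can be mid-shutdown or when no snapshot is running), there is simply nothing to process. A minimal sketch of that defensive pattern with plain collections; the types stand in for the real cluster-state classes and are not Elasticsearch API:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

class SnapshotProgressTracker {
    // Called on every cluster-state change; the "snapshots" section may be
    // missing entirely, e.g. while nodes are shutting down.
    void onClusterChanged(Map<String, List<String>> customSections) {
        List<String> snapshotEntries = customSections.get("snapshots");
        if (snapshotEntries == null) {
            snapshotEntries = Collections.emptyList();   // treat "absent" as "nothing to do"
        }
        for (String snapshotId : snapshotEntries) {
            System.out.println("updating shard status for " + snapshotId);
        }
    }
}
```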
{ "body": "Hey guys,\n\nWe're using version 1.3.2 and Oracle JRE: 1.7.0_67. The cluster has 3 nodes (2 data nodes and 1 used only to route searches).\n\nThere is a separate (different named) cluster of 1 node using marvel to collect statistics from the first one. No other nodes in the network, and no custom/manually made java clients around.\n\nAs the title says, when doing a DateHistogramAggregation using a negative value for either **pre_offset** or **post_offset** yields the message:\n\n\" [transport.netty ] Message not fully read (response) for [69208] handler org.elasticsearch.search.action.SearchServiceTransportAction$6@f58070b, error [false], resetting\" in the logs.\n\nNote that every other query works flawlessly. This is only reproduceable when using the arguments _pre_offset_ and/or _post_offset_ in a DateHistogram with a negative value and with more than 1 node in the cluster.\n\nFacts so far:\n- Gist with the output of [_nodes?jvm=true&pretty](https://gist.github.com/marcelog/010c0bdb1c6bf9b664f1).\n- Gist with [sample query](https://gist.github.com/marcelog/d96f5ad06944da1231d7)\n- **Same query**, but **without negative values** in _pre_offset_ and _post_offset_ **works** (it doesn't return the results we expect, of course, but no errors/warnings are shown in the logs)\n- This [previous issue](https://github.com/elasticsearch/elasticsearch/issues/5178) doesn't seem to be related, there are no other nodes in the network and there are no \"custom\" java clients around either. \n- The sample query **works when there is only one node in the cluster** and we start to get these error messages when adding more nodes (2 will suffice to reproduce the issue).\n\nAny ideas? In the meantime, we solved the issue by using _pre_zone_ and _post_zone_ instead of _pre_offset_ and _post_offset_. Negative values are ok, and everything is running smoothly with those (we tried index, search, and snapshot operations). No error messages in the logs.\n\nThanks in advance,\n", "comments": [ { "body": "Wow that was _really_ fast! Thanks :)\n", "created_at": "2014-08-18T14:39:36Z" } ], "number": 7312, "title": "Aggregations: DateHistogram with negative 'pre_offset' or 'post_offset' value ends with \"Message not fully read (response) for\"" }
{ "body": "Changes the serialisation of pre and post offset to use Long instead of VLong so that negative values are supported. This actually only showed up in the case where minDocCount=0 as the rounding is only serialised in this case.\n\nCloses #7312\n", "number": 7313, "review_comments": [ { "body": "The problem here is that it would break multi-version clusters. We still need to read/write vLong depending on in/out.getVersion so that at least positive offsets work.\n", "created_at": "2014-08-18T16:08:10Z" }, { "body": "It think this is problematic since this forces LocalTransport all the time even if we run with network tests. I think you should add a setting to `AssertingLocalTransport` that allows you to set the min version? then you can just pass the min version here instead of the `TransportModule.TRANSPORT_TYPE_KEY` I hope that makes sense\n", "created_at": "2014-08-21T10:19:38Z" }, { "body": "can't you just use `settings.getAsVersion` here? and on the other end you just use `builder.put(ASSERTING_TRANSPORT_MIN_VERSION_KEY, version)`\n", "created_at": "2014-08-21T11:49:41Z" }, { "body": "would you mind documenting these two methods?\n", "created_at": "2014-08-21T11:49:55Z" }, { "body": "add a quick doc string why this is a sep. class?\n", "created_at": "2014-08-21T11:50:13Z" } ], "title": "Fixes pre and post offset serialisation for histogram aggs" }
{ "commits": [ { "message": "Aggregations: Fixes pre and post offset serialisation for histogram aggs\n\nChanges the serialisation of pre and post offset to use Long instead of VLong so that negative values are supported. This actually only showed up in the case where minDocCount=0 as the rounding is only serialised in this case.\n\nCloses #7312" } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.common.rounding;\n \n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n@@ -218,15 +219,25 @@ public long nextRoundingValue(long value) {\n @Override\n public void readFrom(StreamInput in) throws IOException {\n rounding = Rounding.Streams.read(in);\n- preOffset = in.readVLong();\n- postOffset = in.readVLong();\n+ if (in.getVersion().before(Version.V_1_4_0)) {\n+ preOffset = in.readVLong();\n+ postOffset = in.readVLong();\n+ } else {\n+ preOffset = in.readLong();\n+ postOffset = in.readLong();\n+ }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n Rounding.Streams.write(rounding, out);\n- out.writeVLong(preOffset);\n- out.writeVLong(postOffset);\n+ if (out.getVersion().before(Version.V_1_4_0)) {\n+ out.writeVLong(preOffset);\n+ out.writeVLong(postOffset);\n+ } else {\n+ out.writeLong(preOffset);\n+ out.writeLong(postOffset);\n+ }\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/common/rounding/Rounding.java", "status": "modified" }, { "diff": "@@ -0,0 +1,208 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search.aggregations.bucket;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogram;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.transport.AssertingLocalTransport;\n+import org.hamcrest.Matchers;\n+import org.joda.time.DateTime;\n+import org.junit.After;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.Collection;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.dateHistogram;\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ * The serialisation of pre and post offsets for the date histogram aggregation was corrected in version 1.4 to allow negative offsets and as such the\n+ * serialisation of negative offsets in these tests would break in pre 1.4 versions. These tests are separated from the other DateHistogramTests so the \n+ * AssertingLocalTransport for these tests can be set to only use versions 1.4 onwards while keeping the other tests using all versions\n+ */\n+@ElasticsearchIntegrationTest.SuiteScopeTest\n+@ElasticsearchIntegrationTest.ClusterScope(scope=ElasticsearchIntegrationTest.Scope.SUITE)\n+public class DateHistogramOffsetTests extends ElasticsearchIntegrationTest {\n+\n+ private DateTime date(String date) {\n+ return DateFieldMapper.Defaults.DATE_TIME_FORMATTER.parser().parseDateTime(date);\n+ }\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return ImmutableSettings.builder()\n+ .put(AssertingLocalTransport.ASSERTING_TRANSPORT_MIN_VERSION_KEY, Version.V_1_4_0).build();\n+ }\n+\n+ @After\n+ public void afterEachTest() throws IOException {\n+ internalCluster().wipeIndices(\"idx2\");\n+ }\n+\n+ @Test\n+ public void singleValue_WithPreOffset() throws Exception {\n+ prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n+ date = date.plusHours(1);\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .preOffset(\"-2h\")\n+ .interval(DateHistogram.Interval.DAY)\n+ .format(\"yyyy-MM-dd\"))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? 
extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(2));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-10\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+\n+ bucket = histo.getBucketByKey(\"2014-03-11\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(3l));\n+ }\n+\n+ @Test\n+ public void singleValue_WithPreOffset_MinDocCount() throws Exception {\n+ prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n+ date = date.plusHours(1);\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .preOffset(\"-2h\")\n+ .minDocCount(0)\n+ .interval(DateHistogram.Interval.DAY)\n+ .format(\"yyyy-MM-dd\"))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(2));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-10\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+\n+ bucket = histo.getBucketByKey(\"2014-03-11\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(3l));\n+ }\n+\n+ @Test\n+ public void singleValue_WithPostOffset() throws Exception {\n+ prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n+ date = date.plusHours(6);\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .postOffset(\"2d\")\n+ .interval(DateHistogram.Interval.DAY)\n+ .format(\"yyyy-MM-dd\"))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? 
extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(2));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-13\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(4l));\n+\n+ bucket = histo.getBucketByKey(\"2014-03-14\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ }\n+\n+ @Test\n+ public void singleValue_WithPostOffset_MinDocCount() throws Exception {\n+ prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n+ date = date.plusHours(6);\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .postOffset(\"2d\")\n+ .minDocCount(0)\n+ .interval(DateHistogram.Interval.DAY)\n+ .format(\"yyyy-MM-dd\"))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(2));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-13\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(4l));\n+\n+ bucket = histo.getBucketByKey(\"2014-03-14\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramOffsetTests.java", "status": "added" }, { "diff": "@@ -1051,76 +1051,6 @@ public void singleValue_WithPreZone() throws Exception {\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n \n- @Test\n- public void singleValue_WithPreOffset() throws Exception {\n- prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n- IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n- DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n- for (int i = 0; i < reqs.length; i++) {\n- reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n- date = date.plusHours(1);\n- }\n- indexRandom(true, reqs);\n-\n- SearchResponse response = client().prepareSearch(\"idx2\")\n- .setQuery(matchAllQuery())\n- .addAggregation(dateHistogram(\"date_histo\")\n- .field(\"date\")\n- .preOffset(\"-2h\")\n- .interval(DateHistogram.Interval.DAY)\n- .format(\"yyyy-MM-dd\"))\n- .execute().actionGet();\n-\n- assertThat(response.getHits().getTotalHits(), equalTo(5l));\n-\n- DateHistogram histo = response.getAggregations().get(\"date_histo\");\n- Collection<? 
extends DateHistogram.Bucket> buckets = histo.getBuckets();\n- assertThat(buckets.size(), equalTo(2));\n-\n- DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-10\");\n- assertThat(bucket, Matchers.notNullValue());\n- assertThat(bucket.getDocCount(), equalTo(2l));\n-\n- bucket = histo.getBucketByKey(\"2014-03-11\");\n- assertThat(bucket, Matchers.notNullValue());\n- assertThat(bucket.getDocCount(), equalTo(3l));\n- }\n-\n- @Test\n- public void singleValue_WithPostOffset() throws Exception {\n- prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n- IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n- DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n- for (int i = 0; i < reqs.length; i++) {\n- reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n- date = date.plusHours(6);\n- }\n- indexRandom(true, reqs);\n-\n- SearchResponse response = client().prepareSearch(\"idx2\")\n- .setQuery(matchAllQuery())\n- .addAggregation(dateHistogram(\"date_histo\")\n- .field(\"date\")\n- .postOffset(\"2d\")\n- .interval(DateHistogram.Interval.DAY)\n- .format(\"yyyy-MM-dd\"))\n- .execute().actionGet();\n-\n- assertThat(response.getHits().getTotalHits(), equalTo(5l));\n-\n- DateHistogram histo = response.getAggregations().get(\"date_histo\");\n- Collection<? extends DateHistogram.Bucket> buckets = histo.getBuckets();\n- assertThat(buckets.size(), equalTo(2));\n-\n- DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-13\");\n- assertThat(bucket, Matchers.notNullValue());\n- assertThat(bucket.getDocCount(), equalTo(4l));\n-\n- bucket = histo.getBucketByKey(\"2014-03-14\");\n- assertThat(bucket, Matchers.notNullValue());\n- assertThat(bucket.getDocCount(), equalTo(1l));\n- }\n-\n @Test\n public void singleValue_WithPreZone_WithAadjustLargeInterval() throws Exception {\n prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java", "status": "modified" }, { "diff": "@@ -19,14 +19,10 @@\n package org.elasticsearch.test;\n \n import com.carrotsearch.randomizedtesting.RandomizedTest;\n-import com.carrotsearch.randomizedtesting.annotations.Listeners;\n-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakFilters;\n-import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope;\n+import com.carrotsearch.randomizedtesting.annotations.*;\n import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope.Scope;\n-import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;\n import com.google.common.base.Predicate;\n import com.google.common.collect.ImmutableList;\n-import com.google.common.collect.Lists;\n import org.apache.lucene.search.FieldCache;\n import org.apache.lucene.store.MockDirectoryWrapper;\n import org.apache.lucene.util.AbstractRandomizedTest;\n@@ -297,20 +293,84 @@ public static boolean maybeDocValues() {\n SORTED_VERSIONS = version.build();\n }\n \n+ /**\n+ * @return the {@link Version} before the {@link Version#CURRENT}\n+ */\n public static Version getPreviousVersion() {\n Version version = SORTED_VERSIONS.get(1);\n assert version.before(Version.CURRENT);\n return version;\n }\n-\n+ \n+ /**\n+ * A random {@link Version}.\n+ *\n+ * @return a random {@link Version} from all available versions\n+ */\n public static Version randomVersion() {\n return randomVersion(getRandom());\n }\n-\n+ \n+ 
/**\n+ * A random {@link Version}.\n+ * \n+ * @param random\n+ * the {@link Random} to use to generate the random version\n+ *\n+ * @return a random {@link Version} from all available versions\n+ */\n public static Version randomVersion(Random random) {\n return SORTED_VERSIONS.get(random.nextInt(SORTED_VERSIONS.size()));\n }\n \n+ /**\n+ * A random {@link Version} from <code>minVersion</code> to\n+ * <code>maxVersion</code> (inclusive).\n+ * \n+ * @param minVersion\n+ * the minimum version (inclusive)\n+ * @param maxVersion\n+ * the maximum version (inclusive)\n+ * @return a random {@link Version} from <code>minVersion</code> to\n+ * <code>maxVersion</code> (inclusive)\n+ */\n+ public static Version randomVersionBetween(Version minVersion, Version maxVersion) {\n+ return randomVersionBetween(getRandom(), minVersion, maxVersion);\n+ }\n+\n+ /**\n+ * A random {@link Version} from <code>minVersion</code> to\n+ * <code>maxVersion</code> (inclusive).\n+ * \n+ * @param random\n+ * the {@link Random} to use to generate the random version\n+ * @param minVersion\n+ * the minimum version (inclusive)\n+ * @param maxVersion\n+ * the maximum version (inclusive)\n+ * @return a random {@link Version} from <code>minVersion</code> to\n+ * <code>maxVersion</code> (inclusive)\n+ */\n+ public static Version randomVersionBetween(Random random, Version minVersion, Version maxVersion) {\n+ int minVersionIndex = SORTED_VERSIONS.size();\n+ if (minVersion != null) {\n+ minVersionIndex = SORTED_VERSIONS.indexOf(minVersion);\n+ }\n+ int maxVersionIndex = 0;\n+ if (maxVersion != null) {\n+ maxVersionIndex = SORTED_VERSIONS.indexOf(maxVersion);\n+ }\n+ if (minVersionIndex == -1) {\n+ throw new IllegalArgumentException(\"minVersion [\" + minVersion + \"] does not exist.\");\n+ } else if (maxVersionIndex == -1) {\n+ throw new IllegalArgumentException(\"maxVersion [\" + maxVersion + \"] does not exist.\");\n+ } else {\n+ // minVersionIndex is inclusive so need to add 1 to this index\n+ int range = minVersionIndex + 1 - maxVersionIndex;\n+ return SORTED_VERSIONS.get(maxVersionIndex + random.nextInt(range));\n+ }\n+ }\n+\n static final class ElasticsearchUncaughtExceptionHandler implements Thread.UncaughtExceptionHandler {\n \n private final Thread.UncaughtExceptionHandler parent;", "filename": "src/test/java/org/elasticsearch/test/ElasticsearchTestCase.java", "status": "modified" }, { "diff": "@@ -37,24 +37,31 @@\n *\n */\n public class AssertingLocalTransport extends LocalTransport {\n+\n+ public static final String ASSERTING_TRANSPORT_MIN_VERSION_KEY = \"transport.asserting.version.min\";\n+ public static final String ASSERTING_TRANSPORT_MAX_VERSION_KEY = \"transport.asserting.version.max\";\n private final Random random;\n+ private final Version minVersion;\n+ private final Version maxVersion;\n \n @Inject\n public AssertingLocalTransport(Settings settings, ThreadPool threadPool, Version version) {\n super(settings, threadPool, version);\n final long seed = settings.getAsLong(ElasticsearchIntegrationTest.SETTING_INDEX_SEED, 0l);\n random = new Random(seed);\n+ minVersion = settings.getAsVersion(ASSERTING_TRANSPORT_MIN_VERSION_KEY, Version.V_0_18_0);\n+ maxVersion = settings.getAsVersion(ASSERTING_TRANSPORT_MAX_VERSION_KEY, Version.CURRENT);\n }\n \n @Override\n protected void handleParsedResponse(final TransportResponse response, final TransportResponseHandler handler) {\n- ElasticsearchAssertions.assertVersionSerializable(ElasticsearchTestCase.randomVersion(random), response);\n+ 
ElasticsearchAssertions.assertVersionSerializable(ElasticsearchTestCase.randomVersionBetween(random, minVersion, maxVersion), response);\n super.handleParsedResponse(response, handler);\n }\n \n @Override\n public void sendRequest(final DiscoveryNode node, final long requestId, final String action, final TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {\n- ElasticsearchAssertions.assertVersionSerializable(ElasticsearchTestCase.randomVersion(random), request);\n+ ElasticsearchAssertions.assertVersionSerializable(ElasticsearchTestCase.randomVersionBetween(random, minVersion, maxVersion), request);\n super.sendRequest(node, requestId, action, request, options);\n }\n }", "filename": "src/test/java/org/elasticsearch/test/transport/AssertingLocalTransport.java", "status": "modified" } ] }
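The root cause is an encoding mismatch: a vLong is an unsigned variable-length integer, so a negative offset such as "-2h" in milliseconds sign-extends into the longest possible byte form instead of round-tripping, which is consistent with the transport-layer "Message not fully read" warning reported in the issue; the fix writes a plain signed long, gated on the stream version for mixed-version clusters. The sketch below re-implements both encodings with plain JDK code to make the difference visible. It is not the Elasticsearch StreamOutput API, and the precise reader behaviour on the wire is an assumption here:

```java
import java.io.ByteArrayOutputStream;

public class OffsetEncodingDemo {
    // Unsigned variable-length encoding, in the spirit of writeVLong.
    static byte[] writeVLong(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0L) {
            out.write((byte) ((value & 0x7F) | 0x80));
            value >>>= 7;              // unsigned shift: a negative value never terminates early
        }
        out.write((byte) value);
        return out.toByteArray();
    }

    // Fixed-width signed encoding, in the spirit of writeLong.
    static byte[] writeLong(long value) {
        byte[] out = new byte[8];
        for (int i = 7; i >= 0; i--) {
            out[i] = (byte) value;
            value >>>= 8;
        }
        return out;
    }

    public static void main(String[] args) {
        long preOffset = -2L * 60 * 60 * 1000;                                   // "-2h" in millis
        System.out.println("vLong bytes: " + writeVLong(preOffset).length);      // 10: max-length form
        System.out.println("long  bytes: " + writeLong(preOffset).length);       // always 8, sign kept
    }
}
```

A variable-length reader typically caps how many continuation bytes it accepts, so the oversized negative encoding can leave unread bytes on the stream and desynchronise the response, matching the symptom in #7312.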
{ "body": "This works on 1.3.1, fails in master:\n\n```\nDELETE /myapp\nPUT /myapp\n{\n \"mappings\" : {\n \"multiuser\" : {\n \"properties\" : {\n \"timestamp\" : {\n \"type\" : \"date\"\n },\n \"entry\" : {\n \"properties\" : {\n \"last\" : {\n \"type\" : \"string\"\n },\n \"first\" : {\n \"type\" : \"string\"\n }\n },\n \"dynamic\" : \"strict\",\n \"type\" : \"nested\"\n }\n },\n \"_timestamp\" : {\n \"path\" : \"timestamp\",\n \"enabled\" : 1\n },\n \"numeric_detection\" : 1,\n \"dynamic\" : \"strict\"\n }\n },\n \"settings\" : {}\n}\n\nPOST /myapp/multiuser?op_type=create\n{\n \"timestamp\" : 1408198082386,\n \"entry\" : [\n {\n \"first\" : \"john\",\n \"last\" : \"smith\"\n }\n ]\n}\n```\n\nThis throws: StrictDynamicMappingException[mapping set to strict, dynamic introduction of [entry] within [multiuser] is not allowed]\n\n```\nat org.elasticsearch.index.mapper.object.ObjectMapper.serializeArray(ObjectMapper.java:604)\nat org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:489)\nat org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:533)\nat org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:482)\nat org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:384)\nat org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:193)\nat org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:532)\nat org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:431)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\nat java.lang.Thread.run(Thread.java:722)\n```\n", "comments": [ { "body": "@colings86 i think this may be related to the change in #6939 \n", "created_at": "2014-08-16T14:15:24Z" }, { "body": "I'm not sure this change needs version labels since the bug has not been released?\n", "created_at": "2014-08-18T09:11:46Z" } ], "number": 7304, "title": "Mapping: First index of nested value as an array fails when dynamic is strict" }
{ "body": "Closes #7304\n", "number": 7307, "review_comments": [], "title": "Mapping: Fixes using nested doc array with strict mapping" }
{ "commits": [ { "message": "Mapping: Fixes using nested doc array with strict mapping\n\nCloses #7304" } ], "files": [ { "diff": "@@ -592,8 +592,15 @@ private void serializeObject(final ParseContext context, String currentFieldName\n private void serializeArray(ParseContext context, String lastFieldName) throws IOException {\n String arrayFieldName = lastFieldName;\n Mapper mapper = mappers.get(lastFieldName);\n- if (mapper != null && mapper instanceof ArrayValueMapperParser) {\n- mapper.parse(context);\n+ if (mapper != null) {\n+ // There is a concrete mapper for this field already. Need to check if the mapper \n+ // expects an array, if so we pass the context straight to the mapper and if not \n+ // we serialize the array components\n+ if (mapper instanceof ArrayValueMapperParser) {\n+ mapper.parse(context);\n+ } else {\n+ serializeNonDynamicArray(context, lastFieldName, arrayFieldName);\n+ }\n } else {\n \n Dynamic dynamic = this.dynamic;", "filename": "src/main/java/org/elasticsearch/index/mapper/object/ObjectMapper.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n import org.elasticsearch.index.mapper.object.ObjectMapper;\n+import org.elasticsearch.index.mapper.object.ObjectMapper.Dynamic;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.junit.Test;\n \n@@ -314,4 +315,37 @@ public void multiRootAndNested1() throws Exception {\n assertThat(doc.docs().get(6).get(\"nested1.field1\"), nullValue());\n assertThat(doc.docs().get(6).getFields(\"nested1.nested2.field2\").length, equalTo(4));\n }\n+\n+ @Test\n+ public void nestedArray_strict() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"nested1\").field(\"type\", \"nested\").field(\"dynamic\", \"strict\").startObject(\"properties\")\n+ .startObject(\"field1\").field(\"type\", \"string\")\n+ .endObject().endObject()\n+ .endObject().endObject().endObject().string();\n+\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ assertThat(docMapper.hasNestedObjects(), equalTo(true));\n+ ObjectMapper nested1Mapper = docMapper.objectMappers().get(\"nested1\");\n+ assertThat(nested1Mapper.nested().isNested(), equalTo(true));\n+ assertThat(nested1Mapper.dynamic(), equalTo(Dynamic.STRICT));\n+\n+ ParsedDocument doc = docMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"field\", \"value\")\n+ .startArray(\"nested1\")\n+ .startObject().field(\"field1\", \"1\").endObject()\n+ .startObject().field(\"field1\", \"4\").endObject()\n+ .endArray()\n+ .endObject()\n+ .bytes());\n+\n+ assertThat(doc.docs().size(), equalTo(3));\n+ assertThat(doc.docs().get(0).get(\"nested1.field1\"), equalTo(\"4\"));\n+ assertThat(doc.docs().get(0).get(\"field\"), nullValue());\n+ assertThat(doc.docs().get(1).get(\"nested1.field1\"), equalTo(\"1\"));\n+ assertThat(doc.docs().get(1).get(\"field\"), nullValue());\n+ assertThat(doc.docs().get(2).get(\"field\"), equalTo(\"value\"));\n+ }\n }\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/index/mapper/nested/NestedMappingTests.java", "status": "modified" } ] }
{ "body": "The expand_wildcards option supports 'open', 'closed', and 'open,closed'. If you specify 'closed' and the defaultSettings are 'open', both open and closed indices will match. This is because the defaults are pre-selected so even if only 'closed' is provided in the request, open will still be set as well. Need to change it so it only sets the defaults if the \"expand_wildcards\" parameter is not provided\n\nsee: https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/action/support/IndicesOptions.java#L155-168\n", "comments": [ { "body": "Right, this is odd, caused by the fact that usually you just want to either expand to open or open and closed, not closed only.\n\nOne other thing that I noticed is that it is not possible to disable expanding wildcards from the REST layer, while it would be possible from the java api I think.\n", "created_at": "2014-08-13T14:47:37Z" }, { "body": "@javanna You're right this issue doesn't manifest when used from the Java api.\n", "created_at": "2014-08-14T12:24:26Z" } ], "number": 7258, "title": "REST API: Cannot expand_wildcards for only closed indices" }
{ "body": "This change means that the default settings for expand_wildcards are only applied if the expand_wildcards parameter is not specified rather than being set upfront. It also adds the none and all options to the parameter to allow the user to specify no expansion and expansion to all indexes (equivalent to 'open,closed')\n\nCloses #7258\n", "number": 7290, "review_comments": [ { "body": "missing @Test annotation, I think if the method name doesn't start with test it won't be picked up\n", "created_at": "2014-08-15T10:22:08Z" }, { "body": "Same as above\n", "created_at": "2014-08-15T10:22:19Z" } ], "title": "Allows all options for expand_wildcards parameter" }
{ "commits": [ { "message": "REST API: Allows all options for expand_wildcards parameter\n\nThis change means that the default settings for expand_wildcards are only applied if the expand_wildcards parameter is not specified rather than being set upfront. It also adds the none and all options to the parameter to allow the user to specify no expansion and expansion to all indexes (equivalent to 'open,closed')\n\nCloses #7258" } ], "files": [ { "diff": "@@ -48,6 +48,10 @@ to. If `open` is specified then the wildcard expression is expanded to only\n open indices and if `closed` is specified then the wildcard expression is\n expanded only to closed indices. Also both values (`open,closed`) can be\n specified to expand to all indices.\n++\n+If `none` is specified then wildcard expansion will be disabled and if `all` \n+is specified, wildcard expressions will expand to all indices (this is equivalent \n+to specifying `open,closed`). coming[1.4.0]\n \n The defaults settings for the above parameters depend on the api being used.\n ", "filename": "docs/reference/api-conventions.asciidoc", "status": "modified" }, { "diff": "@@ -0,0 +1,96 @@\n+---\n+setup:\n+ - do:\n+ indices.create:\n+ index: test-xxx\n+ body:\n+ mappings:\n+ type_1: {}\n+ - do:\n+ indices.create:\n+ index: test-xxy\n+ body:\n+ mappings:\n+ type_2: {}\n+ - do:\n+ indices.create:\n+ index: test-xyy\n+ body:\n+ mappings:\n+ type_3: {}\n+ - do:\n+ indices.create:\n+ index: test-yyy\n+ body:\n+ mappings:\n+ type_4: {}\n+\n+ - do:\n+ indices.close:\n+ index: test-xyy\n+\n+---\n+\"Get test-* with defaults\":\n+\n+ - do:\n+ indices.get_mapping:\n+ index: test-x*\n+\n+ - match: { test-xxx.mappings.type_1.properties: {}}\n+ - match: { test-xxy.mappings.type_2.properties: {}}\n+\n+---\n+\"Get test-* with wildcard_expansion=all\":\n+\n+ - do:\n+ indices.get_mapping:\n+ index: test-x*\n+ expand_wildcards: all\n+\n+ - match: { test-xxx.mappings.type_1.properties: {}}\n+ - match: { test-xxy.mappings.type_2.properties: {}}\n+ - match: { test-xyy.mappings.type_3.properties: {}}\n+\n+---\n+\"Get test-* with wildcard_expansion=open\":\n+\n+ - do:\n+ indices.get_mapping:\n+ index: test-x*\n+ expand_wildcards: open\n+\n+ - match: { test-xxx.mappings.type_1.properties: {}}\n+ - match: { test-xxy.mappings.type_2.properties: {}}\n+\n+---\n+\"Get test-* with wildcard_expansion=closed\":\n+\n+ - do:\n+ indices.get_mapping:\n+ index: test-x*\n+ expand_wildcards: closed\n+\n+ - match: { test-xyy.mappings.type_3.properties: {}}\n+\n+---\n+\"Get test-* with wildcard_expansion=none\":\n+\n+ - do:\n+ catch: missing\n+ indices.get_mapping:\n+ index: test-x*\n+ expand_wildcards: none\n+\n+---\n+\"Get test-* with wildcard_expansion=open,closed\":\n+\n+ - do:\n+ indices.get_mapping:\n+ index: test-x*\n+ expand_wildcards: open,closed\n+\n+ - match: { test-xxx.mappings.type_1.properties: {}}\n+ - match: { test-xxy.mappings.type_2.properties: {}}\n+ - match: { test-xyy.mappings.type_3.properties: {}}\n+\n+", "filename": "rest-api-spec/test/indices.get_mapping/50_wildcard_expansion.yaml", "status": "added" }, { "diff": "@@ -152,15 +152,24 @@ public static IndicesOptions fromRequest(RestRequest request, IndicesOptions def\n return defaultSettings;\n }\n \n- boolean expandWildcardsOpen = defaultSettings.expandWildcardsOpen();\n- boolean expandWildcardsClosed = defaultSettings.expandWildcardsClosed();\n- if (sWildcards != null) {\n+ boolean expandWildcardsOpen = false;\n+ boolean expandWildcardsClosed = false;\n+ if (sWildcards == null) {\n+ expandWildcardsOpen = 
defaultSettings.expandWildcardsOpen();\n+ expandWildcardsClosed = defaultSettings.expandWildcardsClosed();\n+ } else {\n String[] wildcards = Strings.splitStringByCommaToArray(sWildcards);\n for (String wildcard : wildcards) {\n if (\"open\".equals(wildcard)) {\n expandWildcardsOpen = true;\n } else if (\"closed\".equals(wildcard)) {\n expandWildcardsClosed = true;\n+ } else if (\"none\".equals(wildcard)) {\n+ expandWildcardsOpen = false;\n+ expandWildcardsClosed = false;\n+ } else if (\"all\".equals(wildcard)) {\n+ expandWildcardsOpen = true;\n+ expandWildcardsClosed = true;\n } else {\n throw new ElasticsearchIllegalArgumentException(\"No valid expand wildcard value [\" + wildcard + \"]\");\n }", "filename": "src/main/java/org/elasticsearch/action/support/IndicesOptions.java", "status": "modified" }, { "diff": "@@ -22,13 +22,16 @@\n import com.google.common.collect.Sets;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.cluster.metadata.IndexMetaData.State;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.indices.IndexClosedException;\n import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n+import java.util.HashSet;\n+\n import static com.google.common.collect.Sets.newHashSet;\n import static org.hamcrest.Matchers.*;\n \n@@ -505,6 +508,22 @@ public void convertWildcardsTests() {\n assertThat(newHashSet(md.convertFromWildcards(new String[]{\"+testYYY\", \"+testX*\"}, IndicesOptions.lenientExpandOpen())), equalTo(newHashSet(\"testXXX\", \"testXYY\", \"testYYY\")));\n }\n \n+ @Test\n+ public void convertWildcardsOpenClosedIndicesTests() {\n+ MetaData.Builder mdBuilder = MetaData.builder()\n+ .put(indexBuilder(\"testXXX\").state(State.OPEN))\n+ .put(indexBuilder(\"testXXY\").state(State.OPEN))\n+ .put(indexBuilder(\"testXYY\").state(State.CLOSE))\n+ .put(indexBuilder(\"testYYY\").state(State.OPEN))\n+ .put(indexBuilder(\"testYYX\").state(State.CLOSE))\n+ .put(indexBuilder(\"kuku\").state(State.OPEN));\n+ MetaData md = mdBuilder.build();\n+ // Can't test when wildcard expansion is turned off here as convertFromWildcards shouldn't be called in this case. 
Tests for this are covered in the concreteIndices() tests\n+ assertThat(newHashSet(md.convertFromWildcards(new String[]{\"testX*\"}, IndicesOptions.fromOptions(true, true, true, true))), equalTo(newHashSet(\"testXXX\", \"testXXY\", \"testXYY\")));\n+ assertThat(newHashSet(md.convertFromWildcards(new String[]{\"testX*\"}, IndicesOptions.fromOptions(true, true, false, true))), equalTo(newHashSet(\"testXYY\")));\n+ assertThat(newHashSet(md.convertFromWildcards(new String[]{\"testX*\"}, IndicesOptions.fromOptions(true, true, true, false))), equalTo(newHashSet(\"testXXX\", \"testXXY\")));\n+ }\n+\n private IndexMetaData.Builder indexBuilder(String index) {\n return IndexMetaData.builder(index).settings(ImmutableSettings.settingsBuilder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0));\n }\n@@ -545,6 +564,21 @@ public void concreteIndicesIgnoreIndicesEmptyRequest() {\n assertThat(newHashSet(md.concreteIndices(IndicesOptions.lenientExpandOpen(), new String[]{})), equalTo(Sets.newHashSet(\"kuku\", \"testXXX\")));\n }\n \n+ @Test\n+ public void concreteIndicesWildcardExpansion() {\n+ MetaData.Builder mdBuilder = MetaData.builder()\n+ .put(indexBuilder(\"testXXX\").state(State.OPEN))\n+ .put(indexBuilder(\"testXXY\").state(State.OPEN))\n+ .put(indexBuilder(\"testXYY\").state(State.CLOSE))\n+ .put(indexBuilder(\"testYYY\").state(State.OPEN))\n+ .put(indexBuilder(\"testYYX\").state(State.OPEN));\n+ MetaData md = mdBuilder.build();\n+ assertThat(newHashSet(md.concreteIndices(IndicesOptions.fromOptions(true, true, false, false), \"testX*\")), equalTo(new HashSet<String>()));\n+ assertThat(newHashSet(md.concreteIndices(IndicesOptions.fromOptions(true, true, true, false), \"testX*\")), equalTo(newHashSet(\"testXXX\", \"testXXY\")));\n+ assertThat(newHashSet(md.concreteIndices(IndicesOptions.fromOptions(true, true, false, true), \"testX*\")), equalTo(newHashSet(\"testXYY\")));\n+ assertThat(newHashSet(md.concreteIndices(IndicesOptions.fromOptions(true, true, true, true), \"testX*\")), equalTo(newHashSet(\"testXXX\", \"testXXY\", \"testXYY\")));\n+ }\n+\n @Test\n public void testIsAllIndices_null() throws Exception {\n MetaData metaData = MetaData.builder().build();", "filename": "src/test/java/org/elasticsearch/cluster/metadata/MetaDataTests.java", "status": "modified" } ] }
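The behavioural change is that an explicit expand_wildcards value now replaces the per-API defaults instead of being OR-ed on top of them, and none/all become valid shorthands. A plain-Java sketch of that parsing rule; this is an illustration, not the IndicesOptions implementation:

```java
import java.util.EnumSet;

class WildcardStates {
    enum State { OPEN, CLOSED }

    static EnumSet<State> parse(String param, EnumSet<State> defaults) {
        if (param == null) {
            return defaults;                       // defaults apply only when nothing was sent
        }
        EnumSet<State> result = EnumSet.noneOf(State.class);
        for (String token : param.split(",")) {
            switch (token.trim()) {
                case "open":   result.add(State.OPEN); break;
                case "closed": result.add(State.CLOSED); break;
                case "none":   result.clear(); break;
                case "all":    result = EnumSet.allOf(State.class); break;
                default: throw new IllegalArgumentException("No valid expand wildcard value [" + token + "]");
            }
        }
        return result;
    }

    public static void main(String[] args) {
        EnumSet<State> defaults = EnumSet.of(State.OPEN);
        System.out.println(parse(null, defaults));        // [OPEN]   -- defaults kept
        System.out.println(parse("closed", defaults));    // [CLOSED] -- no longer unioned with the default
        System.out.println(parse("all", defaults));       // [OPEN, CLOSED]
    }
}
```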
{ "body": "```\nGET /attractions/restaurant/_search\n{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"geo_distance\": {\n \"location\": {\n \"lat\": 40.715,\n \"lon\": -73.998\n }\n }\n }\n }\n }\n}\n```\n\nNPE:\n\n```\nCaused by: java.lang.NullPointerException\nat org.elasticsearch.common.unit.DistanceUnit$Distance.parseDistance(DistanceUnit.java:319)\nat org.elasticsearch.common.unit.DistanceUnit$Distance.access$000(DistanceUnit.java:245)\nat org.elasticsearch.common.unit.DistanceUnit.parse(DistanceUnit.java:162)\nat org.elasticsearch.index.query.GeoDistanceFilterParser.parse(GeoDistanceFilterParser.java:147)\nat org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:290)\nat org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:271)\nat org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\nat org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:234)\nat org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:342)\nat org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:268)\nat org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:263)\nat org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\nat org.elasticsearch.search.SearchService.parseSource(SearchService.java:669)\n... 9 more\n```\n", "comments": [], "number": 7260, "title": "Geo: Geo-distance without distance throws an NPE" }
{ "body": "geo_distance filter now throws a parse exception if no distance parameter is supplied\n\nClose #7260\n", "number": 7272, "review_comments": [], "title": "Improved error handling in geo_distance" }
{ "commits": [], "files": [] }
{ "body": "The following command:\n\n`curl -XPUT http://localhost:9200/test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack`\n\nput my elasticsearch into an infinite loop, writing the following lines over and over to `elasticsearch.log`:\n\n```\n[2013-12-11 18:36:04,630][WARN ][cluster.action.shard ] [Payback] [test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack][1] sending failed shard for [test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack][1], node[8vmR12rFRp2FI3EA-icfrw], [P], s[INITIALIZING], indexUUID [zQIlZKm3R1C1blwKNxFWJg], reason [Failed to create shard, message [IndexShardCreationException[[test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack][1] failed to create shard]; nested: IOException[File name too long]; ]]\n\n[2013-12-11 18:36:04,630][WARN ][cluster.action.shard ] [Payback] [test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack][1] received shard failed for [test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack][1], node[8vmR12rFRp2FI3EA-icfrw], [P], s[INITIALIZING], indexUUID [zQIlZKm3R1C1blwKNxFWJg], reason [Failed to create shard, message [IndexShardCreationException[[test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack][1] failed to create shard]; nested: IOException[File name too long]; ]]\n\n[2013-12-11 18:36:04,638][WARN ][indices.cluster ] [Payback][test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack][2] failed to create shard org.elasticsearch.index.shard.IndexShardCreationException: 
[test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_test_haystack][2] failed to create shard\n\n at org.elasticsearch.index.service.InternalIndexService.createShard(InternalIndexService.java:347)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:651)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:569)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:181)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:414)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:135)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:679)\nCaused by: java.io.IOException: File name too long\n at java.io.UnixFileSystem.canonicalize0(Native Method)\n at java.io.UnixFileSystem.canonicalize(UnixFileSystem.java:172)\n at java.io.File.getCanonicalPath(File.java:576)\n at org.apache.lucene.store.FSDirectory.getCanonicalPath(FSDirectory.java:129)\n at org.apache.lucene.store.FSDirectory.<init>(FSDirectory.java:143)\n at org.apache.lucene.store.NIOFSDirectory.<init>(NIOFSDirectory.java:64)\n at org.elasticsearch.index.store.fs.NioFsDirectoryService.newFSDirectory(NioFsDirectoryService.java:45)\n at org.elasticsearch.index.store.fs.FsDirectoryService.build(FsDirectoryService.java:129)\n at org.elasticsearch.index.store.distributor.AbstractDistributor.<init>(AbstractDistributor.java:35)\n at org.elasticsearch.index.store.distributor.LeastUsedDistributor.<init>(LeastUsedDistributor.java:36)\n at sun.reflect.GeneratedConstructorAccessor16.newInstance(Unknown Source)\n at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)\n at java.lang.reflect.Constructor.newInstance(Constructor.java:532)\n at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)\n at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)\n at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at 
org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)\n at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)\n at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)\n at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)\n at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)\n at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)\n at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)\n at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)\n at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)\n at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)\n at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)\n at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)\n at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)\n at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:131)\n at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:69)\n at org.elasticsearch.index.service.InternalIndexService.createShard(InternalIndexService.java:345)\n ... 8 more\n```\n\nNow you might argue that this is a stupid thing to do, and that an ES server should be protected from the public so it's not a Denial of Service attack. Nevertheless, this happened to me by accident while I was developing a test harness for Haystack's ElasticSearch backend. However I think it would be better to respond gracefully to invalid input, instead of trying to fill up the hard disk with infinite useless logs.\n", "comments": [], "number": 4417, "title": "Elasticsearch goes into infinite loop with long database names" }
{ "body": "Fixes #4417\n\nI picked 100 sort of arbitrarily, I'm open to any suggestions for a better limit.\n", "number": 7252, "review_comments": [], "title": "Resiliency: Forbid index names over 100 characters in length" }
{ "commits": [ { "message": "Forbid index names over 100 characters in length\n\nFixes #4417" } ], "files": [ { "diff": "@@ -36,8 +36,7 @@\n import org.elasticsearch.cluster.ack.ClusterStateUpdateResponse;\n import org.elasticsearch.cluster.block.ClusterBlock;\n import org.elasticsearch.cluster.block.ClusterBlocks;\n-import org.elasticsearch.cluster.metadata.IndexMetaData.Custom;\n-import org.elasticsearch.cluster.metadata.IndexMetaData.State;\n+import org.elasticsearch.cluster.metadata.IndexMetaData.*;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n@@ -72,6 +71,7 @@\n import java.io.File;\n import java.io.FileInputStream;\n import java.io.InputStreamReader;\n+import java.io.UnsupportedEncodingException;\n import java.util.Comparator;\n import java.util.List;\n import java.util.Locale;\n@@ -87,6 +87,8 @@\n */\n public class MetaDataCreateIndexService extends AbstractComponent {\n \n+ public final static int MAX_INDEX_NAME_BYTES = 100;\n+\n private final Environment environment;\n private final ThreadPool threadPool;\n private final ClusterService clusterService;\n@@ -172,6 +174,18 @@ public void validateIndexName(String index, ClusterState state) throws Elasticse\n if (!index.toLowerCase(Locale.ROOT).equals(index)) {\n throw new InvalidIndexNameException(new Index(index), index, \"must be lowercase\");\n }\n+ int byteCount = 0;\n+ try {\n+ byteCount = index.getBytes(\"UTF-8\").length;\n+ } catch (UnsupportedEncodingException e) {\n+ // UTF-8 should always be supported, but rethrow this if it is not for some reason\n+ throw new ElasticsearchException(\"Unable to determine length of index name\", e);\n+ }\n+ if (byteCount > MAX_INDEX_NAME_BYTES) {\n+ throw new InvalidIndexNameException(new Index(index), index,\n+ \"index name is too long, (\" + byteCount +\n+ \" > \" + MAX_INDEX_NAME_BYTES + \")\");\n+ }\n if (state.metaData().aliases().containsKey(index)) {\n throw new InvalidIndexNameException(new Index(index), index, \"already exists as alias\");\n }", "filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaDataCreateIndexService.java", "status": "modified" }, { "diff": "@@ -21,13 +21,16 @@\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.index.IndexResponse;\n+import org.elasticsearch.cluster.metadata.MetaDataCreateIndexService;\n import org.elasticsearch.index.VersionType;\n+import org.elasticsearch.indices.InvalidIndexNameException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n import java.util.ArrayList;\n import java.util.List;\n+import java.util.Locale;\n import java.util.Random;\n import java.util.concurrent.Callable;\n import java.util.concurrent.ExecutorService;\n@@ -177,4 +180,39 @@ public void testCreateFlagWithBulk() {\n IndexResponse indexResponse = bulkResponse.getItems()[0].getResponse();\n assertTrue(indexResponse.isCreated());\n }\n+\n+ @Test\n+ public void testCreateIndexWithLongName() {\n+ int min = MetaDataCreateIndexService.MAX_INDEX_NAME_BYTES + 1;\n+ int max = MetaDataCreateIndexService.MAX_INDEX_NAME_BYTES * 2;\n+ try {\n+ createIndex(randomAsciiOfLengthBetween(min, max).toLowerCase(Locale.ROOT));\n+ fail(\"exception should have been thrown on too-long index name\");\n+ } catch (InvalidIndexNameException e) {\n+ 
assertThat(\"exception contains message about index name too long: \" + e.getMessage(),\n+ e.getMessage().contains(\"index name is too long,\"), equalTo(true));\n+ }\n+\n+ try {\n+ client().prepareIndex(randomAsciiOfLengthBetween(min, max).toLowerCase(Locale.ROOT), \"mytype\").setSource(\"foo\", \"bar\").get();\n+ fail(\"exception should have been thrown on too-long index name\");\n+ } catch (InvalidIndexNameException e) {\n+ assertThat(\"exception contains message about index name too long: \" + e.getMessage(),\n+ e.getMessage().contains(\"index name is too long,\"), equalTo(true));\n+ }\n+\n+ try {\n+ // Catch chars that are more than a single byte\n+ client().prepareIndex(randomAsciiOfLength(MetaDataCreateIndexService.MAX_INDEX_NAME_BYTES -1).toLowerCase(Locale.ROOT) +\n+ \"Ϟ\".toLowerCase(Locale.ROOT),\n+ \"mytype\").setSource(\"foo\", \"bar\").get();\n+ fail(\"exception should have been thrown on too-long index name\");\n+ } catch (InvalidIndexNameException e) {\n+ assertThat(\"exception contains message about index name too long: \" + e.getMessage(),\n+ e.getMessage().contains(\"index name is too long,\"), equalTo(true));\n+ }\n+\n+ // we can create an index of max length\n+ createIndex(randomAsciiOfLength(MetaDataCreateIndexService.MAX_INDEX_NAME_BYTES).toLowerCase(Locale.ROOT));\n+ }\n }", "filename": "src/test/java/org/elasticsearch/indexing/IndexActionTests.java", "status": "modified" } ] }
{ "body": "```\nGET /_validate/query?explain\n{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"geohash_cell\": {\n \"location\": {\n \"lat\": 51.521568,\n \"lon\": -0.141257\n },\n \"precision\": \"100km\",\n \"neighbors\": true\n }\n }\n }\n }\n}\n```\n\nReturns geohashes:\n- `ebzs` - see http://geohash.2ch.to/ebzs\n- `ebzu` - see http://geohash.2ch.to/ebzu\n- `gcpt` - see http://geohash.2ch.to/gcpt\n- `gcpv` - see http://geohash.2ch.to/gcpv\n- `s0bh` - see http://geohash.2ch.to/s0bh\n- `u10j` - see http://geohash.2ch.to/u10j\n\nOnly `gcpt`, `gpcv`, and `u10j` are in the right place. The others are in the Gulf of Guinea.\n", "comments": [], "number": 7226, "title": "Geo: Geohash_cell produces bad neighbors" }
{ "body": "The geohash grid it 8 cells wide and 4 cells tall. GeoHashUtils.neighbor(String,int,int.int) set the limit of the number of cells in y to < 3 rather than <= 3 resulting in it either not finding all neighbours or incorrectly searching for a neighbour in a different parent cell.\n\nCloses #7226\n", "number": 7247, "review_comments": [], "title": "Fixes computation of geohash neighbours" }
{ "commits": [], "files": [] }
{ "body": "1. We are running ES 1.3.1, but we had spotted this same issue on the previous versions as well\n2. It can be solved temporarily by closing/opening index, but it will get back to such a state later\n\nCorresponding mapping and index settings: https://gist.github.com/AVVS/bef59f42760256e2b5e8\n\nBasically it looks like this and can last forever:\n\n```\n[2014-08-09 02:24:10,836][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:11,069][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:11,303][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:11,613][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:11,854][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:12,272][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:12,514][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:12,755][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:12,997][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:13,382][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:13,632][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:13,874][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:14,108][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:14,350][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:14,601][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:14,852][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:15,100][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:15,418][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:15,761][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:16,036][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:16,292][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 
02:24:16,552][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:16,802][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:17,044][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:17,287][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:17,518][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:17,753][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:18,096][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:18,346][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:18,597][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:18,847][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:19,096][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:19,415][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:19,666][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:19,917][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:20,151][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:20,585][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:20,859][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:21,076][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:21,335][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:21,585][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:21,818][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:22,059][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n[2014-08-09 02:24:22,294][WARN ][cluster.metadata ] [ubuntu74] [profiles-2014-08-01] re-syncing mappings with cluster state for types [[profiles_v1]]\n```\n", "comments": [ { "body": "Hi @AVVS \n\nHow many nodes are you running? 
Could you add the relevant logs from the other nodes?\n\nDo you have the same version of ES on all nodes? And the same version of the JVM? Same version of the ICU plugin?\n\nI presume you are actively indexing into this index? Does this message stop if you stop indexing?\n", "created_at": "2014-08-09T10:00:08Z" }, { "body": "1. 48 nodes: 41 data, 4 http balancer, 3 master nodes\n2. ICU: 2.3.0, all nodes\n3. openjdk 7_u55, all nodes\n\nI'm indexing a lot, but this message has, sadly, nothing to do with it. If I stop it persists. One easy way to trigger it sooner is to add more replicas to the index.\n\nOther nodes dont really have anything interesting: Master nodes have last messages from the 4 days ago about adding a node to the cluster except for the actual elected master. HTTP nodes dont have anything, data nodes have GC message from some 8 hours ago\n", "created_at": "2014-08-09T10:17:47Z" }, { "body": "One more thing: if I look at `_cat/pending_tasks` I notice that its at ~ 16,5k lines, a few lines are always about refreshing mappings, and the rest is the same message about dangling indices, but I guess it never gets to these tasks because its completely occupied with mapping refreshes\n\n```\n32662080 1s HIGH refresh-mapping [profiles-2014-08-01][[profiles_v1]] \n32662081 892ms HIGH refresh-mapping [profiles-2014-08-01][[profiles_v1]] \n32662082 850ms HIGH refresh-mapping [profiles-2014-08-01][[profiles_v1]] \n...\n32662084 845ms HIGH refresh-mapping [profiles-2014-08-01][[profiles_v1]] \n...\n4365952 3.7d NORMAL allocation dangled indices [test_index] \n...\n```\n", "created_at": "2014-08-09T10:25:05Z" }, { "body": "If you close the dangling indices, does it stop then? Probably not, but worth a try.\n\nFor some reason the code in question thinks that your mapping has changed. I'm wondering if there is something that you've specified which isn't properly handled by the equals() method. \nhttps://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L257\n", "created_at": "2014-08-09T10:32:41Z" }, { "body": "I cant really close those dangling indices, they are not yet available:\n\n```\n{\n \"error\": \"RemoteTransportException[[ubuntu74][inet[/10.10.100.74:9300]][indices/close]]; nested: IndexMissingException[[test_index] missing]; \",\n \"status\": 404\n}\n```\n\n<br/>\n\n> For some reason the code in question thinks that your mapping has changed. 
I'm wondering if there is something that you've specified which isn't properly handled by the equals() method.\n\nWhat data can I provide to help with testing this?\n", "created_at": "2014-08-09T10:36:34Z" }, { "body": "Not sure yet - I'll come back to you with more.\n", "created_at": "2014-08-09T11:04:05Z" }, { "body": "I've reduced this to the following, which reproduces the problem:\n\n```\nPUT /t\n{\n \"mappings\": {\n \"profiles_v1\": {\n \"properties\": {\n \"fullName\": {\n \"type\": \"string\",\n \"fields\": {\n \"one\": {\n \"type\": \"string\"\n },\n \"two\": {\n \"type\": \"string\"\n },\n \"three\": {\n \"type\": \"string\"\n },\n \"completion\": {\n \"type\": \"string\"\n },\n \"four\": {\n \"type\": \"string\"\n },\n \"ngrams_front_omit_norms\": {\n \"type\": \"string\"\n },\n \"ngrams_back\": {\n \"type\": \"string\"\n },\n \"ngrams_back_omit_norms\": {\n \"type\": \"string\"\n },\n \"ngrams_middle\": {\n \"type\": \"string\"\n },\n \"shingle\": {\n \"type\": \"string\"\n },\n \"ngrams_front\": {\n \"type\": \"string\"\n },\n \"fullName\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n }\n}\n```\n\nWhen it tries to compare the mappings here https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java#L270 the fields are in a different order.\n\n`mapper.mappingSource` returns the fields in the same order as above, while `builders.mapping(type).source()` moves the `ngrams_middle` field to the end:\n\n```\n{\n \"profiles_v1\": {\n \"properties\": {\n \"fullName\": {\n \"type\": \"string\",\n \"fields\": {\n \"fullName\": {\n \"type\": \"string\"\n },\n \"two\": {\n \"type\": \"string\"\n },\n \"completion\": {\n \"type\": \"string\"\n },\n \"four\": {\n \"type\": \"string\"\n },\n \"three\": {\n \"type\": \"string\"\n },\n \"ngrams_front_omit_norms\": {\n \"type\": \"string\"\n },\n \"ngrams_back\": {\n \"type\": \"string\"\n },\n \"ngrams_back_omit_norms\": {\n \"type\": \"string\"\n },\n \"shingle\": {\n \"type\": \"string\"\n },\n \"one\": {\n \"type\": \"string\"\n },\n \"ngrams_front\": {\n \"type\": \"string\"\n },\n \"ngrams_middle\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n}\n```\n", "created_at": "2014-08-11T10:46:43Z" }, { "body": "Ok, any easy fix to temporarily allow it to sync properly? i.e., put same mapping with different field order\n", "created_at": "2014-08-11T11:09:31Z" }, { "body": "@AVVS that _may_ work, or changing field names might work as well. We're working on a fix.\n", "created_at": "2014-08-11T12:16:16Z" }, { "body": "@AVVS Thanks for reporting this issue, the re-syncing of the mapping was causing by multi-fields not being serialised consistently and this is fixed now.\n", "created_at": "2014-08-11T17:27:48Z" }, { "body": "Thanks, when should I expect 1.3.2 to be released? 
:)\n", "created_at": "2014-08-11T17:32:03Z" }, { "body": "@AVVS I expect a 1.3.2 release in the coming days.\n", "created_at": "2014-08-11T17:33:31Z" }, { "body": "I'm not entirely clear on the status of this bug, but wanted to report that under 1.5 and now 1.5.1 (potentially under 1.4.x as well, but I can't confirm that) we've run into the endless re-syncing mappings issue.\n\nI can provide any further details you'd like, let me know.\n", "created_at": "2015-04-13T20:46:12Z" }, { "body": "@heffergm please could you open a new bug, and upload your mappings?\n", "created_at": "2015-04-14T13:02:45Z" }, { "body": "@heffergm I see you already have :) #10581\n", "created_at": "2015-04-14T13:04:29Z" } ], "number": 7215, "title": "Endless mapping re-sync problem" }
{ "body": "This ensure that the source is the same and avoids unnecessary mapping re-syncs.\n\nPR for #7215\n", "number": 7220, "review_comments": [], "title": "Make sure that multi fields are serialized in a consistent order." }
{ "commits": [ { "message": "Mappings: Make sure that multi fields are serialized in alphabetic order to ensure that the source is always the same.\n\nCloses #7215" } ], "files": [ { "diff": "@@ -60,10 +60,7 @@\n import org.elasticsearch.index.similarity.SimilarityProvider;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.List;\n-import java.util.Locale;\n-import java.util.Map;\n+import java.util.*;\n \n /**\n *\n@@ -995,9 +992,17 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(\"path\", pathType.name().toLowerCase(Locale.ROOT));\n }\n if (!mappers.isEmpty()) {\n+ // sort the mappers so we get consistent serialization format\n+ Mapper[] sortedMappers = mappers.values().toArray(Mapper.class);\n+ Arrays.sort(sortedMappers, new Comparator<Mapper>() {\n+ @Override\n+ public int compare(Mapper o1, Mapper o2) {\n+ return o1.name().compareTo(o2.name());\n+ }\n+ });\n builder.startObject(\"fields\");\n- for (ObjectCursor<Mapper> cursor : mappers.values()) {\n- cursor.value.toXContent(builder, params);\n+ for (Mapper mapper : sortedMappers) {\n+ mapper.toXContent(builder, params);\n }\n builder.endObject();\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java", "status": "modified" }, { "diff": "@@ -23,6 +23,9 @@\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.support.XContentMapValues;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.FieldMapper;\n@@ -33,6 +36,9 @@\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.junit.Test;\n \n+import java.util.Arrays;\n+import java.util.Map;\n+\n import static org.elasticsearch.common.io.Streams.copyToBytesFromClasspath;\n import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n@@ -408,4 +414,35 @@ public void testConvertMultiFieldCompletion() throws Exception {\n assertThat(f.fieldType().stored(), equalTo(false));\n assertThat(f.fieldType().indexed(), equalTo(true));\n }\n+\n+ @Test\n+ // The underlying order of the fields in multi fields in the mapping source should always be consistent, if not this\n+ // can to unnecessary re-syncing of the mappings between the local instance and cluster state\n+ public void testMultiFieldsInConsistentOrder() throws Exception {\n+ String[] multiFieldNames = new String[randomIntBetween(2, 10)];\n+ for (int i = 0; i < multiFieldNames.length; i++) {\n+ multiFieldNames[i] = randomAsciiOfLength(4);\n+ }\n+\n+ XContentBuilder builder = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\")\n+ .startObject(\"my_field\").field(\"type\", \"string\").startObject(\"fields\");\n+ for (String multiFieldName : multiFieldNames) {\n+ builder = builder.startObject(multiFieldName).field(\"type\", \"string\").endObject();\n+ }\n+ builder = builder.endObject().endObject().endObject().endObject().endObject();\n+ String mapping = builder.string();\n+ DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+ Arrays.sort(multiFieldNames);\n+\n+ 
Map<String, Object> sourceAsMap = XContentHelper.convertToMap(docMapper.mappingSource().compressed(), true).v2();\n+ @SuppressWarnings(\"unchecked\")\n+ Map<String, Object> multiFields = (Map<String, Object>) XContentMapValues.extractValue(\"type.properties.my_field.fields\", sourceAsMap);\n+ assertThat(multiFields.size(), equalTo(multiFieldNames.length));\n+\n+ int i = 0;\n+ // underlying map is LinkedHashMap, so this ok:\n+ for (String field : multiFields.keySet()) {\n+ assertThat(field, equalTo(multiFieldNames[i++]));\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/mapper/multifield/MultiFieldTests.java", "status": "modified" } ] }
{ "body": "Trying to index this shape results in an IndexOutOfBoundsException\n\n```\nDELETE /countries\nPUT /countries\nPUT /countries/location/_mapping\n{\n \"location\" : {\n \"properties\" : {\n \"location\" : {\n \"type\" : \"geo_shape\"\n }\n }\n }\n}\n\n\nPUT countries/location/somewhere-in-sweden\n{ \"location\" : { \n \"type\": \"MultiPolygon\",\n \"coordinates\": [\n [\n [\n [22.183173, 65.723741],\n [21.213517, 65.026005],\n [21.369631, 64.413588], \n [22.183173, 65.723741]\n ],\n [\n [17.061767, 57.385783],\n [17.210083, 57.326521],\n [16.430053, 56.179196], \n [17.061767, 57.385783]\n ]\n ]\n ]\n} } \n\n```\n\nThe shape looks valid here https://gist.github.com/spinscale/0c014b3a0f15f90b5c4c\n", "comments": [ { "body": "The GeoJSON in the index request appears to be invalid as it is specifying a single polygon with a hole outside its bounds. The correct request, following the geoJSON spec should be:\n\n```\nPUT countries/location/somewhere-in-sweden\n{ \"location\" : { \n \"type\": \"MultiPolygon\",\n \"coordinates\": [\n [\n [\n [22.183173, 65.723741],\n [21.213517, 65.026005],\n [21.369631, 64.413588], \n [22.183173, 65.723741]\n ]\n ],\n [\n [\n [17.061767, 57.385783],\n [17.210083, 57.326521],\n [16.430053, 56.179196], \n [17.061767, 57.385783]\n ]\n ]\n ]\n} } \n```\n", "created_at": "2014-08-06T10:15:19Z" }, { "body": "Could we at least throw a nicer error?\n", "created_at": "2014-08-07T12:12:29Z" } ], "number": 7126, "title": "Geo: geo_shape MultiPolygon parsing problem" }
{ "body": "Closes #7126\n", "number": 7190, "review_comments": [], "title": "Better error for invalid multipolygon" }
{ "commits": [ { "message": "Geo: Better error for invalid multipolygon\n\nCloses #7126" } ], "files": [ { "diff": "@@ -21,6 +21,7 @@\n \n import com.spatial4j.core.shape.Shape;\n import com.vividsolutions.jts.geom.*;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -358,7 +359,9 @@ private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Ed\n current.intersect = current.coordinate;\n final int intersections = intersections(current.coordinate.x, edges);\n final int pos = Arrays.binarySearch(edges, 0, intersections, current, INTERSECTION_ORDER);\n- assert pos < 0 : \"illegal state: two edges cross the datum at the same position\";\n+ if (pos < 0) {\n+ throw new ElasticsearchParseException(\"Invaild shape: Hole is not within polygon\");\n+ }\n final int index = -(pos+2);\n final int component = -edges[index].component - numHoles - 1;\n ", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java", "status": "modified" } ] }
{ "body": "Code in question: https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java#L297-299\n\nThis code seems to compare the current point with the one after next and continue if their comparison value is equal. The reason for doing this is not clear and there are no tests which rely on this code.\n\nThe compare method is not limited to return -1, 0 and 1 so the chance of this condition being met seems slim, further adding to the confusion as to what its purpose is.\n", "comments": [], "number": 7016, "title": "Geo: Fix comparison of doubles in ShapeBuilder.intersections()" }
{ "body": "If a geo_shape had edges which either ran vertically along the dateline or touched the date line but did not cross it they would fail to parse. This is because the code which splits a polygon along the dateline did not take into account the case where the polygon touched but did not cross the dateline. This PR fixes those issues and provides tests for them.\n\nClose #7016\n", "number": 7188, "review_comments": [ { "body": "Why all these commented out tests?\n", "created_at": "2014-08-11T07:39:01Z" }, { "body": "@jpountz I forgot to delete them before I made the PR. I realised after creating these tests that they are not relevant as you cannot have a hole share a point with the main polygon, which makes sense since a hole which intersects with the edge of the main polygon isn't actually a hole. I will delete these tests in the next commit\n", "created_at": "2014-08-11T10:38:24Z" } ], "title": "Fix geo_shapes which intersect dateline" }
{ "commits": [ { "message": "Geo: fixes geo_shapes which intersect dateline\n\nIf a geo_shape had edges which either ran vertically along the dateline or touched the date line but did not cross it they would fail to parse. This is because the code which splits a polygon along the dateline did not take into account the case where the polygon touched but did not cross the dateline. This PR fixes those issues and provides tests for them.\n\nClose #7016" } ], "files": [ { "diff": "@@ -417,7 +417,7 @@ private static void connect(Edge in, Edge out) {\n in.next = new Edge(in.intersect, out.next, in.intersect);\n }\n out.next = new Edge(out.intersect, e1, out.intersect);\n- } else {\n+ } else if (in.next != out){\n // first edge intersects with dateline\n Edge e2 = new Edge(out.intersect, in.next, out.intersect);\n ", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java", "status": "modified" }, { "diff": "@@ -293,12 +293,6 @@ protected static int intersections(double dateline, Edge[] edges) {\n \n double position = intersection(p1, p2, dateline);\n if (!Double.isNaN(position)) {\n- if (position == 1) {\n- if (Double.compare(p1.x, dateline) == Double.compare(edges[i].next.next.coordinate.x, dateline)) {\n- // Ignore the ear\n- continue;\n- }\n- }\n edges[i].intersection(position);\n numIntersections++;\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -308,4 +308,66 @@ public void testComplexShapeWithHole() {\n \n assertPolygon(shape);\n }\n+\n+ @Test\n+ public void testShapeWithHoleAtEdgeEndPoints() {\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(-4, 2)\n+ .point(4, 2)\n+ .point(6, 0)\n+ .point(4, -2)\n+ .point(-4, -2)\n+ .point(-6, 0)\n+ .point(-4, 2);\n+\n+ builder.hole()\n+ .point(4, 1)\n+ .point(4, -1)\n+ .point(-4, -1)\n+ .point(-4, 1)\n+ .point(4, 1);\n+\n+ Shape shape = builder.close().build();\n+\n+ assertPolygon(shape);\n+ }\n+\n+ @Test\n+ public void testShapeWithPointOnDateline() {\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(180, 0)\n+ .point(176, 4)\n+ .point(176, -4)\n+ .point(180, 0);\n+\n+ Shape shape = builder.close().build();\n+\n+ assertPolygon(shape);\n+ }\n+\n+ @Test\n+ public void testShapeWithEdgeAlongDateline() {\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(180, 0)\n+ .point(176, 4)\n+ .point(180, -4)\n+ .point(180, 0);\n+\n+ Shape shape = builder.close().build();\n+\n+ assertPolygon(shape);\n+ }\n+\n+ @Test\n+ public void testShapeWithEdgeAcrossDateline() {\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(180, 0)\n+ .point(176, 4)\n+ .point(-176, 4)\n+ .point(180, 0);\n+\n+ Shape shape = builder.close().build();\n+\n+ assertPolygon(shape);\n+ }\n }", "filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java", "status": "modified" } ] }
{ "body": "I use Elasticsearch with Logstash and dynamic mapping.\nI my logs, I have geopoints with the array syntax.\nIt was OK with ES 1.0.X, but it's broken with ES 1.2.2.\n\nTemplate:\n\n```\ncurl -XPUT localhost:9200/_template/logstash -d '{\n \"template\": \"logstash_*\",\n \"settings\" : {\n \"refresh_interval\": \"30s\"\n },\n \"mappings\": {\n \"logs\": {\n \"_all\" : {\n \"enabled\": false\n },\n \"dynamic_templates\": [\n {\n \"location\": {\n \"match\": \"location*\",\n \"mapping\": {\n \"type\": \"geo_point\"\n }\n }\n },\n {\n \"generic\": {\n \"match\": \"*\",\n \"match_mapping_type\": \"string\",\n \"mapping\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n ],\n \"dynamic_date_formats\": [\n \"dateOptionalTime\",\n \"yyyy-MM-dd\",\n \"yyyy-MM-dd HH:mm:ss\"\n ]\n }\n }\n}'\n```\n\nDelete index:\n\n```\ncurl -XDELETE localhost:9200/logstash_test\n```\n\nTry to add a doc/log:\n\n```\ncurl -XPOST localhost:9200/logstash_test/logs -d '{\n \"location_array\": [\n 2.3069244,\n 48.8881598\n ]\n}'\n```\n\nIt fails with this message:\n\n```\n[2014-07-21 11:31:49,168][INFO ][cluster.metadata ] [fr-dev-01] [logstash_test] creating index, cause [auto(index api)], shards [5]/[1], mappings [logs]\n[2014-07-21 11:31:49,465][DEBUG][action.index ] [fr-dev-01] [logstash_test][1], node[zzMEQe9JSS6qEgw0oRgdVA], [P], s[STARTED]: Failed to execute [index {[logstash_test][logs][PhqP9fpgQkS4tHfOs105GQ], source[{\"location_array\":[2.3069244,48.8881598]}]}]\norg.elasticsearch.index.mapper.MapperParsingException: failed to parse\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:536)\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:462)\n at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:373)\n at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:203)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:534)\n at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:433)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:724)\nCaused by: org.elasticsearch.ElasticsearchParseException: geo_point expected\n at org.elasticsearch.common.geo.GeoUtils.parseGeoPoint(GeoUtils.java:421)\n at org.elasticsearch.index.mapper.geo.GeoPointFieldMapper.parse(GeoPointFieldMapper.java:530)\n at org.elasticsearch.index.mapper.object.ObjectMapper.parseDynamicValue(ObjectMapper.java:819)\n at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:639)\n at org.elasticsearch.index.mapper.object.ObjectMapper.serializeArray(ObjectMapper.java:625)\n at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:482)\n at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:515)\n ... 
8 more\n```\n\nIf I insert a doc with a geopoint as an object, it works, and the mapping is created dynamically.\nAfter that, I can insert a doc with a geopoint as an array without error.\n", "comments": [ { "body": "This also appears to be happening on the master branch.\n", "created_at": "2014-07-26T03:25:27Z" }, { "body": "Seems like it's blowing up since the geo_point is only parsing the first value instead of the 2 item array as a whole. It works the second time through because the mapper object within org.elasticsearch.index.mapper.object.ObjectMapper.serializeArray caches location_array as a geo_point so the array logic properly pulls the mapper. The first time through that entry is not there so the mapper comes up null and it tries to parse the array values individually. May I suggest adding some logic to the org.elasticsearch.index.mapper.object.ObjectMapper.serializeArray method to run the template builder logic to find the GeoMapper\n\n```\nprivate void serializeArray(ParseContext context, String lastFieldName) throws IOException {\n String arrayFieldName = lastFieldName;\n Mapper mapper = mappers.get(lastFieldName);\n //Start Check TemplateBuilder for geo\n if (mapper == null) {\n BuilderContext builderContext = new BuilderContext(context.indexSettings(), context.path());\n Mapper.Builder builder = context.root().findTemplateBuilder(context, arrayFieldName, \"geo_point\");\n if (builder != null) {\n mapper = builder.build(builderContext);\n mappers.put(lastFieldName, mapper);\n }\n }\n //End Check TemplateBuilder for geo\n if (mapper != null && mapper instanceof ArrayValueMapperParser) {\n mapper.parse(context);\n } else {\n```\n", "created_at": "2014-07-26T20:37:29Z" } ], "number": 6939, "title": "Mapping: Geopoint with array broken (dynamic mapping): geo_point expected" }
{ "body": "If a dynamic mapping for a geo_point field is defined and the first document specifies the value of the field as a geo_point array, the dynamic mapping throws an error as the array is broken into individual number before consulting the dynamic mapping configuration. This change adds a check of the dynamic mapping before the array is split into individual numbers.\n\nCloses #6939\n", "number": 7175, "review_comments": [], "title": "Fix dynamic mapping of geo_point fields" }
{ "commits": [ { "message": "Mapping: fixes dynamic mapping of geo_point fields\n\nIf a dynamic mapping for a geo_point field is defined and the first document specifies the value of the field as a geo_point array, the dynamic mapping throws an error as the array is broken into individual number before consulting the dynamic mapping configuration. This change adds a check of the dynamic mapping before the array is split into individual numbers.\n\nCloses #6939" } ], "files": [ { "diff": "@@ -575,34 +575,7 @@ private void serializeObject(final ParseContext context, String currentFieldName\n }\n BuilderContext builderContext = new BuilderContext(context.indexSettings(), context.path());\n objectMapper = builder.build(builderContext);\n- // ...now re add it\n- context.path().add(currentFieldName);\n- context.setMappingsModified();\n-\n- if (context.isWithinNewMapper()) {\n- // within a new mapper, no need to traverse, just parse\n- objectMapper.parse(context);\n- } else {\n- // create a context of new mapper, so we batch aggregate all the changes within\n- // this object mapper once, and traverse all of them to add them in a single go\n- context.setWithinNewMapper();\n- try {\n- objectMapper.parse(context);\n- FieldMapperListener.Aggregator newFields = new FieldMapperListener.Aggregator();\n- ObjectMapperListener.Aggregator newObjects = new ObjectMapperListener.Aggregator();\n- objectMapper.traverse(newFields);\n- objectMapper.traverse(newObjects);\n- // callback on adding those fields!\n- context.docMapper().addFieldMappers(newFields.mappers);\n- context.docMapper().addObjectMappers(newObjects.mappers);\n- } finally {\n- context.clearWithinNewMapper();\n- }\n- }\n-\n- // only put after we traversed and did the callbacks, so other parsing won't see it only after we\n- // properly traversed it and adding the mappers\n- putMapper(objectMapper);\n+ putDynamicMapper(context, currentFieldName, objectMapper);\n } else {\n objectMapper.parse(context);\n }\n@@ -622,22 +595,95 @@ private void serializeArray(ParseContext context, String lastFieldName) throws I\n if (mapper != null && mapper instanceof ArrayValueMapperParser) {\n mapper.parse(context);\n } else {\n- XContentParser parser = context.parser();\n- XContentParser.Token token;\n- while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n- if (token == XContentParser.Token.START_OBJECT) {\n- serializeObject(context, lastFieldName);\n- } else if (token == XContentParser.Token.START_ARRAY) {\n- serializeArray(context, lastFieldName);\n- } else if (token == XContentParser.Token.FIELD_NAME) {\n- lastFieldName = parser.currentName();\n- } else if (token == XContentParser.Token.VALUE_NULL) {\n- serializeNullValue(context, lastFieldName);\n- } else if (token == null) {\n- throw new MapperParsingException(\"object mapping for [\" + name + \"] with array for [\" + arrayFieldName + \"] tried to parse as array, but got EOF, is there a mismatch in types for the same field?\");\n- } else {\n- serializeValue(context, lastFieldName, token);\n+\n+ Dynamic dynamic = this.dynamic;\n+ if (dynamic == null) {\n+ dynamic = context.root().dynamic();\n+ }\n+ if (dynamic == Dynamic.STRICT) {\n+ throw new StrictDynamicMappingException(fullPath, arrayFieldName);\n+ } else if (dynamic == Dynamic.TRUE) {\n+ // we sync here just so we won't add it twice. 
Its not the end of the world\n+ // to sync here since next operations will get it before\n+ synchronized (mutex) {\n+ mapper = mappers.get(arrayFieldName);\n+ if (mapper == null) {\n+ Mapper.Builder builder = context.root().findTemplateBuilder(context, arrayFieldName, \"object\");\n+ if (builder == null) {\n+ serializeNonDynamicArray(context, lastFieldName, arrayFieldName);\n+ return;\n+ }\n+ BuilderContext builderContext = new BuilderContext(context.indexSettings(), context.path());\n+ mapper = builder.build(builderContext);\n+ if (mapper != null && mapper instanceof ArrayValueMapperParser) {\n+ putDynamicMapper(context, arrayFieldName, mapper);\n+ } else {\n+ serializeNonDynamicArray(context, lastFieldName, arrayFieldName);\n+ }\n+ } else {\n+ \n+ serializeNonDynamicArray(context, lastFieldName, arrayFieldName);\n+ }\n }\n+ } else {\n+ \n+ serializeNonDynamicArray(context, lastFieldName, arrayFieldName);\n+ }\n+ }\n+ }\n+\n+ private void putDynamicMapper(ParseContext context, String arrayFieldName, Mapper mapper) throws IOException {\n+ // ...now re add it\n+ context.path().add(arrayFieldName);\n+ context.setMappingsModified();\n+\n+ if (context.isWithinNewMapper()) {\n+ // within a new mapper, no need to traverse,\n+ // just parse\n+ mapper.parse(context);\n+ } else {\n+ // create a context of new mapper, so we batch\n+ // aggregate all the changes within\n+ // this object mapper once, and traverse all of\n+ // them to add them in a single go\n+ context.setWithinNewMapper();\n+ try {\n+ mapper.parse(context);\n+ FieldMapperListener.Aggregator newFields = new FieldMapperListener.Aggregator();\n+ ObjectMapperListener.Aggregator newObjects = new ObjectMapperListener.Aggregator();\n+ mapper.traverse(newFields);\n+ mapper.traverse(newObjects);\n+ // callback on adding those fields!\n+ context.docMapper().addFieldMappers(newFields.mappers);\n+ context.docMapper().addObjectMappers(newObjects.mappers);\n+ } finally {\n+ context.clearWithinNewMapper();\n+ }\n+ }\n+\n+ // only put after we traversed and did the\n+ // callbacks, so other parsing won't see it only\n+ // after we\n+ // properly traversed it and adding the mappers\n+ putMapper(mapper);\n+ }\n+\n+ private void serializeNonDynamicArray(ParseContext context, String lastFieldName, String arrayFieldName) throws IOException {\n+ XContentParser parser = context.parser();\n+ XContentParser.Token token;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n+ if (token == XContentParser.Token.START_OBJECT) {\n+ serializeObject(context, lastFieldName);\n+ } else if (token == XContentParser.Token.START_ARRAY) {\n+ serializeArray(context, lastFieldName);\n+ } else if (token == XContentParser.Token.FIELD_NAME) {\n+ lastFieldName = parser.currentName();\n+ } else if (token == XContentParser.Token.VALUE_NULL) {\n+ serializeNullValue(context, lastFieldName);\n+ } else if (token == null) {\n+ throw new MapperParsingException(\"object mapping for [\" + name + \"] with array for [\" + arrayFieldName + \"] tried to parse as array, but got EOF, is there a mismatch in types for the same field?\");\n+ } else {\n+ serializeValue(context, lastFieldName, token);\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/object/ObjectMapper.java", "status": "modified" }, { "diff": "@@ -345,6 +345,27 @@ public void testLonLatArray() throws Exception {\n assertThat(doc.rootDoc().get(\"point\"), equalTo(\"1.2,1.3\"));\n }\n \n+ @Test\n+ public void testLonLatArrayDynamic() throws Exception {\n+ String mapping = 
XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startArray(\"dynamic_templates\").startObject()\n+ .startObject(\"point\").field(\"match\", \"point*\").startObject(\"mapping\").field(\"type\", \"geo_point\").field(\"lat_lon\", true).endObject().endObject()\n+ .endObject().endArray()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .startArray(\"point\").value(1.3).value(1.2).endArray()\n+ .endObject()\n+ .bytes());\n+\n+ assertThat(doc.rootDoc().getField(\"point.lat\"), notNullValue());\n+ assertThat(doc.rootDoc().getField(\"point.lon\"), notNullValue());\n+ assertThat(doc.rootDoc().get(\"point\"), equalTo(\"1.2,1.3\"));\n+ }\n+\n @Test\n public void testLonLatArrayStored() throws Exception {\n String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")", "filename": "src/test/java/org/elasticsearch/index/mapper/geo/LatLonMappingGeoPointTests.java", "status": "modified" } ] }
{ "body": "Relates to #6655\n", "comments": [], "number": 7125, "title": "Aggregations: Terms Aggregation should only show key_as_string when format is specified" }
{ "body": "The key_as_string field is now not shown in the terms aggregation for long and double fields unless the format parameter is specified\n\nCloses #7125\n", "number": 7160, "review_comments": [], "title": "`key_as_string` only shown when format specified in terms agg" }
{ "commits": [ { "message": "Aggregations: key_as_string only shown when format specified in terms agg\n\nThe key_as_string field is now not shown in the terms aggregation for long and double fields unless the format parameter is specified\n\nCloses #7125" } ], "files": [ { "diff": "@@ -180,7 +180,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n for (InternalTerms.Bucket bucket : buckets) {\n builder.startObject();\n builder.field(CommonFields.KEY, ((Bucket) bucket).term);\n- if (formatter != null) {\n+ if (formatter != null && formatter != ValueFormatter.RAW) {\n builder.field(CommonFields.KEY_AS_STRING, formatter.format(((Bucket) bucket).term));\n }\n builder.field(CommonFields.DOC_COUNT, bucket.getDocCount());", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTerms.java", "status": "modified" }, { "diff": "@@ -181,7 +181,7 @@ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) th\n for (InternalTerms.Bucket bucket : buckets) {\n builder.startObject();\n builder.field(CommonFields.KEY, ((Bucket) bucket).term);\n- if (formatter != null) {\n+ if (formatter != null && formatter != ValueFormatter.RAW) {\n builder.field(CommonFields.KEY_AS_STRING, formatter.format(((Bucket) bucket).term));\n }\n builder.field(CommonFields.DOC_COUNT, bucket.getDocCount());", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTerms.java", "status": "modified" } ] }
{ "body": "When indexing IPs, the IP parser is too lenient. For example:\n\n``` bash\nPUT /ipaddr/\n{\n \"mappings\" : {\n \"temp\" : {\n \"properties\" : {\n \"addr\" : {\n \"type\" : \"ip\"\n }\n }\n }\n }\n}\n\nPOST ipaddr/temp\n{\n \"addr\" : \"127.0.011.1111111\"\n}\n```\n\nThis address is considered \"valid\", since the parser only checks for 4 dots. If there are four dots, the, string is split and each numeric is shifted to obtain the resulting Long. ([source](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java#L82-L87))\n\nThis IP is therefore \"converted\" into an entirely different IP:\n\n``` bash\nGET ipaddr/temp/_search?search_type=count\n{\n \"aggs\": {\n \"ips\": {\n \"terms\": {\n \"field\": \"addr\"\n }\n }\n }\n}\n\n{\n...\n \"aggregations\": {\n \"ips\": {\n \"buckets\": [\n {\n \"key\": 2131820359,\n \"key_as_string\": \"127.16.255.71\",\n \"doc_count\": 1\n }\n ]\n }\n }\n}\n```\n", "comments": [], "number": 7131, "title": "Mapping API: Improve IP address validation" }
{ "body": "Until now, IP addresses were only checked for four dots, which\nallowed invalid values like 127.0.0.111111\n\nThis adds an additional check for validation.\n\n**Note**: This does have a performance impact in the log file indexing case as it adds an additional parsing step. Maybe this was the reason, why it had not been implemented in the first case? We could potentially just reuse the code from guavas `InetAddresses.textToNumericFormatV4()` which is unfortunately private\n\nCloses #7131\n", "number": 7141, "review_comments": [ { "body": "Wondering why we need to keep that test. `InetAddresses.isInetAddress()` does not do it?\n\nAnother way for catching specifically your issue could be to control all octets size?\n\n``` java\n if (octets.length != 4 || octets[0].length() > 3 || octets[1].length() > 3 || octets[2].length() > 3 || octets[3].length() > 3) {\n throw new ElasticsearchIllegalArgumentException(\"failed to parse ip [\" + ip + \"], not full ip address (4 dots)\");\n }\n```\n", "created_at": "2014-08-04T12:39:33Z" }, { "body": "I think that if InetAddresses does it already, we might as well rely just on it\n", "created_at": "2014-08-05T12:49:49Z" } ], "title": "Improve IP address validation" }
{ "commits": [ { "message": "Mapping API: Improve IP address validation\n\nUntil now, IP addresses were only checked for four dots, which\nallowed invalid values like 127.0.0.111111\n\nThis adds an additional check for validation.\n\nCloses #7131" } ], "files": [ { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.mapper.ip;\n \n+import com.google.common.net.InetAddresses;\n import org.apache.lucene.analysis.NumericTokenStream;\n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.FieldType;\n@@ -79,9 +80,12 @@ public static String longToIp(long longIp) {\n \n public static long ipToLong(String ip) throws ElasticsearchIllegalArgumentException {\n try {\n+ if (!InetAddresses.isInetAddress(ip)) {\n+ throw new ElasticsearchIllegalArgumentException(\"failed to parse ip [\" + ip + \"], not a valid ip address\");\n+ }\n String[] octets = pattern.split(ip);\n if (octets.length != 4) {\n- throw new ElasticsearchIllegalArgumentException(\"failed to parse ip [\" + ip + \"], not full ip address (4 dots)\");\n+ throw new ElasticsearchIllegalArgumentException(\"failed to parse ip [\" + ip + \"], not a valid ipv4 address (4 dots)\");\n }\n return (Long.parseLong(octets[0]) << 24) + (Integer.parseInt(octets[1]) << 16) +\n (Integer.parseInt(octets[2]) << 8) + Integer.parseInt(octets[3]);", "filename": "src/main/java/org/elasticsearch/index/mapper/ip/IpFieldMapper.java", "status": "modified" }, { "diff": "@@ -19,34 +19,62 @@\n \n package org.elasticsearch.index.mapper.ip;\n \n-import org.elasticsearch.test.ElasticsearchTestCase;\n-import org.junit.Ignore;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.bootstrap.Elasticsearch;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.ParsedDocument;\n+import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n+import org.junit.Test;\n+\n+import static org.hamcrest.Matchers.*;\n \n /**\n *\n */\n-@Ignore(\"No tests?\")\n-public class SimpleIpMappingTests extends ElasticsearchTestCase {\n-\n- // No Longer enabled...\n-// @Test public void testAutoIpDetection() throws Exception {\n-// String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n-// .startObject(\"properties\").endObject()\n-// .endObject().endObject().string();\n-//\n-// XContentDocumentMapper defaultMapper = MapperTests.newParser().parse(mapping);\n-//\n-// ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n-// .startObject()\n-// .field(\"ip1\", \"127.0.0.1\")\n-// .field(\"ip2\", \"0.1\")\n-// .field(\"ip3\", \"127.0.0.1.2\")\n-// .endObject()\n-// .copiedBytes());\n-//\n-// assertThat(doc.doc().getFieldable(\"ip1\"), notNullValue());\n-// assertThat(doc.doc().get(\"ip1\"), nullValue()); // its numeric\n-// assertThat(doc.doc().get(\"ip2\"), equalTo(\"0.1\"));\n-// assertThat(doc.doc().get(\"ip3\"), equalTo(\"127.0.0.1.2\"));\n-// }\n+public class SimpleIpMappingTests extends ElasticsearchSingleNodeTest {\n+\n+ @Test\n+ public void testSimpleMapping() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"ip\").field(\"type\", \"ip\").endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n+\n+ ParsedDocument doc = 
defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"ip\", \"127.0.0.1\")\n+ .endObject()\n+ .bytes());\n+\n+ assertThat(doc.rootDoc().getField(\"ip\").numericValue().longValue(), is(2130706433L));\n+ assertThat(doc.rootDoc().get(\"ip\"), is(nullValue()));\n+ }\n+\n+ @Test\n+ public void testThatValidIpCanBeConvertedToLong() throws Exception {\n+ assertThat(IpFieldMapper.ipToLong(\"127.0.0.1\"), is(2130706433L));\n+ }\n+\n+ @Test\n+ public void testThatInvalidIpThrowsException() throws Exception {\n+ try {\n+ IpFieldMapper.ipToLong(\"127.0.011.1111111\");\n+ fail(\"Expected ip address parsing to fail but did not happen\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"not a valid ip address\"));\n+ }\n+ }\n+\n+ @Test\n+ public void testThatIpv6AddressThrowsException() throws Exception {\n+ try {\n+ IpFieldMapper.ipToLong(\"2001:db8:0:8d3:0:8a2e:70:7344\");\n+ fail(\"Expected ip address parsing to fail but did not happen\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"not a valid ipv4 address\"));\n+ }\n+ }\n+\n }", "filename": "src/test/java/org/elasticsearch/index/mapper/ip/SimpleIpMappingTests.java", "status": "modified" } ] }
{ "body": "We're on ElasticSearch 1.1.1 running on Illumos (Solaris 10 derivative on Joyent).\n\nWe ran into an issue today where elasticsearch became completely unresponsive after the following exception:\n\n```\n[2014-07-31 12:30:18,081][WARN ][monitor.jvm ] [HOSTNAME] [gc][young][3604571][140866] duration [1.9s], collections [1]/[2.2s], total [\n1.9s]/[1.2h], memory [22.7gb]->[21.4gb]/[29.1gb], all_pools {[young] [1.3gb]->[29.2mb]/[1.4gb]}{[survivor] [70mb]->[55.3mb]/[191.3mb]}{[old] [21.3gb]->[21.3\ngb]/[27.4gb]}\n[2014-07-31 12:30:27,075][WARN ][monitor.jvm ] [HOSTNAME] [gc][young][3604579][140869] duration [1.2s], collections [1]/[1.9s], total [\n1.2s]/[1.2h], memory [22.3gb]->[21.2gb]/[29.1gb], all_pools {[young] [1.1gb]->[29.8mb]/[1.4gb]}{[survivor] [52.9mb]->[46.8mb]/[191.3mb]}{[old] [21.2gb]->[21\n.2gb]/[27.4gb]}\n[2014-07-31 12:30:35,954][WARN ][http.netty ] [HOSTNAME] Caught exception while handling client http traffic, closing connection [id:\n0x810b66dd, /IPSOURCE:48650 => /IPDEST:9200]\norg.elasticsearch.common.netty.channel.ChannelException: java.net.SocketException: Invalid argument\n at org.elasticsearch.common.netty.channel.socket.DefaultSocketChannelConfig.setTcpNoDelay(DefaultSocketChannelConfig.java:178)\n at org.elasticsearch.common.netty.channel.socket.DefaultSocketChannelConfig.setOption(DefaultSocketChannelConfig.java:54)\n at org.elasticsearch.common.netty.channel.socket.nio.DefaultNioSocketChannelConfig.setOption(DefaultNioSocketChannelConfig.java:70)\n at org.elasticsearch.common.netty.channel.DefaultChannelConfig.setOptions(DefaultChannelConfig.java:36)\n at org.elasticsearch.common.netty.channel.socket.nio.DefaultNioSocketChannelConfig.setOptions(DefaultNioSocketChannelConfig.java:54)\n at org.elasticsearch.common.netty.bootstrap.ServerBootstrap$Binder.childChannelOpen(ServerBootstrap.java:399)\n at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:77)\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n at org.elasticsearch.common.netty.channel.Channels.fireChildChannelStateChanged(Channels.java:541)\n at org.elasticsearch.common.netty.channel.Channels.fireChannelOpen(Channels.java:167)\n at org.elasticsearch.common.netty.channel.socket.nio.NioAcceptedSocketChannel.<init>(NioAcceptedSocketChannel.java:42)\n at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.registerAcceptedChannel(NioServerBoss.java:137)\n at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.process(NioServerBoss.java:104)\n at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)\n at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)\n at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.net.SocketException: Invalid argument\n at sun.nio.ch.Net.setIntOption0(Native Method)\n at sun.nio.ch.Net.setSocketOption(Net.java:373)\n at 
sun.nio.ch.SocketChannelImpl.setOption(SocketChannelImpl.java:189)\n at sun.nio.ch.SocketAdaptor.setBooleanOption(SocketAdaptor.java:295)\n at sun.nio.ch.SocketAdaptor.setTcpNoDelay(SocketAdaptor.java:330)\n at org.elasticsearch.common.netty.channel.socket.DefaultSocketChannelConfig.setTcpNoDelay(DefaultSocketChannelConfig.java:176)\n ... 20 more\n```\n\nOn solaris, setsocketopt has different behavior that on other platforms. It will return EINVAL causing java to raise an InvalidArgument exception when the socket has been closed. Apparently this happens when the client closes the connection before the server has finished it's accept. Elasticsearch appears to have been doing a garbage collection around that time.\n\nHere's a couple references to this bug occurring in other projects:\n\nhttp://bugs.java.com/view_bug.do?bug_id=6378870\nhttps://java.net/jira/browse/GLASSFISH-5342\nhttps://jira.atlassian.com/browse/STASH-3624\n\nIt also appears that in Netty 4.0+ this might have been fixed by: https://github.com/netty/netty/commit/39357f3835f971e6cc1a0e41a805fa1293e7005e#diff-dbfa6a222217d4fc2c12d20ee3496eb3R50\n\nUnfortunately, this is a bit difficult to reproduce and it only happens rarely. I'd imagine it can by reproduced by running elasticsearch on Solaris 10, finding a way to stall the server long enough for the client to close the connection before the server has set the socket options. Elasticsearch search should then stall and stop responding to any requests (as is the behavior that we saw).\n\nThanks,\nPaul\n", "comments": [ { "body": "seems like on netty its not set by default only on Android, and still being set on Solaris. I would be more than happy to create a change to disable it on Solaris, others, thoughts?\n", "created_at": "2014-07-31T21:09:42Z" }, { "body": "@kimchy Correct, though it is now wrapped in an exception block and is ignored if it throws an error. TCP_NODELAY should still be set on Solaris but the behavior on a closed socket throws an exception and should be ignored.\n", "created_at": "2014-07-31T21:17:12Z" }, { "body": "Yes, the silent ignore in the exception... . I was just wondering why netty didn't disable it on Solaris by default as well. Based on your input, it seems like it should. I am reaching out to some solaris experts on our end to see what they think, just to be double sure we should make this change. Thanks for bringing it up!\n", "created_at": "2014-07-31T21:23:48Z" }, { "body": "Okay, awesome! Thanks so much!\n", "created_at": "2014-07-31T21:28:18Z" }, { "body": "@letuboy btw, which Java version are you running?\n", "created_at": "2014-07-31T21:32:04Z" }, { "body": "and another question, if you set it to `false`, does that happen (still gathering info, can probably find out on my own as well)? Its just the mere fact of calling `setTcpNoDelay`? If so, then we need not to set this setting at all on solaris, and at the very least, provide another setting to not set it (or another option, call it \"default\" to leave it as is)\n", "created_at": "2014-07-31T21:35:10Z" }, { "body": "We're using OpenJDK 1.7.\n\n```\nopenjdk version \"1.7.0-internal\"\nOpenJDK Runtime Environment (build 1.7.0-internal-pkgsrc_2014_05_16_23_21-b00)\nOpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode)\n```\n\nIt appears that the mere fact of calling `setTcpNoDelay` causes this. 
It's also very rare, but it has happened a few times.\n", "created_at": "2014-07-31T23:32:13Z" }, { "body": "@letuboy hard to tell exactly which Java version its actually is..., internal?\n", "created_at": "2014-08-02T14:59:02Z" }, { "body": "I pushed #7136 to master and 1.x (upcoming 1.4) to allow to set `default` as the value, and then it will not be set. Its not a good out of the box solution, but at least now users will have the option to configure ES not to set it at all.\n", "created_at": "2014-08-02T15:34:51Z" }, { "body": "OpenJDK 1.7 correlates to Java 7, as far as I'm aware of. Thanks so much! We'll look out for the 1.4 release and update the setting when that happens. This issue is rare, so it shouldn't be too much of an pain until then. I'll close this ticket.\n\n@sax @indirect\n", "created_at": "2014-08-02T20:55:34Z" } ], "number": 7115, "title": "On Solaris 10 (Illumos), setting TCP_NODELAY on a closed socket causes elasticsearch to be unresponsive" }
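As the comments note, the mere act of calling `setTcpNoDelay` on a socket whose peer has already disconnected is what trips the Solaris EINVAL. A hedged sketch of the defensive wrapper that newer Netty versions apply looks roughly like the following; this is illustrative code, not the Netty or Elasticsearch source.

```java
import java.net.Socket;
import java.net.SocketException;

// Illustrative guard for the failure described above: on Solaris, setsockopt(TCP_NODELAY)
// on a socket whose peer already disconnected fails with EINVAL, surfacing as a
// SocketException. Newer Netty versions wrap the call and ignore the error, since there
// is nothing left to tune on a dead connection.
public final class TcpNoDelayGuard {

    private TcpNoDelayGuard() {}

    public static void trySetTcpNoDelay(Socket socket, boolean noDelay) {
        try {
            socket.setTcpNoDelay(noDelay);
        } catch (SocketException e) {
            // The connection is already gone (or the platform rejected the option);
            // failing the whole accept loop here would stall the server, so log and move on.
            System.err.println("ignoring failure to set TCP_NODELAY: " + e.getMessage());
        }
    }
}
```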
{ "body": "Allow to set the value default to network.tcp.no_delay and network.tcp.keep_alive so they won't be set at all, since on solaris, setting tcpNoDelay can actually cause failure\nrelates to #7115\n", "number": 7136, "review_comments": [], "title": "Support \"default\" for tcpNoDelay and tcpKeepAlive " }
{ "commits": [ { "message": "Support \"default\" for tcpNoDelay and tcpKeepAlive\nAllow to set the value default to network.tcp.no_delay and network.tcp.keep_alive so they won't be set at all, since on solaris, setting tcpNoDelay can actually cause failure\nrelates to #7115" } ], "files": [ { "diff": "@@ -72,10 +72,10 @@ share the following allowed settings:\n |=======================================================================\n |Setting |Description\n |`network.tcp.no_delay` |Enable or disable tcp no delay setting.\n-Defaults to `true`.\n+Defaults to `true`. coming[1.4,Can be set to `default` to not be set at all.]\n \n-|`network.tcp.keep_alive` |Enable or disable tcp keep alive. By default\n-not explicitly set.\n+|`network.tcp.keep_alive` |Enable or disable tcp keep alive. Defaults\n+to `true`. coming[1.4,Can be set to `default` to not be set at all].\n \n |`network.tcp.reuse_address` |Should an address be reused or not.\n Defaults to `true` on non-windows machines.", "filename": "docs/reference/modules/network.asciidoc", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.http.netty;\n \n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.netty.NettyUtils;\n@@ -88,10 +89,8 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpSer\n \n private final String publishHost;\n \n- private final Boolean tcpNoDelay;\n-\n- private final Boolean tcpKeepAlive;\n-\n+ private final String tcpNoDelay;\n+ private final String tcpKeepAlive;\n private final Boolean reuseAddress;\n \n private final ByteSizeValue tcpSendBufferSize;\n@@ -135,8 +134,8 @@ public NettyHttpServerTransport(Settings settings, NetworkService networkService\n this.port = componentSettings.get(\"port\", settings.get(\"http.port\", \"9200-9300\"));\n this.bindHost = componentSettings.get(\"bind_host\", settings.get(\"http.bind_host\", settings.get(\"http.host\")));\n this.publishHost = componentSettings.get(\"publish_host\", settings.get(\"http.publish_host\", settings.get(\"http.host\")));\n- this.tcpNoDelay = componentSettings.getAsBoolean(\"tcp_no_delay\", settings.getAsBoolean(TCP_NO_DELAY, true));\n- this.tcpKeepAlive = componentSettings.getAsBoolean(\"tcp_keep_alive\", settings.getAsBoolean(TCP_KEEP_ALIVE, true));\n+ this.tcpNoDelay = componentSettings.get(\"tcp_no_delay\", settings.get(TCP_NO_DELAY, \"true\"));\n+ this.tcpKeepAlive = componentSettings.get(\"tcp_keep_alive\", settings.get(TCP_KEEP_ALIVE, \"true\"));\n this.reuseAddress = componentSettings.getAsBoolean(\"reuse_address\", settings.getAsBoolean(TCP_REUSE_ADDRESS, NetworkUtils.defaultReuseAddress()));\n this.tcpSendBufferSize = componentSettings.getAsBytesSize(\"tcp_send_buffer_size\", settings.getAsBytesSize(TCP_SEND_BUFFER_SIZE, TCP_DEFAULT_SEND_BUFFER_SIZE));\n this.tcpReceiveBufferSize = componentSettings.getAsBytesSize(\"tcp_receive_buffer_size\", settings.getAsBytesSize(TCP_RECEIVE_BUFFER_SIZE, TCP_DEFAULT_RECEIVE_BUFFER_SIZE));\n@@ -197,11 +196,11 @@ protected void doStart() throws ElasticsearchException {\n \n serverBootstrap.setPipelineFactory(configureServerChannelPipelineFactory());\n \n- if (tcpNoDelay != null) {\n- serverBootstrap.setOption(\"child.tcpNoDelay\", tcpNoDelay);\n+ if (!\"default\".equals(tcpNoDelay)) {\n+ serverBootstrap.setOption(\"child.tcpNoDelay\", Booleans.parseBoolean(tcpNoDelay, 
null));\n }\n- if (tcpKeepAlive != null) {\n- serverBootstrap.setOption(\"child.keepAlive\", tcpKeepAlive);\n+ if (!\"default\".equals(tcpKeepAlive)) {\n+ serverBootstrap.setOption(\"child.keepAlive\", Booleans.parseBoolean(tcpKeepAlive, null));\n }\n if (tcpSendBufferSize != null && tcpSendBufferSize.bytes() > 0) {\n serverBootstrap.setOption(\"child.sendBufferSize\", tcpSendBufferSize.bytes());", "filename": "src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import com.google.common.collect.Lists;\n import org.elasticsearch.*;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.ReleasableBytesReference;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n@@ -128,11 +129,8 @@ public class NettyTransport extends AbstractLifecycleComponent<Transport> implem\n final boolean compress;\n \n final TimeValue connectTimeout;\n-\n- final Boolean tcpNoDelay;\n-\n- final Boolean tcpKeepAlive;\n-\n+ final String tcpNoDelay;\n+ final String tcpKeepAlive;\n final Boolean reuseAddress;\n \n final ByteSizeValue tcpSendBufferSize;\n@@ -196,8 +194,8 @@ public NettyTransport(Settings settings, ThreadPool threadPool, NetworkService n\n this.publishPort = componentSettings.getAsInt(\"publish_port\", settings.getAsInt(\"transport.publish_port\", 0));\n this.compress = settings.getAsBoolean(TransportSettings.TRANSPORT_TCP_COMPRESS, false);\n this.connectTimeout = componentSettings.getAsTime(\"connect_timeout\", settings.getAsTime(\"transport.tcp.connect_timeout\", settings.getAsTime(TCP_CONNECT_TIMEOUT, TCP_DEFAULT_CONNECT_TIMEOUT)));\n- this.tcpNoDelay = componentSettings.getAsBoolean(\"tcp_no_delay\", settings.getAsBoolean(TCP_NO_DELAY, true));\n- this.tcpKeepAlive = componentSettings.getAsBoolean(\"tcp_keep_alive\", settings.getAsBoolean(TCP_KEEP_ALIVE, true));\n+ this.tcpNoDelay = componentSettings.get(\"tcp_no_delay\", settings.get(TCP_NO_DELAY, \"true\"));\n+ this.tcpKeepAlive = componentSettings.get(\"tcp_keep_alive\", settings.get(TCP_KEEP_ALIVE, \"true\"));\n this.reuseAddress = componentSettings.getAsBoolean(\"reuse_address\", settings.getAsBoolean(TCP_REUSE_ADDRESS, NetworkUtils.defaultReuseAddress()));\n this.tcpSendBufferSize = componentSettings.getAsBytesSize(\"tcp_send_buffer_size\", settings.getAsBytesSize(TCP_SEND_BUFFER_SIZE, TCP_DEFAULT_SEND_BUFFER_SIZE));\n this.tcpReceiveBufferSize = componentSettings.getAsBytesSize(\"tcp_receive_buffer_size\", settings.getAsBytesSize(TCP_RECEIVE_BUFFER_SIZE, TCP_DEFAULT_RECEIVE_BUFFER_SIZE));\n@@ -271,11 +269,11 @@ protected void doStart() throws ElasticsearchException {\n }\n clientBootstrap.setPipelineFactory(configureClientChannelPipelineFactory());\n clientBootstrap.setOption(\"connectTimeoutMillis\", connectTimeout.millis());\n- if (tcpNoDelay != null) {\n- clientBootstrap.setOption(\"tcpNoDelay\", tcpNoDelay);\n+ if (!\"default\".equals(tcpNoDelay)) {\n+ clientBootstrap.setOption(\"tcpNoDelay\", Booleans.parseBoolean(tcpNoDelay, null));\n }\n- if (tcpKeepAlive != null) {\n- clientBootstrap.setOption(\"keepAlive\", tcpKeepAlive);\n+ if (!\"default\".equals(tcpKeepAlive)) {\n+ clientBootstrap.setOption(\"keepAlive\", Booleans.parseBoolean(tcpKeepAlive, null));\n }\n if (tcpSendBufferSize != null && tcpSendBufferSize.bytes() > 0) {\n clientBootstrap.setOption(\"sendBufferSize\", tcpSendBufferSize.bytes());\n@@ -306,11 +304,11 
@@ protected void doStart() throws ElasticsearchException {\n workerCount));\n }\n serverBootstrap.setPipelineFactory(configureServerChannelPipelineFactory());\n- if (tcpNoDelay != null) {\n- serverBootstrap.setOption(\"child.tcpNoDelay\", tcpNoDelay);\n+ if (!\"default\".equals(tcpNoDelay)) {\n+ serverBootstrap.setOption(\"child.tcpNoDelay\", Booleans.parseBoolean(tcpNoDelay, null));\n }\n- if (tcpKeepAlive != null) {\n- serverBootstrap.setOption(\"child.keepAlive\", tcpKeepAlive);\n+ if (!\"default\".equals(tcpKeepAlive)) {\n+ serverBootstrap.setOption(\"child.keepAlive\", Booleans.parseBoolean(tcpKeepAlive, null));\n }\n if (tcpSendBufferSize != null && tcpSendBufferSize.bytes() > 0) {\n serverBootstrap.setOption(\"child.sendBufferSize\", tcpSendBufferSize.bytes());", "filename": "src/main/java/org/elasticsearch/transport/netty/NettyTransport.java", "status": "modified" } ] }
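The diffs above repeat the same pattern in three places: keep the setting as a string, skip the Netty option entirely when it is the literal `default`, and otherwise parse it as a boolean. A condensed sketch of that pattern follows; the `applyBooleanOption` helper and the plain `Map` of options are hypothetical stand-ins for `serverBootstrap.setOption(...)`, and `Boolean.parseBoolean` replaces Elasticsearch's `Booleans.parseBoolean`.

```java
import java.util.HashMap;
import java.util.Map;

// Condensed sketch of the pattern used in the diffs above; names are hypothetical.
public final class TcpOptionSketch {

    private TcpOptionSketch() {}

    /** Applies a boolean socket option unless the configured value is the literal "default". */
    static void applyBooleanOption(Map<String, Object> options, String key, String configuredValue) {
        if ("default".equals(configuredValue)) {
            return; // leave the JVM/OS default untouched, e.g. to sidestep the Solaris EINVAL issue
        }
        options.put(key, Boolean.parseBoolean(configuredValue));
    }

    public static void main(String[] args) {
        Map<String, Object> options = new HashMap<>();
        applyBooleanOption(options, "child.tcpNoDelay", "default"); // not set at all
        applyBooleanOption(options, "child.keepAlive", "true");     // explicitly enabled
        System.out.println(options); // {child.keepAlive=true}
    }
}
```

A user who hits the Solaris problem would then set `network.tcp.no_delay: default` so the option is never touched.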
{ "body": "To reproduce:\n\n```\n\nDELETE testidx\n\nPUT testidx\n{\n \"settings\": {\n \"index.translog.disable_flush\": true,\n \"index.number_of_shards\": 1,\n \"refresh_interval\": \"1h\"\n },\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"text\": {\n \"type\": \"string\",\n \"term_vector\": \"with_positions_offsets\"\n }\n }\n }\n }\n}\nPOST testidx/doc/1\n{\n \"text\": \"foo bar\"\n}\n\nGET testidx/doc/1/_termvector\n\n```\n\nresults in \n\n```\n{\n \"error\": \"JsonGenerationException[Current context not an object but ROOT]\",\n \"status\": 500\n}\n```\n\nA more meaningful error message maybe?\n", "comments": [], "number": 7121, "title": "`_termvector` returns `JsonGenerationException` if called between index and refresh" }
{ "body": "Closes #7121\n", "number": 7124, "review_comments": [ { "body": "I'd disable refresh entirely (`-1`) just to make sure that the document is not found in the index.\n", "created_at": "2014-08-01T13:37:42Z" }, { "body": "can we name this file following the convention used with the other REST tests? something like 02_bla_bla.yaml or even add the test to the existing yaml file for term_vector?\n", "created_at": "2014-08-01T13:39:01Z" }, { "body": "Was the additional `endObject` causing a bug? If it's not needed anymore it means it closed an object one too many times before this fix?\n", "created_at": "2014-08-01T13:46:32Z" }, { "body": "Yes that's correct, this could resolve some other bugs for when anytime the requested doc does not exist.\n", "created_at": "2014-08-01T14:21:18Z" }, { "body": "Makes sense, then I would add a specific Java test for it and create a separate issue marked as bug for it? Or maybe just adapt the title and label of this one, cause it seems that we already return the proper boolean flag but we might break trying to do that?\n", "created_at": "2014-08-01T14:29:07Z" }, { "body": "Actually, looking at the original issue (not the PR) everything is clearer, I would just mark it as a bug then.\n", "created_at": "2014-08-01T14:30:36Z" }, { "body": "But still, a Java test would be fantastic to have around this :)\n", "created_at": "2014-08-01T14:31:22Z" } ], "title": "Return found: false for docs requested between index and refresh" }
{ "commits": [ { "message": "Term vector API: return 'found: false' for docs between index and refresh\n\nCloses #7121" }, { "message": "Addressed comments" }, { "message": "Check response is JSON serializable and mention of near realtime" } ], "files": [ { "diff": "@@ -4,7 +4,9 @@\n added[1.0.0.Beta1]\n \n Returns information and statistics on terms in the fields of a\n-particular document as stored in the index.\n+particular document as stored in the index. Note that this is a\n+near realtime API as the term vectors are not available until the\n+next refresh.\n \n [source,js]\n --------------------------------------------------", "filename": "docs/reference/docs/termvectors.asciidoc", "status": "modified" }, { "diff": "@@ -0,0 +1,36 @@\n+setup:\n+ - do:\n+ indices.create:\n+ index: testidx\n+ body:\n+ settings:\n+ \"index.translog.disable_flush\": true\n+ \"index.number_of_shards\": 1\n+ \"refresh_interval\": -1\n+ mappings:\n+ doc:\n+ \"properties\":\n+ \"text\":\n+ \"type\" : \"string\"\n+ \"term_vector\" : \"with_positions_offsets\"\n+ - do:\n+ index:\n+ index: testidx\n+ type: doc\n+ id: 1\n+ body:\n+ \"text\" : \"foo bar\"\n+\n+---\n+\"Term vector API should return 'found: false' for docs between index and refresh\":\n+\n+ - do:\n+ termvector:\n+ index: testidx\n+ type: doc\n+ id: 1\n+\n+ - match: { \"_index\": \"testidx\" }\n+ - match: { \"_type\": \"doc\" }\n+ - match: { \"_id\": \"1\" }\n+ - match: { \"found\": false }", "filename": "rest-api-spec/test/termvector/20_issue7121.yaml", "status": "added" }, { "diff": "@@ -170,7 +170,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(FieldStrings._VERSION, docVersion);\n builder.field(FieldStrings.FOUND, isExists());\n if (!isExists()) {\n- builder.endObject();\n return builder;\n }\n builder.startObject(FieldStrings.TERM_VECTORS);", "filename": "src/main/java/org/elasticsearch/action/termvector/TermVectorResponse.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.action.ActionFuture;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.core.AbstractFieldMapper;\n@@ -68,6 +69,8 @@ public void testNoSuchDoc() throws Exception {\n TermVectorResponse actionGet = termVector.actionGet();\n assertThat(actionGet, notNullValue());\n assertThat(actionGet.isExists(), equalTo(false));\n+ // check response is nevertheless serializable to json\n+ actionGet.toXContent(XContentFactory.jsonBuilder(), ToXContent.EMPTY_PARAMS);\n }\n }\n ", "filename": "src/test/java/org/elasticsearch/action/termvector/GetTermVectorTests.java", "status": "modified" } ] }
{ "body": "When the cluster is in the ClusterBlockException state (Eg. not enough master to meet min master nodes), the ClusterBlockException cannot be caught for a bulk request when using node client:\n\n``` java\n BulkRequestBuilder brb = client.prepareBulk();\n XContentBuilder builder = XContentFactory.jsonBuilder().startObject().field(\"bfield1\", \"bvalue1\").endObject();\n String jsonString = builder.string();\n IndexRequestBuilder irb = client.prepareIndex(INDEX_NAME,TYPE_NAME,\"b1\");\n irb.setSource(jsonString);\n brb.add(irb);\n BulkResponse bulkResponse = brb.execute().actionGet();\n```\n\nReturns the exception:\n\n```\nException in thread \"elasticsearch[client_node][generic][T#4]\" org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];\n at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:138)\n at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:128)\n at org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(TransportBulkAction.java:197)\n at org.elasticsearch.action.bulk.TransportBulkAction.access$000(TransportBulkAction.java:65)\n at org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(TransportBulkAction.java:143)\n at org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(TransportAction.java:119)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n```\n\nThe above cannot be caught in a catch(ClusterBlockException exception) clause. However, when the cluster is in the same state using the node client, the ClusterBlockException can be caught for search requests.\n", "comments": [], "number": 7086, "title": "ClusterBlockException cannot be caught for bulk request when using node client" }
{ "body": "...n to not return\n\nwhen there is a cluster block (like no master yet discovered), the bulk action doesn't properly catch the exception of inner execute to notify the listener, causing the bulk operation to hang\ncloses #7086\n", "number": 7109, "review_comments": [], "title": "Cluster block with auto create index bulk action can cause bulk execution to not return" }
{ "commits": [ { "message": "cluster block with auto create index bulk action can cause bulk execution to not return\nwhen there is a cluster block (like no master yet discovered), the bulk action doesn't properly catch the exception of inner execute to notify the listener, causing the bulk operation to hang\ncloses #7086" } ], "files": [ { "diff": "@@ -96,7 +96,7 @@ public BulkRequest newRequestInstance(){\n @Override\n protected void doExecute(final BulkRequest bulkRequest, final ActionListener<BulkResponse> listener) {\n final long startTime = System.currentTimeMillis();\n- final AtomicArray<BulkItemResponse> responses = new AtomicArray<BulkItemResponse>(bulkRequest.requests.size());\n+ final AtomicArray<BulkItemResponse> responses = new AtomicArray<>(bulkRequest.requests.size());\n \n if (autoCreateIndex.needToCheck()) {\n final Set<String> indices = Sets.newHashSet();\n@@ -125,7 +125,7 @@ protected void doExecute(final BulkRequest bulkRequest, final ActionListener<Bul\n ClusterState state = clusterService.state();\n for (final String index : indices) {\n if (autoCreateIndex.shouldAutoCreate(index, state)) {\n- createIndexAction.execute(new CreateIndexRequest(index).cause(\"auto(bulk api)\"), new ActionListener<CreateIndexResponse>() {\n+ createIndexAction.execute(new CreateIndexRequest(index).cause(\"auto(bulk api)\").masterNodeTimeout(bulkRequest.timeout()), new ActionListener<CreateIndexResponse>() {\n @Override\n public void onResponse(CreateIndexResponse result) {\n if (counter.decrementAndGet() == 0) {\n@@ -145,7 +145,11 @@ public void onFailure(Throwable e) {\n }\n }\n if (counter.decrementAndGet() == 0) {\n- executeBulk(bulkRequest, startTime, listener, responses);\n+ try {\n+ executeBulk(bulkRequest, startTime, listener, responses);\n+ } catch (Throwable t) {\n+ listener.onFailure(t);\n+ }\n }\n }\n });", "filename": "src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java", "status": "modified" }, { "diff": "@@ -19,13 +19,14 @@\n \n package org.elasticsearch.cluster;\n \n-import com.google.common.base.Predicate;\n+import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.percolate.PercolateSourceBuilder;\n import org.elasticsearch.cluster.block.ClusterBlockException;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.discovery.Discovery;\n+import org.elasticsearch.discovery.MasterNotDiscoveredException;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n@@ -36,21 +37,24 @@\n import java.util.HashMap;\n \n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n-import static org.elasticsearch.test.ElasticsearchIntegrationTest.*;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n+import static org.hamcrest.Matchers.*;\n \n /**\n */\n-@ClusterScope(scope= Scope.TEST, numDataNodes =0)\n+@ClusterScope(scope = Scope.TEST, numDataNodes = 0)\n public class NoMasterNodeTests extends ElasticsearchIntegrationTest {\n \n @Test\n @TestLogging(\"action:TRACE,cluster.service:TRACE\")\n public void testNoMasterActions() throws Exception {\n+ // note, sometimes, we want to check with the fact that an index gets created, sometimes not...\n+ boolean 
autoCreateIndex = randomBoolean();\n+ logger.info(\"auto_create_index set to {}\", autoCreateIndex);\n+\n Settings settings = settingsBuilder()\n .put(\"discovery.type\", \"zen\")\n- .put(\"action.auto_create_index\", false)\n+ .put(\"action.auto_create_index\", autoCreateIndex)\n .put(\"discovery.zen.minimum_master_nodes\", 2)\n .put(\"discovery.zen.ping_timeout\", \"200ms\")\n .put(\"discovery.initial_state_timeout\", \"500ms\")\n@@ -75,14 +79,14 @@ public void run() {\n try {\n client().prepareGet(\"test\", \"type1\", \"1\").execute().actionGet();\n fail(\"Expected ClusterBlockException\");\n- } catch (ClusterBlockException e) {\n+ } catch (ClusterBlockException | MasterNotDiscoveredException e) {\n assertThat(e.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n }\n \n try {\n client().prepareMultiGet().add(\"test\", \"type1\", \"1\").execute().actionGet();\n fail(\"Expected ClusterBlockException\");\n- } catch (ClusterBlockException e) {\n+ } catch (ClusterBlockException | MasterNotDiscoveredException e) {\n assertThat(e.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n }\n \n@@ -93,42 +97,61 @@ public void run() {\n .setIndices(\"test\").setDocumentType(\"type1\")\n .setSource(percolateSource).execute().actionGet();\n fail(\"Expected ClusterBlockException\");\n- } catch (ClusterBlockException e) {\n+ } catch (ClusterBlockException | MasterNotDiscoveredException e) {\n assertThat(e.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n }\n \n long now = System.currentTimeMillis();\n try {\n client().prepareUpdate(\"test\", \"type1\", \"1\").setScript(\"test script\", ScriptService.ScriptType.INLINE).setTimeout(timeout).execute().actionGet();\n fail(\"Expected ClusterBlockException\");\n- } catch (ClusterBlockException e) {\n+ } catch (ClusterBlockException | MasterNotDiscoveredException e) {\n assertThat(System.currentTimeMillis() - now, greaterThan(timeout.millis() - 50));\n assertThat(e.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n }\n \n try {\n client().admin().indices().prepareAnalyze(\"test\", \"this is a test\").execute().actionGet();\n fail(\"Expected ClusterBlockException\");\n- } catch (ClusterBlockException e) {\n+ } catch (ClusterBlockException | MasterNotDiscoveredException e) {\n assertThat(e.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n }\n \n try {\n client().prepareCount(\"test\").execute().actionGet();\n fail(\"Expected ClusterBlockException\");\n- } catch (ClusterBlockException e) {\n+ } catch (ClusterBlockException | MasterNotDiscoveredException e) {\n assertThat(e.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n }\n \n now = System.currentTimeMillis();\n try {\n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(XContentFactory.jsonBuilder().startObject().endObject()).setTimeout(timeout).execute().actionGet();\n fail(\"Expected ClusterBlockException\");\n- } catch (ClusterBlockException e) {\n+ } catch (ClusterBlockException | MasterNotDiscoveredException e) {\n assertThat(System.currentTimeMillis() - now, greaterThan(timeout.millis() - 50));\n assertThat(e.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n }\n \n+ now = System.currentTimeMillis();\n+ try {\n+ BulkRequestBuilder bulkRequestBuilder = client().prepareBulk();\n+ bulkRequestBuilder.add(client().prepareIndex(\"test\", \"type1\", \"1\").setSource(XContentFactory.jsonBuilder().startObject().endObject()));\n+ bulkRequestBuilder.add(client().prepareIndex(\"test\", \"type1\", \"2\").setSource(XContentFactory.jsonBuilder().startObject().endObject()));\n+ 
bulkRequestBuilder.setTimeout(timeout);\n+ bulkRequestBuilder.get();\n+ fail(\"Expected ClusterBlockException\");\n+ } catch (ClusterBlockException | MasterNotDiscoveredException e) {\n+ if (autoCreateIndex) {\n+ // if its auto create index, the timeout will be based on the create index API\n+ assertThat(System.currentTimeMillis() - now, greaterThan(timeout.millis() - 50));\n+ } else {\n+ // TODO note, today we don't retry on global block for bulk operations-Dtests.seed=80C397728140167\n+ assertThat(System.currentTimeMillis() - now, lessThan(50l));\n+ }\n+ assertThat(e.status(), equalTo(RestStatus.SERVICE_UNAVAILABLE));\n+ }\n+\n internalCluster().startNode(settings);\n client().admin().cluster().prepareHealth().setWaitForGreenStatus().setWaitForNodes(\"2\").execute().actionGet();\n }", "filename": "src/test/java/org/elasticsearch/cluster/NoMasterNodeTests.java", "status": "modified" } ] }
{ "body": "ES version: 1.1.1\n\nWhen I remove all percolators and create new, deleted percolators are resurrected.\n\nMy test:\n1. Create empty index:\n \n curl -XPUT myhost:9200/myemptyindex\n2. Create two percolators: \n\ncurl -XPUT myhost:9200/myemptyindex/.percolator/1 -d '{query:{match:{t:1}}}'\n\ncurl -XPUT myhost:9200/myemptyindex/.percolator/2 -d '{query:{match:{t:2}}}'\n1. Delete all percolators:\n\ncurl -XDELETE myhost:9200/myemptyindex/.percolator\n1. Check deletion:\n\ncurl -XGET myhost:9200/myemptyindex/.percolator/_search\n\nNo percolators found.\n1. Create new percolator:\n\ncurl -XPUT myhost:9200/myemptyindex/.percolator/3 -d '{query:{match:{t:3}}}'\n1. Get all percolators:\n\ncurl -XGET myhost:9200/myemptyindex/.percolator/_search\n\nI see deleted percolators (1,2) are resurrected, it is incorrect.\n\nI think it may be depended on percolators caching or something like that.\n", "comments": [ { "body": "HI @hovsep \n\nYes, this is a known bug. Deleting a type deletes the documents within the type using delete-by-query, which doesn't unregister the in-memory percolators.\n\nSee #7052 \n", "created_at": "2014-07-30T10:35:34Z" }, { "body": "Depends on #7052\n", "created_at": "2014-07-30T10:36:03Z" } ], "number": 7087, "title": "Strange behavior after .percolator type deletion" }
{ "body": "The `.percolator` type is a hidden type and therefor the types from the delete mapping request should passed down to the delete by query request, otherwise the percolator type gets ignored and the percolator queries don't get deleted from disk (only unregistered).\n\nCloses #7087\n", "number": 7091, "review_comments": [], "title": "Pass down the types from the delete mapping request to the delete by query request" }
{ "commits": [ { "message": "Core: Pass down the types from the delete mapping request to the delete by query request.\n\nThe `.percolator` type is a hidden type and therefor the types from the delete mapping request should passed down to the delete by query request, otherwise the percolator type gets ignored and the percolator queries don't get deleted from disk (only unregistered).\n\nCloses #7087" } ], "files": [ { "diff": "@@ -138,7 +138,7 @@ public void onResponse(FlushResponse flushResponse) {\n request.types(types.toArray(new String[types.size()]));\n QuerySourceBuilder querySourceBuilder = new QuerySourceBuilder()\n .setQuery(QueryBuilders.filteredQuery(QueryBuilders.matchAllQuery(), filterBuilder));\n- deleteByQueryAction.execute(Requests.deleteByQueryRequest(concreteIndices).source(querySourceBuilder), new ActionListener<DeleteByQueryResponse>() {\n+ deleteByQueryAction.execute(Requests.deleteByQueryRequest(concreteIndices).types(request.types()).source(querySourceBuilder), new ActionListener<DeleteByQueryResponse>() {\n @Override\n public void onResponse(DeleteByQueryResponse deleteByQueryResponse) {\n if (logger.isTraceEnabled()) {", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/delete/TransportDeleteMappingAction.java", "status": "modified" }, { "diff": "@@ -1611,6 +1611,9 @@ public boolean apply(Object o) {\n .setPercolateDoc(docBuilder().setDoc(jsonBuilder().startObject().field(\"field1\", \"b\").endObject()))\n .execute().actionGet();\n assertMatchCount(response, 0l);\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"test1\", \"test2\").get();\n+ assertHitCount(searchResponse, 0);\n }\n \n public static String[] convertFromTextArray(PercolateResponse.Match[] matches, String index) {", "filename": "src/test/java/org/elasticsearch/percolator/PercolatorTests.java", "status": "modified" } ] }
{ "body": "The RetryListener was notified twice for each single failure, which caused some additional retries, but more importantly was making the client reach the maximum number of retries (number of connected nodes) too quickly, meanwhile ongoing retries which could succeed are not completed yet.\n\nThe `TransportService` already notifies the listener of any exception received from a separate thread through the request holder, no need to notify the retry listener again in any other place (either catch or `onFailure` method itself).\n", "comments": [ { "body": "The fix look good to me (left some comments regarding comments :) ). I'd love to see a unit test as opposed to an integration test. I think we'd get much more out of it.\n", "created_at": "2014-07-11T14:03:55Z" }, { "body": "Pushed new commits to address comments and a unit test for it as suggested by @bleskes . I also changed a bit how we catch exceptions given how they get thrown by the `TransportService`. Ready for reviews!\n", "created_at": "2014-07-25T16:58:02Z" }, { "body": "LGTM, very clean now indeed!\n", "created_at": "2014-07-28T17:37:55Z" }, { "body": "Side note: as part of the work to fix this issue the `throwConnectException` option was removed from the `TransportService`.\n", "created_at": "2014-07-28T18:59:45Z" } ], "number": 6829, "title": "Fixed the node retry mechanism which could fail without trying all the connected nodes" }
{ "body": "This commit effectively reverts e1aa91d , as it is not needed anymore to add the original listed nodes. The cluster state local call made will in fact always return at least the local node (see #6811).\n\nThere were a couple of downsides caused by putting the original listed nodes among the connected nodes:\n1) in the following retries, they weren't seen as listed nodes anymore, thus the light connect wasn't used\n2) among the connected nodes some were \"bad\" duplicates as they are already there and don't contain all needed info for each node. This was causing serialization problems for instance given that the node version was missing on the `DiscoveryNode` object (or `minCompatibilityVersion` after #6894).\n\n(As a side note, the fact that nodes were appearing twice in the list was hiding #6829 in sniff mode, as more nodes than expected were in the list and then retried)\n\nNext step is to enable transport client `sniff` mode in our tests, already in the work on a public branch: https://github.com/elasticsearch/elasticsearch/tree/enhancement/test-enable-transport-client-sniff .\n", "number": 7067, "review_comments": [], "title": "Transport client: Don't add listed nodes to connected nodes list in sniff mode" }
{ "commits": [ { "message": "Transport client: don't add listed nodes to connected nodes list in sniff mode\n\nThis commit effectively reverts e1aa91d , as it is not needed anymore to add the original listed nodes. The cluster state local call made will in fact always return at least the local node (see #6811).\n\nThere were a couple of downsides caused by putting the original listed nodes among the connected nodes:\n1) in the following retries, they weren't seen as listed nodes anymore, thus the light connect wasn't used\n2) among the connected nodes some were \"bad\" duplicates as they are already there and don't contain all needed info for each node. This was causing serialization problems for instance given that the node version was missing on the `DiscoveryNode` object." } ], "files": [ { "diff": "@@ -456,7 +456,7 @@ public void handleException(TransportException e) {\n return;\n }\n \n- HashSet<DiscoveryNode> newNodes = new HashSet<>(listedNodes);\n+ HashSet<DiscoveryNode> newNodes = new HashSet<>();\n HashSet<DiscoveryNode> newFilteredNodes = new HashSet<>();\n for (Map.Entry<DiscoveryNode, ClusterStateResponse> entry : clusterStateResponses.entrySet()) {\n if (!ignoreClusterName && !clusterName.equals(entry.getValue().getClusterName())) {", "filename": "src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java", "status": "modified" } ] }
{ "body": "If a value count agg is used for the sort column of a terms aggregation an exception is thrown stating:\n\n```\nReduceSearchPhaseException[Failed to execute phase [query], [reduce] ]; nested: ElasticsearchIllegalArgumentException[Invalid order path [grades_count]. Missing value key in [grades_count] which refers to a multi-value metric aggregation]; \n```\n\nMarvel commands to reproduce error: https://gist.github.com/colings86/4dfcb7de6c474ae69c32\n\nSetting order to:\n\n```\n\"order\": {\n \"grades_count\": \"desc\"\n}\n```\n\nproduces the following error:\n\n```\n{\n \"error\": \"ReduceSearchPhaseException[Failed to execute phase [fetch], [reduce] ]; nested: ClassCastException[org.elasticsearch.search.aggregations.metrics.valuecount.InternalValueCount cannot be cast to org.elasticsearch.search.aggregations.metrics.InternalNumericMetricsAggregation$MultiValue]; \",\n \"status\": 503\n}\n```\n", "comments": [], "number": 7050, "title": "Aggregations: Value Count Agg cannot be used for sort order" }
{ "body": "Closes #7050\n", "number": 7051, "review_comments": [], "title": "Fixed value count so it can be used in terms order" }
{ "commits": [ { "message": "Aggregations: fixed value count so it can be used in terms order\n\nCloses #7050" } ], "files": [ { "diff": "@@ -30,7 +30,7 @@\n /**\n * An internal implementation of {@link ValueCount}.\n */\n-public class InternalValueCount extends InternalNumericMetricsAggregation implements ValueCount {\n+public class InternalValueCount extends InternalNumericMetricsAggregation.SingleValue implements ValueCount {\n \n public static final Type TYPE = new Type(\"value_count\", \"vcount\");\n \n@@ -61,6 +61,11 @@ public long getValue() {\n return value;\n }\n \n+ @Override\n+ public double value() {\n+ return value;\n+ }\n+\n @Override\n public Type type() {\n return TYPE;", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/InternalValueCount.java", "status": "modified" } ] }
{ "body": "I'm unsure if this is a bug or something that wasn't intended to work. In the example below, I'm trying to find the 'foo' object that has the nested 'bar' object named 'bar0'. I then want to know how many nested 'baz' objects the 'foo' contains. After using reverse_nested I can use the first-level fields of 'foo0', but not the nested fields.\n\nUsing my actual data this sometimes did work, which is why I thought there might be a bug here. So for example two root objects would contain a certain nested object ('bar'), but only one was returning a proper count of another nested object (baz). With my example I wasn't able to recreate this.\n\n```\nDELETE /_all\n\nPUT /foos\n\nPUT /foos/foo/_mapping\n{\n \"foo\": {\n \"properties\": {\n \"bar\": {\n \"type\": \"nested\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n }\n }\n },\n \"baz\": {\n \"type\": \"nested\",\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n }\n }\n },\n \"name\": {\n \"type\": \"string\"\n }\n }\n }\n}\n\nPUT /foos/foo/0\n{\n \"bar\": [\n {\n \"name\": \"bar0\"\n },\n {\n \"name\": \"bar1\"\n }\n ],\n \"baz\": [\n {\n \"name\": \"baz0\"\n },\n {\n \"name\": \"baz1\"\n }\n ],\n \"name\": \"foo0\"\n}\nPUT /foos/foo/1\n{\n \"bar\": [\n {\n \"name\": \"bar2\"\n },\n {\n \"name\": \"bar3\"\n }\n ],\n \"baz\": [\n {\n \"name\": \"baz2\"\n },\n {\n \"name\": \"baz3\"\n }\n ],\n \"name\": \"foo1\"\n}\n\n#this should give a count of 2 under baz, but it's 0 instead\n\nPOST /foos/foo/_search?pretty\n{\n \"size\": 0,\n \"aggs\": {\n \"bar\": {\n \"nested\": {\n \"path\": \"bar\"\n },\n \"aggs\": {\n \"bar_filter\": {\n \"filter\": {\n \"bool\": {\n \"must\": {\n \"term\": {\n \"bar.name\": \"bar0\"\n }\n }\n }\n },\n \"aggs\": {\n \"foo\": {\n \"reverse_nested\": {},\n \"aggs\": {\n \"name\": {\n \"terms\": {\n \"field\": \"name\"\n },\n \"aggs\": {\n \"baz\": {\n \"nested\": {\n \"path\": \"baz\"\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n}\n\n# this works and give the right number of baz\nPOST /foos/foo/_search?pretty\n{\n \"size\": 0,\n \"aggs\": {\n \"foo_filter\": {\n \"filter\": {\n \"bool\": {\n \"must\": {\n \"term\": {\n \"name\": \"foo0\"\n }\n }\n }\n },\n \"aggs\": {\n \"baz\": {\n \"nested\": {\n \"path\": \"baz\"\n }\n }\n }\n }\n }\n}\n```\n", "comments": [ { "body": "@martijnvg I've confirmed this behaviour - looks like a bug to me.\n", "created_at": "2014-07-24T11:57:29Z" }, { "body": "@lmeck @clintongormley This is indeed a bug. I opened #7048 for this.\n", "created_at": "2014-07-27T19:05:32Z" } ], "number": 6994, "title": "Within a sub-aggregation of a reverse_nested aggregation, cannot filter on nested fields" }
{ "body": "PR for #6994\n", "number": 7048, "review_comments": [], "title": "The `nested` aggregator should also resolve and use the parentFilter of the closest `reverse_nested` aggregator." }
{ "commits": [ { "message": "The `nested` aggregator should also resolve and use the parentFilter of the closest `reverse_nested` aggregator.\n\nCloses #6994\nCloses #7048" } ], "files": [ { "diff": "@@ -71,16 +71,13 @@ public NestedAggregator(String name, AggregatorFactories factories, String neste\n @Override\n public void setNextReader(AtomicReaderContext reader) {\n if (parentFilter == null) {\n- NestedAggregator closestNestedAggregator = findClosestNestedAggregator(parentAggregator);\n- final Filter parentFilterNotCached;\n- if (closestNestedAggregator == null) {\n+ // The aggs are instantiated in reverse, first the most inner nested aggs and lastly the top level aggs\n+ // So at the time a nested 'nested' aggs is parsed its closest parent nested aggs hasn't been constructed.\n+ // So the trick to set at the last moment just before needed and we can use its child filter as the\n+ // parent filter.\n+ Filter parentFilterNotCached = findClosestNestedPath(parentAggregator);\n+ if (parentFilterNotCached == null) {\n parentFilterNotCached = NonNestedDocsFilter.INSTANCE;\n- } else {\n- // The aggs are instantiated in reverse, first the most inner nested aggs and lastly the top level aggs\n- // So at the time a nested 'nested' aggs is parsed its closest parent nested aggs hasn't been constructed.\n- // So the trick to set at the last moment just before needed and we can use its child filter as the\n- // parent filter.\n- parentFilterNotCached = closestNestedAggregator.childFilter;\n }\n parentFilter = SearchContext.current().filterCache().cache(parentFilterNotCached);\n // if the filter cache is disabled, we still need to produce bit sets\n@@ -103,10 +100,8 @@ public void setNextReader(AtomicReaderContext reader) {\n \n @Override\n public void collect(int parentDoc, long bucketOrd) throws IOException {\n-\n // here we translate the parent doc to a list of its nested docs, and then call super.collect for evey one of them\n // so they'll be collected\n-\n if (parentDoc == 0 || parentDocs == null) {\n return;\n }\n@@ -135,10 +130,12 @@ public String getNestedPath() {\n return nestedPath;\n }\n \n- static NestedAggregator findClosestNestedAggregator(Aggregator parent) {\n+ private static Filter findClosestNestedPath(Aggregator parent) {\n for (; parent != null; parent = parent.parent()) {\n if (parent instanceof NestedAggregator) {\n- return (NestedAggregator) parent;\n+ return ((NestedAggregator) parent).childFilter;\n+ } else if (parent instanceof ReverseNestedAggregator) {\n+ return ((ReverseNestedAggregator) parent).getParentFilter();\n }\n }\n return null;", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java", "status": "modified" }, { "diff": "@@ -38,8 +38,6 @@\n \n import java.io.IOException;\n \n-import static org.elasticsearch.search.aggregations.bucket.nested.NestedAggregator.findClosestNestedAggregator;\n-\n /**\n *\n */\n@@ -128,6 +126,14 @@ private void innerCollect(int parentDoc, long bucketOrd) throws IOException {\n collectBucket(parentDoc, bucketOrd);\n }\n \n+ private static NestedAggregator findClosestNestedAggregator(Aggregator parent) {\n+ for (; parent != null; parent = parent.parent()) {\n+ if (parent instanceof NestedAggregator) {\n+ return (NestedAggregator) parent;\n+ }\n+ }\n+ return null;\n+ }\n \n @Override\n public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n@@ -139,6 +145,10 @@ public InternalAggregation buildEmptyAggregation() {\n return new InternalReverseNested(name, 0, 
buildEmptySubAggregations());\n }\n \n+ Filter getParentFilter() {\n+ return parentFilter;\n+ }\n+\n @Override\n protected void doClose() {\n Releasables.close(bucketOrdToLastCollectedParentDocRecycler);", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java", "status": "modified" }, { "diff": "@@ -129,6 +129,29 @@ public void simple_reverseNestedToRoot() throws Exception {\n verifyResults(response);\n }\n \n+ @Test\n+ public void simple_nested1ToRootToNested2() throws Exception {\n+ SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type2\")\n+ .addAggregation(nested(\"nested1\").path(\"nested1\")\n+ .subAggregation(\n+ reverseNested(\"nested1_to_root\")\n+ .subAggregation(nested(\"root_to_nested2\").path(\"nested1.nested2\"))\n+ )\n+ )\n+ .get();\n+\n+ assertSearchResponse(response);\n+ Nested nested = response.getAggregations().get(\"nested1\");\n+ assertThat(nested.getName(), equalTo(\"nested1\"));\n+ assertThat(nested.getDocCount(), equalTo(9l));\n+ ReverseNested reverseNested = nested.getAggregations().get(\"nested1_to_root\");\n+ assertThat(reverseNested.getName(), equalTo(\"nested1_to_root\"));\n+ assertThat(reverseNested.getDocCount(), equalTo(9l));\n+ nested = reverseNested.getAggregations().get(\"root_to_nested2\");\n+ assertThat(nested.getName(), equalTo(\"root_to_nested2\"));\n+ assertThat(nested.getDocCount(), equalTo(25l));\n+ }\n+\n @Test\n public void simple_reverseNestedToNested1() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\")", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/ReverseNestedTests.java", "status": "modified" } ] }
{ "body": "`RestFilter`s allow to configure their execution order through the `order` method. Their javadocs say:\n\n> Execution is done from lowest value to highest.\n\nThe filters are ordered and executed the opposite way though, from highest to lowest.\n", "comments": [ { "body": "This issue is marked as breaking as plugins that register multiple REST filters might rely on their wrong ordering, quite a rare usecase though I believe.\n", "created_at": "2014-07-28T15:16:26Z" } ], "number": 7019, "title": "Rest filters execution order doesn't reflect javadocs" }
{ "body": "Fix filters ordering which seems to be the opposite to what it should be (#7019). Added missing tests for rest filters.\n\nSolved concurrency issue in both rest filter chain and transport action filter chain (#7021).\n\nCloses #7019\nCloses #7021\n", "number": 7023, "review_comments": [ { "body": "can this be final?\n", "created_at": "2014-07-25T17:32:20Z" }, { "body": "can this be final?\n", "created_at": "2014-07-25T17:32:38Z" }, { "body": "sure, will change that!\n", "created_at": "2014-07-28T08:21:58Z" }, { "body": "yep\n", "created_at": "2014-07-28T08:22:04Z" } ], "title": "Fixed filters execution order and fix potential concurrency issue in filter chains" }
{ "commits": [ { "message": "Internal: use AtomicInteger instead of volatile int for the current action filter position\n\nAlso improved filter chain tests to not rely on execution time, and made filter chain tests look more similar to what happens in reality by removing multiple threads creation in testTooManyContinueProcessing (something we don't support anyway, makes little sense to test it).\n\nCloses #7021" }, { "message": "Rest: fixed filters execution order to be from lowest to highest rather than the other way around\n\nCloses #7019" } ], "files": [ { "diff": "@@ -27,6 +27,8 @@\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.threadpool.ThreadPool;\n \n+import java.util.concurrent.atomic.AtomicInteger;\n+\n import static org.elasticsearch.action.support.PlainActionFuture.newFuture;\n \n /**\n@@ -146,12 +148,12 @@ public void run() {\n \n private class TransportActionFilterChain implements ActionFilterChain {\n \n- private volatile int index = 0;\n+ private final AtomicInteger index = new AtomicInteger();\n \n @SuppressWarnings(\"unchecked\")\n @Override\n public void continueProcessing(String action, ActionRequest actionRequest, ActionListener actionListener) {\n- int i = index++;\n+ int i = index.getAndIncrement();\n try {\n if (i < filters.length) {\n filters[i].process(action, actionRequest, actionListener, this);", "filename": "src/main/java/org/elasticsearch/action/support/TransportAction.java", "status": "modified" }, { "diff": "@@ -33,10 +33,9 @@\n import java.io.IOException;\n import java.util.Arrays;\n import java.util.Comparator;\n+import java.util.concurrent.atomic.AtomicInteger;\n \n-import static org.elasticsearch.rest.RestStatus.BAD_REQUEST;\n-import static org.elasticsearch.rest.RestStatus.OK;\n-import static org.elasticsearch.rest.RestStatus.FORBIDDEN;\n+import static org.elasticsearch.rest.RestStatus.*;\n \n /**\n *\n@@ -87,7 +86,7 @@ public synchronized void registerFilter(RestFilter preProcessor) {\n Arrays.sort(copy, new Comparator<RestFilter>() {\n @Override\n public int compare(RestFilter o1, RestFilter o2) {\n- return o2.order() - o1.order();\n+ return Integer.compare(o1.order(), o2.order());\n }\n });\n filters = copy;\n@@ -216,7 +215,7 @@ class ControllerFilterChain implements RestFilterChain {\n \n private final RestFilter executionFilter;\n \n- private volatile int index;\n+ private final AtomicInteger index = new AtomicInteger();\n \n ControllerFilterChain(RestFilter executionFilter) {\n this.executionFilter = executionFilter;\n@@ -225,8 +224,7 @@ class ControllerFilterChain implements RestFilterChain {\n @Override\n public void continueProcessing(RestRequest request, RestChannel channel) {\n try {\n- int loc = index;\n- index++;\n+ int loc = index.getAndIncrement();\n if (loc > filters.length) {\n throw new ElasticsearchIllegalStateException(\"filter continueProcessing was called more than expected\");\n } else if (loc == filters.length) {", "filename": "src/main/java/org/elasticsearch/rest/RestController.java", "status": "modified" }, { "diff": "@@ -100,7 +100,7 @@ public int compare(ActionFilter o1, ActionFilter o2) {\n Collections.sort(testFiltersByLastExecution, new Comparator<TestFilter>() {\n @Override\n public int compare(TestFilter o1, TestFilter o2) {\n- return Long.compare(o1.lastExecution, o2.lastExecution);\n+ return Integer.compare(o1.executionToken, o2.executionToken);\n }\n });\n \n@@ -131,12 +131,7 @@ public void testTooManyContinueProcessing() throws ExecutionException, 
Interrupt\n @Override\n public void execute(final String action, final ActionRequest actionRequest, final ActionListener actionListener, final ActionFilterChain actionFilterChain) {\n for (int i = 0; i <= additionalContinueCount; i++) {\n- new Thread() {\n- @Override\n- public void run() {\n- actionFilterChain.continueProcessing(action, actionRequest, actionListener);\n- }\n- }.start();\n+ actionFilterChain.continueProcessing(action, actionRequest, actionListener);\n }\n }\n });\n@@ -185,13 +180,15 @@ public void onFailure(Throwable e) {\n }\n }\n \n- private static class TestFilter implements ActionFilter {\n+ private final AtomicInteger counter = new AtomicInteger();\n+\n+ private class TestFilter implements ActionFilter {\n private final int order;\n private final Callback callback;\n \n AtomicInteger runs = new AtomicInteger();\n volatile String lastActionName;\n- volatile long lastExecution = Long.MAX_VALUE; //the filters that don't run will go last in the sorted list\n+ volatile int executionToken = Integer.MAX_VALUE; //the filters that don't run will go last in the sorted list\n \n TestFilter(int order, Callback callback) {\n this.order = order;\n@@ -203,7 +200,7 @@ private static class TestFilter implements ActionFilter {\n public void process(String action, ActionRequest actionRequest, ActionListener actionListener, ActionFilterChain actionFilterChain) {\n this.runs.incrementAndGet();\n this.lastActionName = action;\n- this.lastExecution = System.nanoTime();\n+ this.executionToken = counter.incrementAndGet();\n this.callback.execute(action, actionRequest, actionListener, actionFilterChain);\n }\n ", "filename": "src/test/java/org/elasticsearch/action/support/TransportActionFilterChainTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,98 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.rest;\n+\n+import org.elasticsearch.common.bytes.BytesReference;\n+\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+class FakeRestRequest extends RestRequest {\n+\n+ private final Map<String, String> headers;\n+\n+ FakeRestRequest() {\n+ this(new HashMap<String, String>());\n+ }\n+\n+ FakeRestRequest(Map<String, String> headers) {\n+ this.headers = headers;\n+ }\n+\n+ @Override\n+ public Method method() {\n+ return Method.GET;\n+ }\n+\n+ @Override\n+ public String uri() {\n+ return \"/\";\n+ }\n+\n+ @Override\n+ public String rawPath() {\n+ return \"/\";\n+ }\n+\n+ @Override\n+ public boolean hasContent() {\n+ return false;\n+ }\n+\n+ @Override\n+ public boolean contentUnsafe() {\n+ return false;\n+ }\n+\n+ @Override\n+ public BytesReference content() {\n+ return null;\n+ }\n+\n+ @Override\n+ public String header(String name) {\n+ return headers.get(name);\n+ }\n+\n+ @Override\n+ public Iterable<Map.Entry<String, String>> headers() {\n+ return headers.entrySet();\n+ }\n+\n+ @Override\n+ public boolean hasParam(String key) {\n+ return false;\n+ }\n+\n+ @Override\n+ public String param(String key) {\n+ return null;\n+ }\n+\n+ @Override\n+ public String param(String key, String defaultValue) {\n+ return null;\n+ }\n+\n+ @Override\n+ public Map<String, String> params() {\n+ return null;\n+ }\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/rest/FakeRestRequest.java", "status": "added" }, { "diff": "@@ -35,7 +35,6 @@\n import org.elasticsearch.client.support.AbstractClient;\n import org.elasticsearch.client.support.AbstractClusterAdminClient;\n import org.elasticsearch.client.support.AbstractIndicesAdminClient;\n-import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -332,75 +331,6 @@ private static void assertHeaders(ActionRequest<?> request, Map<String, String>\n }\n }\n \n- private static class FakeRestRequest extends RestRequest {\n-\n- private final Map<String, String> headers;\n-\n- private FakeRestRequest(Map<String, String> headers) {\n- this.headers = headers;\n- }\n-\n- @Override\n- public Method method() {\n- return null;\n- }\n-\n- @Override\n- public String uri() {\n- return null;\n- }\n-\n- @Override\n- public String rawPath() {\n- return null;\n- }\n-\n- @Override\n- public boolean hasContent() {\n- return false;\n- }\n-\n- @Override\n- public boolean contentUnsafe() {\n- return false;\n- }\n-\n- @Override\n- public BytesReference content() {\n- return null;\n- }\n-\n- @Override\n- public String header(String name) {\n- return headers.get(name);\n- }\n-\n- @Override\n- public Iterable<Map.Entry<String, String>> headers() {\n- return headers.entrySet();\n- }\n-\n- @Override\n- public boolean hasParam(String key) {\n- return false;\n- }\n-\n- @Override\n- public String param(String key) {\n- return null;\n- }\n-\n- @Override\n- public String param(String key, String defaultValue) {\n- return null;\n- }\n-\n- @Override\n- public Map<String, String> params() {\n- return null;\n- }\n- }\n-\n private static class NoOpClient extends AbstractClient implements AdminClient {\n \n @Override", "filename": "src/test/java/org/elasticsearch/rest/HeadersCopyClientTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,270 @@\n+/*\n+ * Licensed to 
Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.rest;\n+\n+import com.google.common.collect.Lists;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.*;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+import static org.hamcrest.CoreMatchers.equalTo;\n+\n+public class RestFilterChainTests extends ElasticsearchTestCase {\n+\n+ @Test\n+ public void testRestFilters() throws InterruptedException {\n+\n+ RestController restController = new RestController(ImmutableSettings.EMPTY);\n+\n+ int numFilters = randomInt(10);\n+ Set<Integer> orders = new HashSet<>(numFilters);\n+ while (orders.size() < numFilters) {\n+ orders.add(randomInt(10));\n+ }\n+\n+ List<RestFilter> filters = new ArrayList<>();\n+ for (Integer order : orders) {\n+ TestFilter testFilter = new TestFilter(order, randomFrom(Operation.values()));\n+ filters.add(testFilter);\n+ restController.registerFilter(testFilter);\n+ }\n+\n+ ArrayList<RestFilter> restFiltersByOrder = Lists.newArrayList(filters);\n+ Collections.sort(restFiltersByOrder, new Comparator<RestFilter>() {\n+ @Override\n+ public int compare(RestFilter o1, RestFilter o2) {\n+ return Integer.compare(o1.order(), o2.order());\n+ }\n+ });\n+\n+ List<RestFilter> expectedRestFilters = Lists.newArrayList();\n+ for (RestFilter filter : restFiltersByOrder) {\n+ TestFilter testFilter = (TestFilter) filter;\n+ expectedRestFilters.add(testFilter);\n+ if (!(testFilter.callback == Operation.CONTINUE_PROCESSING) ) {\n+ break;\n+ }\n+ }\n+\n+ restController.registerHandler(RestRequest.Method.GET, \"/\", new RestHandler() {\n+ @Override\n+ public void handleRequest(RestRequest request, RestChannel channel) throws Exception {\n+ channel.sendResponse(new TestResponse());\n+ }\n+ });\n+\n+ FakeRestRequest fakeRestRequest = new FakeRestRequest();\n+ FakeRestChannel fakeRestChannel = new FakeRestChannel(fakeRestRequest, 1);\n+ restController.dispatchRequest(fakeRestRequest, fakeRestChannel);\n+ assertThat(fakeRestChannel.await(), equalTo(true));\n+\n+\n+ List<TestFilter> testFiltersByLastExecution = Lists.newArrayList();\n+ for (RestFilter restFilter : filters) {\n+ testFiltersByLastExecution.add((TestFilter)restFilter);\n+ }\n+ Collections.sort(testFiltersByLastExecution, new Comparator<TestFilter>() {\n+ @Override\n+ public int compare(TestFilter o1, TestFilter o2) {\n+ 
return Long.compare(o1.executionToken, o2.executionToken);\n+ }\n+ });\n+\n+ ArrayList<TestFilter> finalTestFilters = Lists.newArrayList();\n+ for (RestFilter filter : testFiltersByLastExecution) {\n+ TestFilter testFilter = (TestFilter) filter;\n+ finalTestFilters.add(testFilter);\n+ if (!(testFilter.callback == Operation.CONTINUE_PROCESSING) ) {\n+ break;\n+ }\n+ }\n+\n+ assertThat(finalTestFilters.size(), equalTo(expectedRestFilters.size()));\n+\n+ for (int i = 0; i < finalTestFilters.size(); i++) {\n+ TestFilter testFilter = finalTestFilters.get(i);\n+ assertThat(testFilter, equalTo(expectedRestFilters.get(i)));\n+ assertThat(testFilter.runs.get(), equalTo(1));\n+ }\n+ }\n+\n+ @Test\n+ public void testTooManyContinueProcessing() throws InterruptedException {\n+\n+ final int additionalContinueCount = randomInt(10);\n+\n+ TestFilter testFilter = new TestFilter(randomInt(), new Callback() {\n+ @Override\n+ public void execute(final RestRequest request, final RestChannel channel, final RestFilterChain filterChain) throws Exception {\n+ for (int i = 0; i <= additionalContinueCount; i++) {\n+ filterChain.continueProcessing(request, channel);\n+ }\n+ }\n+ });\n+\n+ RestController restController = new RestController(ImmutableSettings.EMPTY);\n+ restController.registerFilter(testFilter);\n+\n+ restController.registerHandler(RestRequest.Method.GET, \"/\", new RestHandler() {\n+ @Override\n+ public void handleRequest(RestRequest request, RestChannel channel) throws Exception {\n+ channel.sendResponse(new TestResponse());\n+ }\n+ });\n+\n+ FakeRestRequest fakeRestRequest = new FakeRestRequest();\n+ FakeRestChannel fakeRestChannel = new FakeRestChannel(fakeRestRequest, additionalContinueCount + 1);\n+ restController.dispatchRequest(fakeRestRequest, fakeRestChannel);\n+ fakeRestChannel.await();\n+\n+ assertThat(testFilter.runs.get(), equalTo(1));\n+\n+ assertThat(fakeRestChannel.responses.get(), equalTo(1));\n+ assertThat(fakeRestChannel.errors.get(), equalTo(additionalContinueCount));\n+ }\n+\n+ private static class FakeRestChannel extends RestChannel {\n+\n+ private final CountDownLatch latch;\n+ AtomicInteger responses = new AtomicInteger();\n+ AtomicInteger errors = new AtomicInteger();\n+\n+ protected FakeRestChannel(RestRequest request, int responseCount) {\n+ super(request);\n+ this.latch = new CountDownLatch(responseCount);\n+ }\n+\n+ @Override\n+ public XContentBuilder newBuilder() throws IOException {\n+ return super.newBuilder();\n+ }\n+\n+ @Override\n+ public XContentBuilder newBuilder(@Nullable BytesReference autoDetectSource) throws IOException {\n+ return super.newBuilder(autoDetectSource);\n+ }\n+\n+ @Override\n+ protected BytesStreamOutput newBytesOutput() {\n+ return super.newBytesOutput();\n+ }\n+\n+ @Override\n+ public RestRequest request() {\n+ return super.request();\n+ }\n+\n+ @Override\n+ public void sendResponse(RestResponse response) {\n+ if (response.status() == RestStatus.OK) {\n+ responses.incrementAndGet();\n+ } else {\n+ errors.incrementAndGet();\n+ }\n+ latch.countDown();\n+ }\n+\n+ public boolean await() throws InterruptedException {\n+ return latch.await(10, TimeUnit.SECONDS);\n+ }\n+ }\n+\n+ private static enum Operation implements Callback {\n+ CONTINUE_PROCESSING {\n+ @Override\n+ public void execute(RestRequest request, RestChannel channel, RestFilterChain filterChain) throws Exception {\n+ filterChain.continueProcessing(request, channel);\n+ }\n+ },\n+ CHANNEL_RESPONSE {\n+ @Override\n+ public void execute(RestRequest request, RestChannel channel, 
RestFilterChain filterChain) throws Exception {\n+ channel.sendResponse(new TestResponse());\n+ }\n+ }\n+ }\n+\n+ private static interface Callback {\n+ void execute(RestRequest request, RestChannel channel, RestFilterChain filterChain) throws Exception;\n+ }\n+\n+ private final AtomicInteger counter = new AtomicInteger();\n+\n+ private class TestFilter extends RestFilter {\n+ private final int order;\n+ private final Callback callback;\n+ AtomicInteger runs = new AtomicInteger();\n+ volatile int executionToken = Integer.MAX_VALUE; //the filters that don't run will go last in the sorted list\n+\n+ TestFilter(int order, Callback callback) {\n+ this.order = order;\n+ this.callback = callback;\n+ }\n+\n+ @Override\n+ public void process(RestRequest request, RestChannel channel, RestFilterChain filterChain) throws Exception {\n+ this.runs.incrementAndGet();\n+ this.executionToken = counter.incrementAndGet();\n+ this.callback.execute(request, channel, filterChain);\n+ }\n+\n+ @Override\n+ public int order() {\n+ return order;\n+ }\n+\n+ @Override\n+ public String toString() {\n+ return \"[order:\" + order + \", executionToken:\" + executionToken + \"]\";\n+ }\n+ }\n+\n+ private static class TestResponse extends RestResponse {\n+ @Override\n+ public String contentType() {\n+ return null;\n+ }\n+\n+ @Override\n+ public boolean contentThreadSafe() {\n+ return false;\n+ }\n+\n+ @Override\n+ public BytesReference content() {\n+ return null;\n+ }\n+\n+ @Override\n+ public RestStatus status() {\n+ return RestStatus.OK;\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/rest/RestFilterChainTests.java", "status": "added" } ] }
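The two changes in this record are easy to misread as cosmetic, so a hedged illustration may help: the comparator rewrite both flips the sort to ascending order and replaces a subtraction that can overflow, and the `AtomicInteger` swap closes a lost-update race on the chain position. The sketch below is illustrative only; the class and variable names are not taken from the patch.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative demo only; names are not from the Elasticsearch patch.
public class FilterChainFixesDemo {
    public static void main(String[] args) {
        // 1) Why `o2.order() - o1.order()` is unsafe: the subtraction can overflow.
        int low = Integer.MIN_VALUE;
        int high = 1;
        System.out.println(high - low);                 // wraps to -2147483647, so `high` would sort before `low`
        System.out.println(Integer.compare(low, high)); // -1: overflow-free and ascending, as intended

        // 2) Why `volatile int index` is not enough: `index++` is a read-modify-write,
        //    so two threads calling continueProcessing concurrently could observe the same slot.
        //    AtomicInteger turns the increment into a single atomic step.
        AtomicInteger index = new AtomicInteger();
        int position = index.getAndIncrement();         // thread-safe equivalent of `int i = index++`
        System.out.println(position);                   // 0
    }
}
```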
{ "body": "The geo bounds aggregation does not surround its JSON output with the aggregation name. For 1.3.x this should be fixed directly. For 1.4+ we should move the serialisation of the aggregation name to InternalAggregation so an aggregation can only output JSON in its own scope.\n", "comments": [ { "body": "+1\n", "created_at": "2014-07-24T09:10:35Z" } ], "number": 7004, "title": "Aggregations: Geo bounds aggregation does not output aggregation name" }
{ "body": "Before this change each aggregation had to output an object field with its name and write its JSON inside that object. This allowed for badly behaved aggregations which could write JSON content in the root of the 'aggs' object. this change move the writing of the aggregation name to a level above the aggregation itself, ensuring that aggregations can only write within there own scope in the JSON output.\n\nCloses #7004\n", "number": 6985, "review_comments": [], "title": "Better JSON output scoping" }
{ "commits": [ { "message": "Aggregations: Better JSON output scoping\n\nBefore this change each aggregation had to output an object field with its name and write its JSON inside that object. This allowed for badly behaved aggregations which could write JSON content in the root of the 'aggs' object. this change move the writing of the aggregation name to a level above the aggregation itself, ensuring that aggregations can only write within there own scope in the JSON output.\n\nCloses #7004" } ], "files": [ { "diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.common.io.stream.Streamable;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentBuilderString;\n \n import java.io.IOException;\n@@ -150,6 +151,16 @@ protected static void writeSize(int size, StreamOutput out) throws IOException {\n }\n out.writeVInt(size);\n }\n+ \n+ @Override\n+ public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject(name);\n+ doXContentBody(builder, params);\n+ builder.endObject();\n+ return builder;\n+ }\n+\n+ public abstract XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException;\n \n /**\n * Common xcontent fields that are shared among addAggregation", "filename": "src/main/java/org/elasticsearch/search/aggregations/InternalAggregation.java", "status": "modified" }, { "diff": "@@ -95,10 +95,9 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(CommonFields.DOC_COUNT, docCount);\n aggregations.toXContentInternal(builder, params);\n- return builder.endObject();\n+ return builder;\n }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/InternalSingleBucketAggregation.java", "status": "modified" }, { "diff": "@@ -228,8 +228,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.startArray(CommonFields.BUCKETS);\n for (Bucket bucket : buckets) {\n builder.startObject();\n@@ -239,7 +238,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.endObject();\n }\n builder.endArray();\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/InternalGeoHashGrid.java", "status": "modified" }, { "diff": "@@ -391,8 +391,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n if (keyed) {\n builder.startObject(CommonFields.BUCKETS);\n } else {\n@@ -406,7 +405,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n } else {\n builder.endArray();\n }\n- return builder.endObject();\n+ return builder;\n }\n \n 
}", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/InternalHistogram.java", "status": "modified" }, { "diff": "@@ -263,8 +263,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n if (keyed) {\n builder.startObject(CommonFields.BUCKETS);\n } else {\n@@ -278,7 +277,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n } else {\n builder.endArray();\n }\n- return builder.endObject();\n+ return builder;\n }\n \n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/range/InternalRange.java", "status": "modified" }, { "diff": "@@ -27,7 +27,8 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.AggregationStreams;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n-import org.elasticsearch.search.aggregations.bucket.significant.heuristics.*;\n+import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic;\n+import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicStreams;\n import org.elasticsearch.search.aggregations.support.format.ValueFormatter;\n import org.elasticsearch.search.aggregations.support.format.ValueFormatterStreams;\n \n@@ -160,8 +161,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(\"doc_count\", subsetSize);\n builder.startArray(CommonFields.BUCKETS);\n for (InternalSignificantTerms.Bucket bucket : buckets) {\n@@ -177,7 +177,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.endObject();\n }\n builder.endArray();\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantLongTerms.java", "status": "modified" }, { "diff": "@@ -29,7 +29,8 @@\n import org.elasticsearch.search.aggregations.AggregationStreams;\n import org.elasticsearch.search.aggregations.InternalAggregation;\n import org.elasticsearch.search.aggregations.InternalAggregations;\n-import org.elasticsearch.search.aggregations.bucket.significant.heuristics.*;\n+import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristic;\n+import org.elasticsearch.search.aggregations.bucket.significant.heuristics.SignificanceHeuristicStreams;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -153,8 +154,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(\"doc_count\", subsetSize);\n builder.startArray(CommonFields.BUCKETS);\n for (InternalSignificantTerms.Bucket bucket : buckets) {\n@@ -171,7 +171,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n }\n 
builder.endArray();\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/significant/SignificantStringTerms.java", "status": "modified" }, { "diff": "@@ -99,10 +99,8 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.startArray(CommonFields.BUCKETS).endArray();\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/significant/UnmappedSignificantTerms.java", "status": "modified" }, { "diff": "@@ -145,8 +145,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.startArray(CommonFields.BUCKETS);\n for (InternalTerms.Bucket bucket : buckets) {\n builder.startObject();\n@@ -159,7 +158,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.endObject();\n }\n builder.endArray();\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/DoubleTerms.java", "status": "modified" }, { "diff": "@@ -146,8 +146,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.startArray(CommonFields.BUCKETS);\n for (InternalTerms.Bucket bucket : buckets) {\n builder.startObject();\n@@ -160,7 +159,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.endObject();\n }\n builder.endArray();\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/LongTerms.java", "status": "modified" }, { "diff": "@@ -142,8 +142,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.startArray(CommonFields.BUCKETS);\n for (InternalTerms.Bucket bucket : buckets) {\n builder.startObject();\n@@ -153,7 +152,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.endObject();\n }\n builder.endArray();\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/StringTerms.java", "status": "modified" }, { "diff": "@@ -98,10 +98,8 @@ protected InternalTerms newAggregation(String name, List<Bucket> buckets) {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.startArray(CommonFields.BUCKETS).endArray();\n- 
builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/UnmappedTerms.java", "status": "modified" }, { "diff": "@@ -101,13 +101,11 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(CommonFields.VALUE, count != 0 ? getValue() : null);\n if (count != 0 && valueFormatter != null) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(getValue()));\n }\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/avg/InternalAvg.java", "status": "modified" }, { "diff": "@@ -123,14 +123,12 @@ public void merge(InternalCardinality other) {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n final long cardinality = getValue();\n builder.field(CommonFields.VALUE, cardinality);\n if (valueFormatter != null) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(cardinality));\n }\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/InternalCardinality.java", "status": "modified" }, { "diff": "@@ -104,7 +104,7 @@ public InternalAggregation reduce(ReduceContext reduceContext) {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n GeoPoint topLeft = topLeft();\n GeoPoint bottomRight = bottomRight();\n if (topLeft != null) {\n@@ -117,8 +117,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(\"lat\", bottomRight.lat());\n builder.field(\"lon\", bottomRight.lon());\n builder.endObject();\n+ builder.endObject();\n }\n- return builder.endObject();\n+ return builder;\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/geobounds/InternalGeoBounds.java", "status": "modified" }, { "diff": "@@ -95,14 +95,12 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n boolean hasValue = !Double.isInfinite(max);\n builder.field(CommonFields.VALUE, hasValue ? 
max : null);\n if (hasValue && valueFormatter != null) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(max));\n }\n- builder.endObject();\n return builder;\n }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/max/InternalMax.java", "status": "modified" }, { "diff": "@@ -96,14 +96,12 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n boolean hasValue = !Double.isInfinite(min);\n builder.field(CommonFields.VALUE, hasValue ? min : null);\n if (hasValue && valueFormatter != null) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(min));\n }\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/min/InternalMin.java", "status": "modified" }, { "diff": "@@ -104,8 +104,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n if (keyed) {\n builder.startObject(CommonFields.VALUES);\n for(int i = 0; i < keys.length; ++i) {\n@@ -131,7 +130,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n builder.endArray();\n }\n- builder.endObject();\n return builder;\n }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/AbstractInternalPercentiles.java", "status": "modified" }, { "diff": "@@ -174,8 +174,7 @@ static class Fields {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(Fields.COUNT, count);\n builder.field(Fields.MIN, count != 0 ? min : null);\n builder.field(Fields.MAX, count != 0 ? 
max : null);\n@@ -188,7 +187,6 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(Fields.SUM_AS_STRING, valueFormatter.format(sum));\n }\n otherStatsToXCotent(builder, params);\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/stats/InternalStats.java", "status": "modified" }, { "diff": "@@ -95,13 +95,11 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n builder.field(CommonFields.VALUE, sum);\n if (valueFormatter != null) {\n builder.field(CommonFields.VALUE_AS_STRING, valueFormatter.format(sum));\n }\n- builder.endObject();\n return builder;\n }\n ", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/sum/InternalSum.java", "status": "modified" }, { "diff": "@@ -142,10 +142,8 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- builder.startObject(name);\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n searchHits.toXContent(builder, params);\n- builder.endObject();\n return builder;\n }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/InternalTopHits.java", "status": "modified" }, { "diff": "@@ -88,10 +88,8 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n \n @Override\n- public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n- return builder.startObject(name)\n- .field(CommonFields.VALUE, value)\n- .endObject();\n+ public XContentBuilder doXContentBody(XContentBuilder builder, Params params) throws IOException {\n+ return builder.field(CommonFields.VALUE, value);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/valuecount/InternalValueCount.java", "status": "modified" }, { "diff": "@@ -20,9 +20,7 @@\n package org.elasticsearch.search.aggregations.bucket;\n \n import org.elasticsearch.action.index.IndexRequestBuilder;\n-import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n-import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n@@ -33,7 +31,6 @@\n import org.elasticsearch.index.query.QueryParsingException;\n import org.elasticsearch.plugins.AbstractPlugin;\n import org.elasticsearch.search.aggregations.Aggregation;\n-import org.elasticsearch.search.aggregations.Aggregations;\n import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;\n import org.elasticsearch.search.aggregations.bucket.filter.InternalFilter;\n import org.elasticsearch.search.aggregations.bucket.significant.SignificantTerms;\n@@ -44,6 +41,8 @@\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n import org.elasticsearch.search.aggregations.bucket.terms.TermsBuilder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import 
org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n import org.junit.Test;\n \n import java.io.IOException;\n@@ -53,8 +52,6 @@\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n-import static org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n-import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.*;", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/SignificantTermsSignificanceScoreTests.java", "status": "modified" } ] }
{ "body": "posting certain valid geojson polygons results in the following exception:\n\norg.elasticsearch.index.mapper.MapperParsingException: failed to parse [geometry] at org.elasticsearch.index.mapper.geo.GeoShapeFieldMapper.parse(GeoShapeFieldMapper.java:249)\n...\n\ncurl -XDELETE 'http://localhost:9200/test'\n\ncurl -XPOST 'http://localhost:9200/test' -d '{\n \"mappings\":{\n \"test\":{\n \"properties\":{\n \"geometry\":{\n \"type\":\"geo_shape\",\n \"tree\":\"quadtree\",\n \"tree_levels\":14,\n \"distance_error_pct\":0.0\n }\n }\n }\n }\n}'\n\ncurl -XPOST 'http://localhost:9200/test/test/1' -d '{\n \"geometry\":{\n \"type\":\"Polygon\",\n \"coordinates\":[\n [[-85.0018514,37.1311314],\n [-85.0016645,37.1315293],\n [-85.0016246,37.1317069],\n [-85.0016526,37.1318183],\n [-85.0017119,37.1319196],\n [-85.0019371,37.1321182],\n [-85.0019972,37.1322115],\n [-85.0019942,37.1323234],\n [-85.0019543,37.1324336],\n [-85.001906,37.1324985],\n [-85.001834,37.1325497],\n [-85.0016965,37.1325907],\n [-85.0016011,37.1325873],\n [-85.0014816,37.1325353],\n [-85.0011755,37.1323509],\n [-85.000955,37.1322802],\n [-85.0006241,37.1322529],\n [-85.0000002,37.1322307],\n [-84.9994,37.1323001],\n [-84.999109,37.1322864],\n [-84.998934,37.1322415],\n [-84.9988639,37.1321888],\n [-84.9987841,37.1320944],\n [-84.9987208,37.131954],\n [-84.998736,37.1316611],\n [-84.9988091,37.131334],\n [-84.9989283,37.1311337],\n [-84.9991943,37.1309198],\n [-84.9993573,37.1308459],\n [-84.9995888,37.1307924],\n [-84.9998746,37.130806],\n [-85.0000002,37.1308358],\n [-85.0004984,37.1310658],\n [-85.0008008,37.1311625],\n [-85.0009461,37.1311684],\n [-85.0011373,37.1311515],\n [-85.0016455,37.1310491],\n [-85.0018514,37.1311314]],\n [[-85.0000002,37.1317672],\n [-85.0001983,37.1317538],\n [-85.0003378,37.1317582],\n [-85.0004697,37.131792],\n [-85.0008048,37.1319439],\n [-85.0009342,37.1319838],\n [-85.0010184,37.1319463],\n [-85.0010618,37.13184],\n [-85.0010057,37.1315102],\n [-85.000977,37.1314403],\n [-85.0009182,37.1313793],\n [-85.0005366,37.1312209],\n [-85.000224,37.1311466],\n [-85.000087,37.1311356],\n [-85.0000002,37.1311433],\n [-84.9995021,37.1312336],\n [-84.9993308,37.1312859],\n [-84.9992567,37.1313252],\n [-84.9991868,37.1314277],\n [-84.9991593,37.1315381],\n [-84.9991841,37.1316527],\n [-84.9992329,37.1317117],\n [-84.9993527,37.1317788],\n [-84.9994931,37.1318061],\n [-84.9996815,37.1317979],\n [-85.0000002,37.1317672]]]\n }\n}'\n\nExpected:\n {\"ok\":true,\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_version\":1}\nActual:\n {\"error\":\"MapperParsingException[failed to parse [geometry]]; nested: ArrayIndexOutOfBoundsException[-1]; \",\"status\":400}\n\nThis is an issue with es-1.1.0. 
The same requests execute successfully against es-0.2.4.\n\nIt is possible to view and validate the data in qgis.\n\n![screen shot 2014-04-10 at 5 01 56 pm](https://cloud.githubusercontent.com/assets/6935249/2675061/cc6cfc7a-c10d-11e3-9829-7c80f8075fe8.png)\n", "comments": [ { "body": "Checked additional versions of elastic search:\nelasticsearch-0.20.4 PASSED\nelasticsearch-0.90.13 PASSED\nelasticsearch-1.0.0.RC1 FAILED\nelasticsearch-1.0.2 FAILED\nelasticsearch-1.1.0 FAILED\n", "created_at": "2014-04-11T18:02:27Z" }, { "body": "Here is a small test case which triggers the issue in v1.1.0\n\nimport org.elasticsearch.common.geo.builders.ShapeBuilder;\nimport org.elasticsearch.common.xcontent.XContentParser;\nimport org.elasticsearch.common.xcontent.json.JsonXContent;\n\npublic class Test {\n public static void main(String[] args) throws Exception {\n\n```\n String geoJson = \"{ \\\"type\\\": \\\"Polygon\\\",\\\"coordinates\\\": [[[-85.0018514,37.1311314],[-85.0016645,37.1315293],[-85.0016246,37.1317069],[-85.0016526,37.1318183],[-85.0017119,37.1319196],[-85.0019371,37.1321182],[-85.0019972,37.1322115],[-85.0019942,37.1323234],[-85.0019543,37.1324336],[-85.001906,37.1324985],[-85.001834,37.1325497],[-85.0016965,37.1325907],[-85.0016011,37.1325873],[-85.0014816,37.1325353],[-85.0011755,37.1323509],[-85.000955,37.1322802],[-85.0006241,37.1322529],[-85.0000002,37.1322307],[-84.9994,37.1323001],[-84.999109,37.1322864],[-84.998934,37.1322415],[-84.9988639,37.1321888],[-84.9987841,37.1320944],[-84.9987208,37.131954],[-84.998736,37.1316611],[-84.9988091,37.131334],[-84.9989283,37.1311337],[-84.9991943,37.1309198],[-84.9993573,37.1308459],[-84.9995888,37.1307924],[-84.9998746,37.130806],[-85.0000002,37.1308358],[-85.0004984,37.1310658],[-85.0008008,37.1311625],[-85.0009461,37.1311684],[-85.0011373,37.1311515],[-85.0016455,37.1310491],[-85.0018514,37.1311314]],[[-85.0000002,37.1317672],[-85.0001983,37.1317538],[-85.0003378,37.1317582],[-85.0004697,37.131792],[-85.0008048,37.1319439],[-85.0009342,37.1319838],[-85.0010184,37.1319463],[-85.0010618,37.13184],[-85.0010057,37.1315102],[-85.000977,37.1314403],[-85.0009182,37.1313793],[-85.0005366,37.1312209],[-85.000224,37.1311466],[-85.000087,37.1311356],[-85.0000002,37.1311433],[-84.9995021,37.1312336],[-84.9993308,37.1312859],[-84.9992567,37.1313252],[-84.9991868,37.1314277],[-84.9991593,37.1315381],[-84.9991841,37.1316527],[-84.9992329,37.1317117],[-84.9993527,37.1317788],[-84.9994931,37.1318061],[-84.9996815,37.1317979],[-85.0000002,37.1317672]]]}\";\n\n XContentParser parser = JsonXContent.jsonXContent.createParser(geoJson); \n parser.nextToken();\n ShapeBuilder.parse(parser).build();\n}\n```\n\n}\n\n// stack trace\nException in thread \"main\" java.lang.ArrayIndexOutOfBoundsException: -1\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.assign(BasePolygonBuilder.java:366)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.compose(BasePolygonBuilder.java:347)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.coordinates(BasePolygonBuilder.java:146)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.buildGeometry(BasePolygonBuilder.java:175)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.build(BasePolygonBuilder.java:151)\n", "created_at": "2014-04-11T21:58:08Z" }, { "body": "Hey.\n\nyou can also use the github geo json feature as well to visualize this, see https://gist.github.com/spinscale/9cc6ba24bff03cca2be5\n\nthere has been a huge geo refactoring going on between those affected 
versions, will check for a regression there.\n\nDo you have any other data where this happens with, or is it just this single polygon?\n\nThanks a lot for all your input!\n", "created_at": "2014-04-14T08:08:07Z" }, { "body": "I updated my geojson gist above and added a test with a simple polygon (rectangle with hole, which is a rectangle as well), which works... need to investigate\n", "created_at": "2014-04-14T08:49:34Z" }, { "body": "I have several thousand polygons that result in this error. I have not\npulled them all out of the logs yet, I will hopefully have time to do that\ntoday.\n\nOn Mon, Apr 14, 2014 at 1:08 AM, Alexander Reelsen <notifications@github.com\n\n> wrote:\n> \n> Hey.\n> \n> you can also use the github geo json feature as well to visualize this,\n> see https://gist.github.com/spinscale/9cc6ba24bff03cca2be5\n> \n> there has been a huge geo refactoring going on between those affected\n> versions, will check for a regression there.\n> \n> Do you have any other data where this happens with, or is it just this\n> single polygon?\n> \n> Thanks a lot for all your input!\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/5773#issuecomment-40342030\n> .\n", "created_at": "2014-04-14T14:38:49Z" }, { "body": "I'd highly appreciate it, if you could test with the PR referenced above... it solves this problem, but maybe you could check if I introduced side effects (one just came to mind, which i need to check)...\n", "created_at": "2014-04-14T14:49:52Z" }, { "body": "I have put up a file containing 15k+ polygons which had the same\nArrayIndexOutOfBoundsException against ES1.0+. It is available at\nhttps://github.com/marcuswr/elasticsearch-polygon-data.git\n\nAdditionally, the gist at https://gist.github.com/marcuswr/493406918e0a9edeb509 contains a set of polygons which still fail against the patched version (same IndexOutOfBoundsException)\n\nOn Mon, Apr 14, 2014 at 7:50 AM, Alexander Reelsen <notifications@github.com\n\n> wrote:\n> \n> I'd highly appreciate it, if you could test with the PR referenced\n> above... it solves this problem, but maybe you could check if I introduced\n> side effects (one just came to mind, which i need to check)...\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/5773#issuecomment-40374176\n> .\n", "created_at": "2014-04-14T20:03:46Z" }, { "body": "I have added another file to the repository, with the subset of polygons,\nwhich still failed to ingest, after the patch was applied.\nhttps://github.com/marcuswr/elasticsearch-polygon-data/blob/master/patch_errors.geojson\n\nOn Mon, Apr 14, 2014 at 1:03 PM, Marcus Richardson\nmrichardson@climate.comwrote:\n\n> I have put up a file containing 15k+ polygons which had the same\n> ArrayIndexOutOfBoundsException against ES1.0+. It is available at\n> https://github.com/marcuswr/elasticsearch-polygon-data.git\n> \n> Additionally, I have pulled down your fix locally (your branch of master,\n> and I used your patch against es 1.1). I am able to insert some polygons\n> but\n> not all (I have not tried all of them). I also cloned your gist to:\n> https://gist.github.com/marcuswr/493406918e0a9edeb509 The third\n> (test.geojson) renders correctly in the gist, however, does not work for me\n> against either patched version. 
The 4th file (test1.geojson) does work in\n> both versions.\n> \n> On Mon, Apr 14, 2014 at 7:50 AM, Alexander Reelsen <\n> notifications@github.com> wrote:\n> \n> > I'd highly appreciate it, if you could test with the PR referenced\n> > above... it solves this problem, but maybe you could check if I introduced\n> > side effects (one just came to mind, which i need to check)...\n> > \n> > —\n> > Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/5773#issuecomment-40374176\n> > .\n", "created_at": "2014-04-14T22:31:07Z" }, { "body": "thanks a lot for all the data, I will test with the other polygons as soon as possible (travelling a bit the next days, but will try to check ASAP).\n", "created_at": "2014-04-20T11:56:07Z" }, { "body": "@spinscale wondering if you had a chance to look at this? We are looking to do a major elastic search upgrade, but will not be able to without a fix. Please let me know if there is anything I can do to assist. \n", "created_at": "2014-04-29T22:42:11Z" }, { "body": "sorry, did not yet have the time to check out all the other polygons you supplied due to traveling\n", "created_at": "2014-04-30T14:40:23Z" }, { "body": "These polygons fail to ingest when the point in the hole (the first point in the LineString, or the leftmost point in the patched version https://github.com/elasticsearch/elasticsearch/pull/5796) has the same x coordinate (starting or ending) as 2 or more line segments of the shell.\nI have created another gist (https://gist.github.com/marcuswr/e0490b4f6e25b344e779) with simplified polygons which demonstrate this problem (hole_aligned.geojson, hole_aligned_simple.geojson). These will fail on the patched version. Changing the order of the coordinates in the hole so the leftmost coordinate is first and last (repeated), should result in it failing in both patched and non-patched). There are additional polygons shown which touch or cross the dateline (the dateline hole should be removed from these polygons -> convert to multi-polygon).\nThese failures only occur when fixdateline = true.\n\nAdditionally, could you tell me why there is special handling of the ear in ShapeBuilder.intersections?\n if (Double.compare(p1.x, dateline) == Double.compare(edges[i].next.next.coordinate.x, dateline)) {\n // Ignore the ear\n\nAlso Double.compare is not guaranteed to return -1, 0, 1 I'm not sure what the equality is testing for.\n", "created_at": "2014-05-08T23:50:20Z" }, { "body": "thanks a lot for testing and debugging, your comments make a lot of sense, I will close the PR\n", "created_at": "2014-05-09T08:42:43Z" }, { "body": "Hey @spinscale sorry to bug, but do you think you could glance at this PR and at least indicate if ES would move forward with it? If so, we (I work with Marcus) can proceed locally as the finer details of the PR are worked out.\n", "created_at": "2014-05-20T15:02:54Z" } ], "number": 5773, "title": "Geo: Valid complex polygons fail to parse" }
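The reporters checked the failing shapes in QGIS; on the Java side a similar sanity check can be done directly with JTS (which Elasticsearch bundles), bypassing `ShapeBuilder`. The sketch below is hypothetical and substitutes a small rectangle-with-hole for the real coordinates, so it demonstrates the validity check rather than the parse failure itself.

```java
import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.LinearRing;
import com.vividsolutions.jts.geom.Polygon;

// Hypothetical helper, not from the issue: checks polygon validity with JTS directly.
public class JtsValidityCheck {
    public static void main(String[] args) {
        GeometryFactory gf = new GeometryFactory();
        LinearRing shell = gf.createLinearRing(new Coordinate[] {
                new Coordinate(-85.002, 37.130), new Coordinate(-84.998, 37.130),
                new Coordinate(-84.998, 37.133), new Coordinate(-85.002, 37.133),
                new Coordinate(-85.002, 37.130) // rings must be closed (first point repeated)
        });
        LinearRing hole = gf.createLinearRing(new Coordinate[] {
                new Coordinate(-85.001, 37.131), new Coordinate(-84.999, 37.131),
                new Coordinate(-84.999, 37.132), new Coordinate(-85.001, 37.132),
                new Coordinate(-85.001, 37.131)
        });
        Polygon polygon = gf.createPolygon(shell, new LinearRing[] { hole });
        // Prints true for this well-formed polygon-with-hole; the reporters saw the same
        // "valid input" result for their real data in QGIS before ShapeBuilder rejected it.
        System.out.println("valid according to JTS: " + polygon.isValid());
    }
}
```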
{ "body": "The bug reproduces when the point under test for the placement of the hole of the polygon has an x coordinate which only intersects with the ends of edges in the main polygon. The previous code threw out these cases as not relevant but an intersect at 1.0 of the distance from the start to the end of an edge is just as valid as an intersect at any other point along the edge. The fix corrects this and adds a test.\n\nCloses #5773\n", "number": 6976, "review_comments": [], "title": "Fixes parse error with complex shapes" }
{ "commits": [ { "message": "Geo: Fixes parse error with complex shapes\n\nThe bug reproduces when the point under test for the placement of the hole of the polygon has an x coordinate which only intersects with the ends of edges in the main polygon. The previous code threw out these cases as not relevant but an intersect at 1.0 of the distance from the start to the end of an edge is just as valid as an intersect at any other point along the edge. The fix corrects this and adds a test.\n\nCloses #5773" } ], "files": [ { "diff": "@@ -19,8 +19,12 @@\n \n package org.elasticsearch.common.geo.builders;\n \n+import com.spatial4j.core.context.jts.JtsSpatialContext;\n+import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n+import com.vividsolutions.jts.geom.GeometryFactory;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.logging.ESLogger;\n@@ -32,10 +36,6 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n \n-import com.spatial4j.core.context.jts.JtsSpatialContext;\n-import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.Coordinate;\n-import com.vividsolutions.jts.geom.GeometryFactory;\n import java.io.IOException;\n import java.util.*;\n \n@@ -297,9 +297,6 @@ protected static int intersections(double dateline, Edge[] edges) {\n if (Double.compare(p1.x, dateline) == Double.compare(edges[i].next.next.coordinate.x, dateline)) {\n // Ignore the ear\n continue;\n- } else if (p2.x == dateline) {\n- // Ignore Linesegment on dateline\n- continue;\n }\n }\n edges[i].intersection(position);", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -30,8 +30,7 @@\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n-import static org.elasticsearch.test.hamcrest.ElasticsearchGeoAssertions.assertMultiLineString;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchGeoAssertions.assertMultiPolygon;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchGeoAssertions.*;\n /**\n * Tests for {@link ShapeBuilder}\n */\n@@ -234,4 +233,79 @@ public void testDateline() {\n \n assertMultiPolygon(shape);\n }\n+\n+ @Test\n+ public void testComplexShapeWithHole() {\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(-85.0018514,37.1311314)\n+ .point(-85.0016645,37.1315293)\n+ .point(-85.0016246,37.1317069)\n+ .point(-85.0016526,37.1318183)\n+ .point(-85.0017119,37.1319196)\n+ .point(-85.0019371,37.1321182)\n+ .point(-85.0019972,37.1322115)\n+ .point(-85.0019942,37.1323234)\n+ .point(-85.0019543,37.1324336)\n+ .point(-85.001906,37.1324985)\n+ .point(-85.001834,37.1325497)\n+ .point(-85.0016965,37.1325907)\n+ .point(-85.0016011,37.1325873)\n+ .point(-85.0014816,37.1325353)\n+ .point(-85.0011755,37.1323509)\n+ .point(-85.000955,37.1322802)\n+ .point(-85.0006241,37.1322529)\n+ .point(-85.0000002,37.1322307)\n+ .point(-84.9994,37.1323001)\n+ .point(-84.999109,37.1322864)\n+ .point(-84.998934,37.1322415)\n+ .point(-84.9988639,37.1321888)\n+ .point(-84.9987841,37.1320944)\n+ .point(-84.9987208,37.131954)\n+ .point(-84.998736,37.1316611)\n+ .point(-84.9988091,37.131334)\n+ .point(-84.9989283,37.1311337)\n+ .point(-84.9991943,37.1309198)\n+ .point(-84.9993573,37.1308459)\n+ 
.point(-84.9995888,37.1307924)\n+ .point(-84.9998746,37.130806)\n+ .point(-85.0000002,37.1308358)\n+ .point(-85.0004984,37.1310658)\n+ .point(-85.0008008,37.1311625)\n+ .point(-85.0009461,37.1311684)\n+ .point(-85.0011373,37.1311515)\n+ .point(-85.0016455,37.1310491)\n+ .point(-85.0018514,37.1311314);\n+\n+ builder.hole()\n+ .point(-85.0000002,37.1317672)\n+ .point(-85.0001983,37.1317538)\n+ .point(-85.0003378,37.1317582)\n+ .point(-85.0004697,37.131792)\n+ .point(-85.0008048,37.1319439)\n+ .point(-85.0009342,37.1319838)\n+ .point(-85.0010184,37.1319463)\n+ .point(-85.0010618,37.13184)\n+ .point(-85.0010057,37.1315102)\n+ .point(-85.000977,37.1314403)\n+ .point(-85.0009182,37.1313793)\n+ .point(-85.0005366,37.1312209)\n+ .point(-85.000224,37.1311466)\n+ .point(-85.000087,37.1311356)\n+ .point(-85.0000002,37.1311433)\n+ .point(-84.9995021,37.1312336)\n+ .point(-84.9993308,37.1312859)\n+ .point(-84.9992567,37.1313252)\n+ .point(-84.9991868,37.1314277)\n+ .point(-84.9991593,37.1315381)\n+ .point(-84.9991841,37.1316527)\n+ .point(-84.9992329,37.1317117)\n+ .point(-84.9993527,37.1317788)\n+ .point(-84.9994931,37.1318061)\n+ .point(-84.9996815,37.1317979)\n+ .point(-85.0000002,37.1317672);\n+\n+ Shape shape = builder.close().build();\n+\n+ assertPolygon(shape);\n+ }\n }", "filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java", "status": "modified" } ] }
{ "body": "I index a text field with type `token_count`. When indexing, this creates an additional field that holds the number of tokens in the text field.\nWhen a document is retrieved from the transaction log (because no flush happened yet), and I want to get the `token_count` of my text field, I would assume that the `token_count` field is simply not retrieved, because it does not exist yet. Instead I get a `NumberFormatException`.\n\nHere are the steps to reproduce:\n\n```\nDELETE testidx\n\nPUT testidx\n{\n \"settings\": {\n \"index.translog.disable_flush\": true,\n \"index.number_of_shards\": 1,\n \"refresh_interval\": \"1h\"\n },\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"text\": {\n \"fields\": {\n \"word_count\": {\n \"type\": \"token_count\",\n \"store\": \"yes\",\n \"analyzer\": \"standard\"\n }\n },\n \"type\": \"string\"\n }\n }\n }\n }\n}\n\nPUT testidx/doc/1\n{\n \"text\": \"some text\"\n}\n\n#ok, get document from translog\nGET testidx/doc/1?realtime=true\n#ok, get document from index but it is not there yet\nGET testidx/doc/1?realtime=false\n# try to get the document from translog but also field text.word_count which is not there yet: NumberFormatException\nGET testidx/doc/1?fields=text.word_count&realtime=true\n\n```\n", "comments": [ { "body": "Here is what happens:\n\nFor multi-fields, the parent field is returned instead of `null` if a sub-field is requested. For the example above, when getting `text.word_count`, `text` is retrieved from the source and returned.\nWe could prevent this easily like this: 7f522fbd9542\n\nHowever, the FastVectorHighlighter relies on this functionality to highlight on multi-fields (see [here](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceSimpleFragmentsBuilder.java#L55)), so this is not really a solution unless we want to prevent highlighting with the FastVectorHighlighter on multi-fields.\n\nThe other option is to simply catch the NumberFormatException and handle it like here: be999b1042\n", "created_at": "2014-07-04T16:12:50Z" }, { "body": "@brwe what is the status of this?\n", "created_at": "2014-07-09T10:30:41Z" }, { "body": "@s1monw Need to write more tests, did not get to it yet. Will continue Friday. \n", "created_at": "2014-07-09T11:56:07Z" }, { "body": "cool ok but it's going to be ready for 1.3 right?\n", "created_at": "2014-07-09T12:02:56Z" }, { "body": "depends on when the release is\n", "created_at": "2014-07-09T12:21:02Z" }, { "body": "A field of type `murmur3` actually has the same issue. In addition, if `murmur3` and `token_count` fields are not stored, GET will also return NumberFormatException after refresh, example below.\n\n```\nDELETE testidx\nPUT testidx\n{\n \"settings\": {\n \"index.translog.disable_flush\": true,\n \"index.number_of_shards\": 1,\n \"refresh_interval\": \"1h\"\n },\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"token_count\": {\n \"type\": \"token_count\",\n \"analyzer\": \"standard\"\n },\n \"murmur\": {\n \"type\": \"murmur3\"\n }\n }\n }\n }\n}\n\nPOST testidx/doc/1\n{\n \"murmur\": \"Some value that can be hashed\",\n \"token_count\": \"A text with five words.\"\n}\n\nGET testidx/doc/1?routing=2&fields=murmur,token_count\n\nPOST testidx/_refresh\n\nGET testidx/doc/1?routing=2&fields=murmur,token_count\n\n```\n", "created_at": "2014-07-18T19:07:00Z" }, { "body": "Following the discussion on pull request #6826 I checked all field mappers and tried to figure out what they should return upon GET. 
I will call a field \"generated\" if the content is only available after indexing. \nWe discussed that we either throw a meaningful exception if the field is generated or ignore the field silently if a (new) parameter is set with the GET request. `FieldMapper` should get a new method `isGenerated()` which indicates weather the field will be found in the source or not.\n\nHere is what I think we should do:\n\nFor some core field types (`integer, float, string,...`), the behavior (`isGenerated()` returns `true` or `false`) should be configurable. The reason is that a different mapper might use them and store generated data in them. The Mapper attachment plugin does that: Fields like author (`string`), content_type (`string`) etc. are only available after tika parsing. \n\nThere are currently four field types (detailed list below):\n1. Fields that should not be configurable, because they are always generated\n2. Fields that not be configurable because they are never generated\n3. Fields that should not be configurable because they are never stored\n4. Fields that should be configurable\n\nFor 1-3 we simply have to implement `isGenerated()` accordingly.\n\nTo make the fields configurable we could add a parameter `\"is_generated\"` to the mapping which steers the behavior.\n\nPro: would be easy to do and also allow different types in plugin to very easily use the feature.\n\nCon: This would allow users to set `\"is_generated\"` accidentally - fields that are accessible via source would then still cause an exception if requested via GET while the document is not yet indexed\n\nFor fields that are not configurable, the parameter `\"is_generated\"` could be ignored without warning like so many other parameters.\n\nList of types and their category:\n\nThere is core types, root types, geo an ip.\n\n#### Core types\n\nThese should be configurable:\n\n```\nIntegerFieldMapper.java\nShortFieldMapper.java\nBinaryFieldMapper.java \nDateFieldMapper.java \nLongFieldMapper.java \nStringFieldMapper.java\nBooleanFieldMapper.java \nDoubleFieldMapper.java \n```\n\nThe following two should not be configurable because they are always generated:\n\n```\nMurmur3FieldMapper.java \nTokenCountFieldMapper.java\n```\n\nThis should not be configurable because it is never stored:\n\n```\nCompletionFieldMapper.java\n```\n\n#### ip an geo\n\nShould be configurable:\n\n```\nGeoPointFieldMapper.java \nGeoShapeFieldMapper.java\nIpFieldMapper.java\n```\n\n#### root types\n\nNever generated and should not be configurable:\n\n```\nRoutingFieldMapper.java \nTimestampFieldMapper.java\nIdFieldMapper.java \nSizeFieldMapper.java \nTypeFieldMapper.java\nBoostFieldMapper.java \nIndexFieldMapper.java \nSourceFieldMapper.java \nParentFieldMapper.java \nTTLFieldMapper.java \nVersionFieldMapper.java\n```\n\nAlways generated and should not be configurable:\n\n```\nAllFieldMapper.java \nFieldNamesFieldMapper.java \n```\n\nThe following should not be configurable, because they are never stored:\n\n```\nAnalyzerMapper.java \nUidFieldMapper.java\n```\n", "created_at": "2014-07-18T19:22:02Z" }, { "body": "hmpf. while writing tests I figured there are actually more cases to consider. 
will update soon...\n", "created_at": "2014-07-19T14:13:01Z" }, { "body": "There are two numeric fields that are currently generated (`Murmur3FieldMapper.java` and `TokenCountFieldMapper.java`) and two string fields (`AllFieldMapper.java` and `FieldNamesFieldMapper.java` ).\n\nThese should only be returned with GET (`fields=...`) if set to `stored` and not retuned if not `stored` regardless of if source is enabled or not (this was not so, see example above). If refresh has not been called between indexing and GET then this should cause an Exception unless `ignore_errors_on_generated_fields=true` (working title) is set with the GET request.\nUntil now `_all` and `_field_names` where silently ignored and getting the numeric fields caused a `NumberFormatException`.\n\nI am now unsure if we should make the core types configurable. By configurable, I actually meant adding a parameter to the type mapping such as\n\n```\n{\n type: string,\n is_generated: true/false\n ...\n}\n```\n\nI'll make a pull request without that and then maybe we can discuss further. \n\nJust for completeness, below is a list of all ungenerated field types and how they behave with GET.\n\n---\n\n## Fields with fixed behavior:\n\nNever stored -> should never be returned via GET\n\n`CompletionFieldMapper` \n\nAlways stored -> should always be returned via GET\n\n```\nParentFieldMapper.java \nTTLFieldMapper.java \n```\n\nStored or source enabled -> always return via GET, else never return\n\n```\nBoostFieldMapper.java \n```\n\nStored (but independent of source) -> always return via GET, else never return\n\n```\nTimestampFieldMapper.java\nSizeFieldMapper.java \nRoutingFieldMapper.java \n```\n\n## Fields that might be configurable\n\n```\nIntegerFieldMapper.java\nShortFieldMapper.java\nBinaryFieldMapper.java \nDateFieldMapper.java \nLongFieldMapper.java \nStringFieldMapper.java\nBooleanFieldMapper.java \nDoubleFieldMapper.java \nGeoPointFieldMapper.java \nGeoShapeFieldMapper.java\nIpFieldMapper.java\n```\n\n## Special fields which can never be in the \"fields\" list returned by GET anyway\n\n```\nIdFieldMapper.java \nTypeFieldMapper.java\nIndexFieldMapper.java \nSourceFieldMapper.java \nVersionFieldMapper.java\nAnalyzerMapper.java \nUidFieldMapper.java\n```\n", "created_at": "2014-07-23T07:54:42Z" } ], "number": 6676, "title": "GET: Add parameter to GET for checking if generated fields can be retrieved" }
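The behaviour proposed in the last comment can be summarised as a small decision rule for generated fields. The sketch below is a plain-Java illustration of that rule (class and method names are invented for the example, and ordinary, non-generated fields are out of scope here); it is not the Elasticsearch implementation.

```
// Plain-Java illustration of the proposed handling of generated fields
// (token_count, murmur3, _all, _field_names) on GET. Names are invented;
// this is not the actual Elasticsearch code.
public final class GeneratedFieldGetRule {

    /** Signals that a generated field was requested before a refresh made it available. */
    public static class FieldNotYetGeneratedException extends RuntimeException {
        public FieldNotYetGeneratedException(String field) {
            super("Cannot access field [" + field + "] from the translog; "
                    + "it is only available after a refresh.");
        }
    }

    /**
     * Decides whether a generated field may be returned for a GET request.
     *
     * @return true to fetch the stored value, false to silently skip the field
     */
    public static boolean mayFetchGeneratedField(String field,
                                                 boolean stored,
                                                 boolean servedFromTranslog,
                                                 boolean ignoreErrorsOnGeneratedFields) {
        if (!stored) {
            return false;                          // never returned if not stored
        }
        if (!servedFromTranslog) {
            return true;                           // already indexed: stored value is readable
        }
        if (ignoreErrorsOnGeneratedFields) {
            return false;                          // user opted to skip instead of failing
        }
        throw new FieldNotYetGeneratedException(field);
    }

    public static void main(String[] args) {
        // stored, still only in the translog, no ignore flag -> meaningful exception
        try {
            mayFetchGeneratedField("text.word_count", true, true, false);
        } catch (FieldNotYetGeneratedException e) {
            System.out.println(e.getMessage());
        }
        // same request with ignore_errors_on_generated_fields=true -> field silently skipped
        System.out.println(mayFetchGeneratedField("text.word_count", true, true, true));
    }
}
```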
{ "body": "Fields of type `token_count`, `murmur3`, `_all` and `_field_names` are generated only when indexing.\nIf a GET requests accesses the transaction log (because no refresh\nbetween indexing and GET request) then these fields cannot be retrieved at all.\nBefore the behavior was so:\n\n`_all, _field_names`: The field was siletly ignored\n`murmur3, token_count`: `NumberFormatException` because GET tried to parse the values from the source.\n\nIn addition, if these fields were not stored, the same behavior occured if the fields were\nretrieved with GET after a `refresh()` because here also the source was used to get the fields.\n\nNow, GET accepts a parameter `ignore_errors_on_generated_fields` which has\nthe following effect:\n- Throw exception with meaningful error message explaining the problem if set to false (default)\n- Ignore the field if set to true\n- Always ignore the field if it was not set to stored\n\nThis changes the behavior for `_all` and `_field_names` as now an Exception is thrown if a user\ntries to GET them before a `refresh()`.\n\ncloses #6676\n", "number": 6973, "review_comments": [ { "body": "this should have a version check no?\n", "created_at": "2014-07-23T08:49:52Z" }, { "body": "this should also have a version check\n", "created_at": "2014-07-23T08:50:02Z" }, { "body": "version checks?\n", "created_at": "2014-07-23T08:50:20Z" }, { "body": "version checks?\n", "created_at": "2014-07-23T08:50:27Z" }, { "body": "version checks\n", "created_at": "2014-07-23T08:50:42Z" }, { "body": "version checks\n", "created_at": "2014-07-23T08:50:46Z" }, { "body": "can it be the last argument on the ctor to be consistent with the other ctor?\n", "created_at": "2014-07-23T08:51:50Z" }, { "body": "also here can we just append to the argument list and move it to the back\n", "created_at": "2014-07-23T08:52:21Z" }, { "body": "`s/x/fieldMapper`\n", "created_at": "2014-07-23T08:52:43Z" }, { "body": "also please change `s/x/fieldMappers/`\n", "created_at": "2014-07-23T08:53:40Z" }, { "body": "I know we are not very strict about it but maybe we can just put some javadocs on this?\n", "created_at": "2014-07-23T08:54:02Z" }, { "body": "maybe instead of passing a boolean to the abstract field mapper we can just return `false` from here and the exceptional mappers that implement that can just override it and return true? that way we don't break compat with others that implement mappers and the change is less intrusive?\n", "created_at": "2014-07-23T08:57:57Z" }, { "body": "added commit \"remove isGenerated flag from constructor...\"\n", "created_at": "2014-07-24T13:09:48Z" }, { "body": "indeed, forgot to run bwc tests which failed. added in commit \"add version check for new get parameter\"\n", "created_at": "2014-07-24T13:10:55Z" }, { "body": "done in commit \"renaming and reordering of parameters\"\n", "created_at": "2014-07-24T13:11:23Z" } ], "title": "Add parameter to GET API for checking if generated fields can be retrieved" }
{ "commits": [ { "message": "Add parameter to GET for checking if generated fields can be retrieved\n\nFields of type `token_count`, `murmur3`, `_all` and `_field_names` are generated only when indexing.\nIf a GET requests accesses the transaction log (because no refresh\nbetween indexing and GET request) then these fields cannot be retrieved at all.\nBefore the behavior was so:\n\n`_all, _field_names`: The field was siletly ignored\n`murmur3, token_count`: `NumberFormatException` because GET tried to parse the values from the source.\n\nIn addition, if these fields were not stored, the same behavior occured if the fields were\nretrieved with GET after a `refresh()` because here also the source was used to get the fields.\n\nNow, GET accepts a parameter `ignore_errors_on_generated_fields` which has\nthe following effect:\n- Throw exception with meaningful error message explaining the problem if set to false (default)\n- Ignore the field if set to true\n- Always ignore the field if it was not set to stored\n\nThis changes the behavior for `_all` and `_field_names` as now an Exception is thrown if a user\ntries to GET them before a `refresh()`.\n\ncloses #6676" }, { "message": "add version check for new get parameter" }, { "message": "renaming and reordering of parameters" }, { "message": "fix tests" }, { "message": "add javadoc" }, { "message": "remove isGenerated flag from constructor, instead overrige isGenerated() in generated fields" }, { "message": "add added[1.4.0]" } ], "files": [ { "diff": "@@ -124,6 +124,15 @@ Field values fetched from the document it self are always returned as an array.\n Also only leaf fields can be returned via the `field` option. So object fields can't be returned and such requests\n will fail.\n \n+[float]\n+[[generated-fields]]\n+=== Generated fields\n+added[1.4.0]\n+\n+If no refresh occurred between indexing and refresh, GET will access the transaction log to fetch the document. However, some fields are generated only when indexing. \n+If you try to access a field that is only generated when indexing, you will get an exception (default). You can choose to ignore field that are generated if the transaction log is accessed by setting `ignore_errors_on_generated_fields=true`.\n+\n+\n [float]\n [[_source]]\n === Getting the _source directly\n@@ -223,4 +232,5 @@ it's current version is equal to the specified one. This behavior is the same\n for all version types with the exception of version type `FORCE` which always\n retrieves the document.\n \n-Note that Elasticsearch do not store older versions of documents. Only the current version can be retrieved.\n\\ No newline at end of file\n+Note that Elasticsearch do not store older versions of documents. Only the current version can be retrieved.\n+", "filename": "docs/reference/docs/get.asciidoc", "status": "modified" }, { "diff": "@@ -180,6 +180,14 @@ curl 'localhost:9200/_mget' -d '{\n }'\n --------------------------------------------------\n \n+[float]\n+[[generated-fields]]\n+=== Generated fields\n+\n+added[1.4.0]\n+\n+See <<generated-fields>> for fields are generated only when indexing. \n+\n [float]\n [[mget-routing]]\n === Routing", "filename": "docs/reference/docs/multi-get.asciidoc", "status": "modified" }, { "diff": "@@ -138,7 +138,7 @@ protected ExplainResponse shardOperation(ExplainRequest request, int shardId) th\n // Advantage is that we're not opening a second searcher to retrieve the _source. 
Also\n // because we are working in the same searcher in engineGetResult we can be sure that a\n // doc isn't deleted between the initial get and this call.\n- GetResult getResult = indexShard.getService().get(result, request.id(), request.type(), request.fields(), request.fetchSourceContext());\n+ GetResult getResult = indexShard.getService().get(result, request.id(), request.type(), request.fields(), request.fetchSourceContext(), false);\n return new ExplainResponse(true, explanation, getResult);\n } else {\n return new ExplainResponse(true, explanation);", "filename": "src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.get;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.ValidateActions;\n import org.elasticsearch.action.support.single.shard.SingleShardOperationRequest;\n@@ -59,6 +60,7 @@ public class GetRequest extends SingleShardOperationRequest<GetRequest> {\n \n private VersionType versionType = VersionType.INTERNAL;\n private long version = Versions.MATCH_ANY;\n+ private boolean ignoreErrorsOnGeneratedFields;\n \n GetRequest() {\n type = \"_all\";\n@@ -240,10 +242,19 @@ public GetRequest versionType(VersionType versionType) {\n return this;\n }\n \n+ public GetRequest ignoreErrorsOnGeneratedFields(boolean ignoreErrorsOnGeneratedFields) {\n+ this.ignoreErrorsOnGeneratedFields = ignoreErrorsOnGeneratedFields;\n+ return this;\n+ }\n+\n public VersionType versionType() {\n return this.versionType;\n }\n \n+ public boolean ignoreErrorsOnGeneratedFields() {\n+ return ignoreErrorsOnGeneratedFields;\n+ }\n+\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n@@ -265,6 +276,9 @@ public void readFrom(StreamInput in) throws IOException {\n } else if (realtime == 1) {\n this.realtime = true;\n }\n+ if(in.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ this.ignoreErrorsOnGeneratedFields = in.readBoolean();\n+ }\n \n this.versionType = VersionType.fromValue(in.readByte());\n this.version = Versions.readVersionWithVLongForBW(in);\n@@ -296,7 +310,9 @@ public void writeTo(StreamOutput out) throws IOException {\n } else {\n out.writeByte((byte) 1);\n }\n-\n+ if(out.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ out.writeBoolean(ignoreErrorsOnGeneratedFields);\n+ }\n out.writeByte(versionType.getValue());\n Versions.writeVersionWithVLongForBW(version, out);\n \n@@ -307,4 +323,5 @@ public void writeTo(StreamOutput out) throws IOException {\n public String toString() {\n return \"get [\" + index + \"][\" + type + \"][\" + id + \"]: routing [\" + routing + \"]\";\n }\n+\n }", "filename": "src/main/java/org/elasticsearch/action/get/GetRequest.java", "status": "modified" }, { "diff": "@@ -174,6 +174,11 @@ public GetRequestBuilder setRealtime(Boolean realtime) {\n return this;\n }\n \n+ public GetRequestBuilder setIgnoreErrorsOnGeneratedFields(Boolean ignoreErrorsOnGeneratedFields) {\n+ request.ignoreErrorsOnGeneratedFields(ignoreErrorsOnGeneratedFields);\n+ return this;\n+ }\n+\n /**\n * Sets the version, which will cause the get operation to only be performed if a matching\n * version exists and no changes happened on the doc since then.", "filename": "src/main/java/org/elasticsearch/action/get/GetRequestBuilder.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.google.common.collect.Iterators;\n import 
org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.ValidateActions;\n@@ -241,6 +242,7 @@ public int hashCode() {\n String preference;\n Boolean realtime;\n boolean refresh;\n+ public boolean ignoreErrorsOnGeneratedFields = false;\n \n List<Item> items = new ArrayList<>();\n \n@@ -309,6 +311,12 @@ public MultiGetRequest refresh(boolean refresh) {\n return this;\n }\n \n+\n+ public MultiGetRequest ignoreErrorsOnGeneratedFields(boolean ignoreErrorsOnGeneratedFields) {\n+ this.ignoreErrorsOnGeneratedFields = ignoreErrorsOnGeneratedFields;\n+ return this;\n+ }\n+\n public MultiGetRequest add(@Nullable String defaultIndex, @Nullable String defaultType, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSource, byte[] data, int from, int length) throws Exception {\n return add(defaultIndex, defaultType, defaultFields, defaultFetchSource, new BytesArray(data, from, length), true);\n }\n@@ -481,6 +489,9 @@ public void readFrom(StreamInput in) throws IOException {\n } else if (realtime == 1) {\n this.realtime = true;\n }\n+ if(in.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ ignoreErrorsOnGeneratedFields = in.readBoolean();\n+ }\n \n int size = in.readVInt();\n items = new ArrayList<>(size);\n@@ -501,6 +512,9 @@ public void writeTo(StreamOutput out) throws IOException {\n } else {\n out.writeByte((byte) 1);\n }\n+ if(out.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ out.writeBoolean(ignoreErrorsOnGeneratedFields);\n+ }\n \n out.writeVInt(items.size());\n for (Item item : items) {", "filename": "src/main/java/org/elasticsearch/action/get/MultiGetRequest.java", "status": "modified" }, { "diff": "@@ -82,6 +82,11 @@ public MultiGetRequestBuilder setRealtime(Boolean realtime) {\n return this;\n }\n \n+ public MultiGetRequestBuilder setIgnoreErrorsOnGeneratedFields(boolean ignoreErrorsOnGeneratedFields) {\n+ request.ignoreErrorsOnGeneratedFields(ignoreErrorsOnGeneratedFields);\n+ return this;\n+ }\n+\n @Override\n protected void doExecute(ActionListener<MultiGetResponse> listener) {\n client.multiGet(request, listener);", "filename": "src/main/java/org/elasticsearch/action/get/MultiGetRequestBuilder.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import com.carrotsearch.hppc.IntArrayList;\n import com.carrotsearch.hppc.LongArrayList;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.support.single.shard.SingleShardOperationRequest;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -39,6 +40,7 @@ public class MultiGetShardRequest extends SingleShardOperationRequest<MultiGetSh\n private String preference;\n Boolean realtime;\n boolean refresh;\n+ boolean ignoreErrorsOnGeneratedFields = false;\n \n IntArrayList locations;\n List<String> types;\n@@ -91,6 +93,11 @@ public MultiGetShardRequest realtime(Boolean realtime) {\n return this;\n }\n \n+ public MultiGetShardRequest ignoreErrorsOnGeneratedFields(Boolean ignoreErrorsOnGeneratedFields) {\n+ this.ignoreErrorsOnGeneratedFields = ignoreErrorsOnGeneratedFields;\n+ return this;\n+ }\n+\n public boolean refresh() {\n return this.refresh;\n }\n@@ -153,6 +160,9 @@ public void readFrom(StreamInput in) throws IOException {\n } else if (realtime == 1) {\n this.realtime = true;\n }\n+ 
if(in.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ ignoreErrorsOnGeneratedFields = in.readBoolean();\n+ }\n }\n \n @Override\n@@ -191,7 +201,13 @@ public void writeTo(StreamOutput out) throws IOException {\n } else {\n out.writeByte((byte) 1);\n }\n+ if(out.getVersion().onOrAfter(Version.V_1_4_0)) {\n+ out.writeBoolean(ignoreErrorsOnGeneratedFields);\n+ }\n \n+ }\n \n+ public boolean ignoreErrorsOnGeneratedFields() {\n+ return ignoreErrorsOnGeneratedFields;\n }\n }", "filename": "src/main/java/org/elasticsearch/action/get/MultiGetShardRequest.java", "status": "modified" }, { "diff": "@@ -103,7 +103,7 @@ protected GetResponse shardOperation(GetRequest request, int shardId) throws Ela\n }\n \n GetResult result = indexShard.getService().get(request.type(), request.id(), request.fields(),\n- request.realtime(), request.version(), request.versionType(), request.fetchSourceContext());\n+ request.realtime(), request.version(), request.versionType(), request.fetchSourceContext(), request.ignoreErrorsOnGeneratedFields());\n return new GetResponse(result);\n }\n ", "filename": "src/main/java/org/elasticsearch/action/get/TransportGetAction.java", "status": "modified" }, { "diff": "@@ -84,6 +84,7 @@ protected void doExecute(final MultiGetRequest request, final ActionListener<Mul\n shardRequest.preference(request.preference);\n shardRequest.realtime(request.realtime);\n shardRequest.refresh(request.refresh);\n+ shardRequest.ignoreErrorsOnGeneratedFields(request.ignoreErrorsOnGeneratedFields);\n \n shardRequests.put(shardId, shardRequest);\n }", "filename": "src/main/java/org/elasticsearch/action/get/TransportMultiGetAction.java", "status": "modified" }, { "diff": "@@ -121,7 +121,7 @@ protected MultiGetShardResponse shardOperation(MultiGetShardRequest request, int\n \n FetchSourceContext fetchSourceContext = request.fetchSourceContexts.get(i);\n try {\n- GetResult getResult = indexShard.getService().get(type, id, fields, request.realtime(), version, versionType, fetchSourceContext);\n+ GetResult getResult = indexShard.getService().get(type, id, fields, request.realtime(), version, versionType, fetchSourceContext, request.ignoreErrorsOnGeneratedFields());\n response.add(request.locations.get(i), new GetResponse(getResult));\n } catch (Throwable t) {\n if (TransportActions.isShardNotAvailableException(t)) {", "filename": "src/main/java/org/elasticsearch/action/get/TransportShardMultiGetAction.java", "status": "modified" }, { "diff": "@@ -84,7 +84,7 @@ public Result prepare(UpdateRequest request, IndexShard indexShard) {\n long getDate = System.currentTimeMillis();\n final GetResult getResult = indexShard.getService().get(request.type(), request.id(),\n new String[]{RoutingFieldMapper.NAME, ParentFieldMapper.NAME, TTLFieldMapper.NAME},\n- true, request.version(), request.versionType(), FetchSourceContext.FETCH_SOURCE);\n+ true, request.version(), request.versionType(), FetchSourceContext.FETCH_SOURCE, false);\n \n if (!getResult.isExists()) {\n if (request.upsertRequest() == null && !request.docAsUpsert()) {", "filename": "src/main/java/org/elasticsearch/action/update/UpdateHelper.java", "status": "modified" }, { "diff": "@@ -96,12 +96,12 @@ public ShardGetService setIndexShard(IndexShard indexShard) {\n return this;\n }\n \n- public GetResult get(String type, String id, String[] gFields, boolean realtime, long version, VersionType versionType, FetchSourceContext fetchSourceContext)\n+ public GetResult get(String type, String id, String[] gFields, boolean realtime, long version, VersionType versionType, 
FetchSourceContext fetchSourceContext, boolean ignoreErrorsOnGeneratedFields)\n throws ElasticsearchException {\n currentMetric.inc();\n try {\n long now = System.nanoTime();\n- GetResult getResult = innerGet(type, id, gFields, realtime, version, versionType, fetchSourceContext);\n+ GetResult getResult = innerGet(type, id, gFields, realtime, version, versionType, fetchSourceContext, ignoreErrorsOnGeneratedFields);\n \n if (getResult.isExists()) {\n existsMetric.inc(System.nanoTime() - now);\n@@ -121,7 +121,7 @@ public GetResult get(String type, String id, String[] gFields, boolean realtime,\n * <p/>\n * Note: Call <b>must</b> release engine searcher associated with engineGetResult!\n */\n- public GetResult get(Engine.GetResult engineGetResult, String id, String type, String[] fields, FetchSourceContext fetchSourceContext) {\n+ public GetResult get(Engine.GetResult engineGetResult, String id, String type, String[] fields, FetchSourceContext fetchSourceContext, boolean ignoreErrorsOnGeneratedFields) {\n if (!engineGetResult.exists()) {\n return new GetResult(shardId.index().name(), type, id, -1, false, null, null);\n }\n@@ -135,7 +135,7 @@ public GetResult get(Engine.GetResult engineGetResult, String id, String type, S\n return new GetResult(shardId.index().name(), type, id, -1, false, null, null);\n }\n fetchSourceContext = normalizeFetchSourceContent(fetchSourceContext, fields);\n- GetResult getResult = innerGetLoadFromStoredFields(type, id, fields, fetchSourceContext, engineGetResult, docMapper);\n+ GetResult getResult = innerGetLoadFromStoredFields(type, id, fields, fetchSourceContext, engineGetResult, docMapper, ignoreErrorsOnGeneratedFields);\n if (getResult.isExists()) {\n existsMetric.inc(System.nanoTime() - now);\n } else {\n@@ -165,7 +165,7 @@ protected FetchSourceContext normalizeFetchSourceContent(@Nullable FetchSourceCo\n return FetchSourceContext.DO_NOT_FETCH_SOURCE;\n }\n \n- public GetResult innerGet(String type, String id, String[] gFields, boolean realtime, long version, VersionType versionType, FetchSourceContext fetchSourceContext) throws ElasticsearchException {\n+ public GetResult innerGet(String type, String id, String[] gFields, boolean realtime, long version, VersionType versionType, FetchSourceContext fetchSourceContext, boolean ignoreErrorsOnGeneratedFields) throws ElasticsearchException {\n fetchSourceContext = normalizeFetchSourceContent(fetchSourceContext, gFields);\n \n boolean loadSource = (gFields != null && gFields.length > 0) || fetchSourceContext.fetchSource();\n@@ -207,7 +207,7 @@ public GetResult innerGet(String type, String id, String[] gFields, boolean real\n try {\n // break between having loaded it from translog (so we only have _source), and having a document to load\n if (get.docIdAndVersion() != null) {\n- return innerGetLoadFromStoredFields(type, id, gFields, fetchSourceContext, get, docMapper);\n+ return innerGetLoadFromStoredFields(type, id, gFields, fetchSourceContext, get, docMapper, ignoreErrorsOnGeneratedFields);\n } else {\n Translog.Source source = get.source();\n \n@@ -241,20 +241,21 @@ public GetResult innerGet(String type, String id, String[] gFields, boolean real\n searchLookup.source().setNextSource(source.source);\n }\n \n- FieldMapper<?> x = docMapper.mappers().smartNameFieldMapper(field);\n- if (x == null) {\n+ FieldMapper<?> fieldMapper = docMapper.mappers().smartNameFieldMapper(field);\n+ if (fieldMapper == null) {\n if (docMapper.objectMappers().get(field) != null) {\n // Only fail if we know it is a object field, missing paths 
/ fields shouldn't fail.\n throw new ElasticsearchIllegalArgumentException(\"field [\" + field + \"] isn't a leaf field\");\n }\n- } else if (docMapper.sourceMapper().enabled() || x.fieldType().stored()) {\n+ } else if (shouldGetFromSource(ignoreErrorsOnGeneratedFields, docMapper, fieldMapper)) {\n List<Object> values = searchLookup.source().extractRawValues(field);\n if (!values.isEmpty()) {\n for (int i = 0; i < values.size(); i++) {\n- values.set(i, x.valueForSearch(values.get(i)));\n+ values.set(i, fieldMapper.valueForSearch(values.get(i)));\n }\n value = values;\n }\n+\n }\n }\n if (value != null) {\n@@ -312,7 +313,27 @@ public GetResult innerGet(String type, String id, String[] gFields, boolean real\n }\n }\n \n- private GetResult innerGetLoadFromStoredFields(String type, String id, String[] gFields, FetchSourceContext fetchSourceContext, Engine.GetResult get, DocumentMapper docMapper) {\n+ protected boolean shouldGetFromSource(boolean ignoreErrorsOnGeneratedFields, DocumentMapper docMapper, FieldMapper<?> fieldMapper) {\n+ if (!fieldMapper.isGenerated()) {\n+ //if the field is always there we check if either source mapper is enabled, in which case we get the field\n+ // from source, or, if the field is stored, in which case we have to get if from source here also (we are in the translog phase, doc not indexed yet, we annot access the stored fields)\n+ return docMapper.sourceMapper().enabled() || fieldMapper.fieldType().stored();\n+ } else {\n+ if (!fieldMapper.fieldType().stored()) {\n+ //if it is not stored, user will not get the generated field back\n+ return false;\n+ } else {\n+ if (ignoreErrorsOnGeneratedFields) {\n+ return false;\n+ } else {\n+ throw new ElasticsearchException(\"Cannot access field \" + fieldMapper.name() + \" from transaction log. 
You can only get this field after refresh() has been called.\");\n+ }\n+ }\n+\n+ }\n+ }\n+\n+ private GetResult innerGetLoadFromStoredFields(String type, String id, String[] gFields, FetchSourceContext fetchSourceContext, Engine.GetResult get, DocumentMapper docMapper, boolean ignoreErrorsOnGeneratedFields) {\n Map<String, GetField> fields = null;\n BytesReference source = null;\n Versions.DocIdAndVersion docIdAndVersion = get.docIdAndVersion();\n@@ -335,17 +356,18 @@ private GetResult innerGetLoadFromStoredFields(String type, String id, String[]\n }\n \n // now, go and do the script thingy if needed\n+\n if (gFields != null && gFields.length > 0) {\n SearchLookup searchLookup = null;\n for (String field : gFields) {\n Object value = null;\n- FieldMappers x = docMapper.mappers().smartName(field);\n- if (x == null) {\n+ FieldMappers fieldMapper = docMapper.mappers().smartName(field);\n+ if (fieldMapper == null) {\n if (docMapper.objectMappers().get(field) != null) {\n // Only fail if we know it is a object field, missing paths / fields shouldn't fail.\n throw new ElasticsearchIllegalArgumentException(\"field [\" + field + \"] isn't a leaf field\");\n }\n- } else if (!x.mapper().fieldType().stored()) {\n+ } else if (!fieldMapper.mapper().fieldType().stored() && !fieldMapper.mapper().isGenerated()) {\n if (searchLookup == null) {\n searchLookup = new SearchLookup(mapperService, fieldDataService, new String[]{type});\n searchLookup.setNextReader(docIdAndVersion.context);\n@@ -356,7 +378,7 @@ private GetResult innerGetLoadFromStoredFields(String type, String id, String[]\n List<Object> values = searchLookup.source().extractRawValues(field);\n if (!values.isEmpty()) {\n for (int i = 0; i < values.size(); i++) {\n- values.set(i, x.mapper().valueForSearch(values.get(i)));\n+ values.set(i, fieldMapper.mapper().valueForSearch(values.get(i)));\n }\n value = values;\n }", "filename": "src/main/java/org/elasticsearch/index/get/ShardGetService.java", "status": "modified" }, { "diff": "@@ -290,4 +290,12 @@ public static Loading parse(String loading, Loading defaultValue) {\n \n Loading normsLoading(Loading defaultLoading);\n \n+ /**\n+ * Fields might not be available before indexing, for example _all, token_count,...\n+ * When get is called and these fields are requested, this case needs special treatment.\n+ *\n+ * @return If the field is available before indexing or not.\n+ * */\n+ public boolean isGenerated();\n+\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/FieldMapper.java", "status": "modified" }, { "diff": "@@ -1115,4 +1115,11 @@ public void parse(String field, ParseContext context) throws IOException {\n \n }\n \n+ /**\n+ * Returns if this field is only generated when indexing. 
For example, the field of type token_count\n+ */\n+ public boolean isGenerated() {\n+ return false;\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java", "status": "modified" }, { "diff": "@@ -108,4 +108,9 @@ protected void innerParseCreateField(ParseContext context, List<Field> fields) t\n \n }\n \n+ @Override\n+ public boolean isGenerated() {\n+ return true;\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/Murmur3FieldMapper.java", "status": "modified" }, { "diff": "@@ -197,4 +197,10 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n \n builder.field(\"analyzer\", analyzer());\n }\n+\n+ @Override\n+ public boolean isGenerated() {\n+ return true;\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/TokenCountFieldMapper.java", "status": "modified" }, { "diff": "@@ -351,4 +351,9 @@ public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappi\n public boolean hasDocValues() {\n return false;\n }\n+\n+ @Override\n+ public boolean isGenerated() {\n+ return true;\n+ }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/AllFieldMapper.java", "status": "modified" }, { "diff": "@@ -249,4 +249,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n return super.toXContent(builder, params);\n }\n+\n+ @Override\n+ public boolean isGenerated() {\n+ return true;\n+ }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/FieldNamesFieldMapper.java", "status": "modified" }, { "diff": "@@ -118,7 +118,7 @@ private Fields generateTermVectorsIfNeeded(Fields termVectorsByField, TermVector\n }\n // TODO: support for fetchSourceContext?\n GetResult getResult = indexShard.getService().get(\n- get, request.id(), request.type(), validFields.toArray(Strings.EMPTY_ARRAY), null);\n+ get, request.id(), request.type(), validFields.toArray(Strings.EMPTY_ARRAY), null, false);\n generatedTermVectors = generateTermVectors(getResult.getFields().values(), request.offsets());\n } finally {\n get.release();", "filename": "src/main/java/org/elasticsearch/index/termvectors/ShardTermVectorService.java", "status": "modified" }, { "diff": "@@ -57,6 +57,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n getRequest.parent(request.param(\"parent\"));\n getRequest.preference(request.param(\"preference\"));\n getRequest.realtime(request.paramAsBoolean(\"realtime\", null));\n+ getRequest.ignoreErrorsOnGeneratedFields(request.paramAsBoolean(\"ignore_errors_on_generated_fields\", false));\n \n String sField = request.param(\"fields\");\n if (sField != null) {", "filename": "src/main/java/org/elasticsearch/rest/action/get/RestGetAction.java", "status": "modified" }, { "diff": "@@ -57,6 +57,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel,\n multiGetRequest.refresh(request.paramAsBoolean(\"refresh\", multiGetRequest.refresh()));\n multiGetRequest.preference(request.param(\"preference\"));\n multiGetRequest.realtime(request.paramAsBoolean(\"realtime\", null));\n+ multiGetRequest.ignoreErrorsOnGeneratedFields(request.paramAsBoolean(\"ignore_errors_on_generated_fields\", false));\n \n String[] sFields = null;\n String sField = request.param(\"fields\");", "filename": "src/main/java/org/elasticsearch/rest/action/get/RestMultiGetAction.java", "status": "modified" }, { "diff": "@@ -19,16 +19,16 @@\n \n package org.elasticsearch.get;\n \n+import 
org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.ShardOperationFailedException;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.action.admin.indices.flush.FlushResponse;\n import org.elasticsearch.action.delete.DeleteResponse;\n-import org.elasticsearch.action.get.GetResponse;\n-import org.elasticsearch.action.get.MultiGetRequest;\n-import org.elasticsearch.action.get.MultiGetResponse;\n+import org.elasticsearch.action.get.*;\n import org.elasticsearch.common.Base64;\n+import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n@@ -39,10 +39,12 @@\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.Map;\n \n import static org.elasticsearch.client.Requests.clusterHealthRequest;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.*;\n \n public class GetActionTests extends ElasticsearchIntegrationTest {\n@@ -909,4 +911,445 @@ public void testGet_allField() throws Exception {\n assertNotNull(getResponse.getField(\"_all\").getValue());\n assertThat(getResponse.getField(\"_all\").getValue().toString(), equalTo(\"some text\" + \" \"));\n }\n+\n+ @Test\n+ public void testUngeneratedFieldsThatAreNeverStored() throws IOException {\n+ String createIndexSource = \"{\\n\" +\n+ \" \\\"settings\\\": {\\n\" +\n+ \" \\\"index.translog.disable_flush\\\": true,\\n\" +\n+ \" \\\"refresh_interval\\\": \\\"-1\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"mappings\\\": {\\n\" +\n+ \" \\\"doc\\\": {\\n\" +\n+ \" \\\"_source\\\": {\\n\" +\n+ \" \\\"enabled\\\": \\\"\" + randomBoolean() + \"\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"properties\\\": {\\n\" +\n+ \" \\\"suggest\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"completion\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ assertAcked(prepareCreate(\"testidx\").setSource(createIndexSource));\n+ ensureGreen();\n+ String doc = \"{\\n\" +\n+ \" \\\"suggest\\\": {\\n\" +\n+ \" \\\"input\\\": [\\n\" +\n+ \" \\\"Nevermind\\\",\\n\" +\n+ \" \\\"Nirvana\\\"\\n\" +\n+ \" ],\\n\" +\n+ \" \\\"output\\\": \\\"Nirvana - Nevermind\\\"\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ index(\"testidx\", \"doc\", \"1\", doc);\n+ String[] fieldsList = {\"suggest\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ }\n+\n+ @Test\n+ public void testUngeneratedFieldsThatAreAlwaysStored() throws IOException {\n+ String storedString = randomBoolean() ? 
\"yes\" : \"no\";\n+ String createIndexSource = \"{\\n\" +\n+ \" \\\"settings\\\": {\\n\" +\n+ \" \\\"index.translog.disable_flush\\\": true,\\n\" +\n+ \" \\\"refresh_interval\\\": \\\"-1\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"mappings\\\": {\\n\" +\n+ \" \\\"parentdoc\\\": {},\\n\" +\n+ \" \\\"doc\\\": {\\n\" +\n+ \" \\\"_source\\\": {\\n\" +\n+ \" \\\"enabled\\\": \" + randomBoolean() + \"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"_parent\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"parentdoc\\\",\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"_ttl\\\": {\\n\" +\n+ \" \\\"enabled\\\": true,\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ assertAcked(prepareCreate(\"testidx\").setSource(createIndexSource));\n+ ensureGreen();\n+ String doc = \"{\\n\" +\n+ \" \\\"_ttl\\\": \\\"1h\\\"\\n\" +\n+ \"}\";\n+\n+ client().prepareIndex(\"testidx\", \"doc\").setId(\"1\").setSource(doc).setParent(\"1\").execute().actionGet();\n+\n+ String[] fieldsList = {\"_ttl\", \"_parent\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ }\n+\n+ @Test\n+ public void testUngeneratedFieldsPartOfSourceUnstoredSourceDisabled() throws IOException {\n+ indexSingleDocumentWithUngeneratedFieldsThatArePartOf_source(false, false);\n+ String[] fieldsList = {\"my_boost\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ }\n+\n+ @Test\n+ public void testUngeneratedFieldsPartOfSourceEitherStoredOrSourceEnabled() throws IOException {\n+ boolean stored = randomBoolean();\n+ boolean sourceEnabled = true;\n+ if (stored) {\n+ sourceEnabled = randomBoolean();\n+ }\n+ indexSingleDocumentWithUngeneratedFieldsThatArePartOf_source(stored, sourceEnabled);\n+ String[] fieldsList = {\"my_boost\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList);\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList);\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList);\n+ }\n+\n+ void indexSingleDocumentWithUngeneratedFieldsThatArePartOf_source(boolean stored, boolean sourceEnabled) {\n+ String storedString = stored ? 
\"yes\" : \"no\";\n+ String createIndexSource = \"{\\n\" +\n+ \" \\\"settings\\\": {\\n\" +\n+ \" \\\"index.translog.disable_flush\\\": true,\\n\" +\n+ \" \\\"refresh_interval\\\": \\\"-1\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"mappings\\\": {\\n\" +\n+ \" \\\"doc\\\": {\\n\" +\n+ \" \\\"_source\\\": {\\n\" +\n+ \" \\\"enabled\\\": \" + sourceEnabled + \"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"_boost\\\": {\\n\" +\n+ \" \\\"name\\\": \\\"my_boost\\\",\\n\" +\n+ \" \\\"null_value\\\": 1,\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+ assertAcked(prepareCreate(\"testidx\").setSource(createIndexSource));\n+ ensureGreen();\n+ String doc = \"{\\n\" +\n+ \" \\\"my_boost\\\": 5.0,\\n\" +\n+ \" \\\"_ttl\\\": \\\"1h\\\"\\n\" +\n+ \"}\\n\";\n+\n+ client().prepareIndex(\"testidx\", \"doc\").setId(\"1\").setSource(doc).setRouting(\"1\").execute().actionGet();\n+ }\n+\n+\n+ @Test\n+ public void testUngeneratedFieldsNotPartOfSourceUnstored() throws IOException {\n+ indexSingleDocumentWithUngeneratedFieldsThatAreNeverPartOf_source(false, randomBoolean());\n+ String[] fieldsList = {\"_timestamp\", \"_size\", \"_routing\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ }\n+\n+ @Test\n+ public void testUngeneratedFieldsNotPartOfSourceStored() throws IOException {\n+ indexSingleDocumentWithUngeneratedFieldsThatAreNeverPartOf_source(true, randomBoolean());\n+ String[] fieldsList = {\"_timestamp\", \"_size\", \"_routing\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList, \"1\");\n+ }\n+\n+ void indexSingleDocumentWithUngeneratedFieldsThatAreNeverPartOf_source(boolean stored, boolean sourceEnabled) {\n+ String storedString = stored ? 
\"yes\" : \"no\";\n+ String createIndexSource = \"{\\n\" +\n+ \" \\\"settings\\\": {\\n\" +\n+ \" \\\"index.translog.disable_flush\\\": true,\\n\" +\n+ \" \\\"refresh_interval\\\": \\\"-1\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"mappings\\\": {\\n\" +\n+ \" \\\"parentdoc\\\": {},\\n\" +\n+ \" \\\"doc\\\": {\\n\" +\n+ \" \\\"_timestamp\\\": {\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\",\\n\" +\n+ \" \\\"enabled\\\": true\\n\" +\n+ \" },\\n\" +\n+ \" \\\"_routing\\\": {\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"_size\\\": {\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\",\\n\" +\n+ \" \\\"enabled\\\": true\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ assertAcked(prepareCreate(\"testidx\").setSource(createIndexSource));\n+ ensureGreen();\n+ String doc = \"{\\n\" +\n+ \" \\\"text\\\": \\\"some text.\\\"\\n\" +\n+ \"}\\n\";\n+ client().prepareIndex(\"testidx\", \"doc\").setId(\"1\").setSource(doc).setRouting(\"1\").execute().actionGet();\n+ }\n+\n+\n+ @Test\n+ public void testGeneratedStringFieldsUnstored() throws IOException {\n+ indexSingleDocumentWithStringFieldsGeneratedFromText(false, randomBoolean());\n+ String[] fieldsList = {\"_all\", \"_field_names\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ }\n+\n+ @Test\n+ public void testGeneratedStringFieldsStored() throws IOException {\n+ indexSingleDocumentWithStringFieldsGeneratedFromText(true, randomBoolean());\n+ String[] fieldsList = {\"_all\", \"_field_names\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ assertGetFieldsException(\"testidx\", \"doc\", \"1\", fieldsList);\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList);\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList);\n+ }\n+\n+ void indexSingleDocumentWithStringFieldsGeneratedFromText(boolean stored, boolean sourceEnabled) {\n+\n+ String storedString = stored ? 
\"yes\" : \"no\";\n+ String createIndexSource = \"{\\n\" +\n+ \" \\\"settings\\\": {\\n\" +\n+ \" \\\"index.translog.disable_flush\\\": true,\\n\" +\n+ \" \\\"refresh_interval\\\": \\\"-1\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"mappings\\\": {\\n\" +\n+ \" \\\"doc\\\": {\\n\" +\n+ \" \\\"_source\\\" : {\\\"enabled\\\" : \" + sourceEnabled + \"},\" +\n+ \" \\\"_all\\\" : {\\\"enabled\\\" : true, \\\"store\\\":\\\"\" + storedString + \"\\\" },\" +\n+ \" \\\"_field_names\\\" : {\\\"store\\\":\\\"\" + storedString + \"\\\" }\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ assertAcked(prepareCreate(\"testidx\").setSource(createIndexSource));\n+ ensureGreen();\n+ String doc = \"{\\n\" +\n+ \" \\\"text1\\\": \\\"some text.\\\"\\n,\" +\n+ \" \\\"text2\\\": \\\"more text.\\\"\\n\" +\n+ \"}\\n\";\n+ index(\"testidx\", \"doc\", \"1\", doc);\n+ }\n+\n+\n+ @Test\n+ public void testGeneratedNumberFieldsUnstored() throws IOException {\n+ indexSingleDocumentWithNumericFieldsGeneratedFromText(false, randomBoolean());\n+ String[] fieldsList = {\"token_count\", \"text.token_count\", \"murmur\", \"text.murmur\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ }\n+\n+ @Test\n+ public void testGeneratedNumberFieldsStored() throws IOException {\n+ indexSingleDocumentWithNumericFieldsGeneratedFromText(true, randomBoolean());\n+ String[] fieldsList = {\"token_count\", \"text.token_count\", \"murmur\", \"text.murmur\"};\n+ // before refresh - document is only in translog\n+ assertGetFieldsNull(\"testidx\", \"doc\", \"1\", fieldsList);\n+ assertGetFieldsException(\"testidx\", \"doc\", \"1\", fieldsList);\n+ refresh();\n+ //after refresh - document is in translog and also indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList);\n+ flush();\n+ //after flush - document is in not anymore translog - only indexed\n+ assertGetFieldsAlwaysWorks(\"testidx\", \"doc\", \"1\", fieldsList);\n+ }\n+\n+ void indexSingleDocumentWithNumericFieldsGeneratedFromText(boolean stored, boolean sourceEnabled) {\n+ String storedString = stored ? 
\"yes\" : \"no\";\n+ String createIndexSource = \"{\\n\" +\n+ \" \\\"settings\\\": {\\n\" +\n+ \" \\\"index.translog.disable_flush\\\": true,\\n\" +\n+ \" \\\"refresh_interval\\\": \\\"-1\\\"\\n\" +\n+ \" },\\n\" +\n+ \" \\\"mappings\\\": {\\n\" +\n+ \" \\\"doc\\\": {\\n\" +\n+ \" \\\"_source\\\" : {\\\"enabled\\\" : \" + sourceEnabled + \"},\" +\n+ \" \\\"properties\\\": {\\n\" +\n+ \" \\\"token_count\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"token_count\\\",\\n\" +\n+ \" \\\"analyzer\\\": \\\"standard\\\",\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\"\" +\n+ \" },\\n\" +\n+ \" \\\"murmur\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"murmur3\\\",\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\"\" +\n+ \" },\\n\" +\n+ \" \\\"text\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"string\\\",\\n\" +\n+ \" \\\"fields\\\": {\\n\" +\n+ \" \\\"token_count\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"token_count\\\",\\n\" +\n+ \" \\\"analyzer\\\": \\\"standard\\\",\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\"\" +\n+ \" },\\n\" +\n+ \" \\\"murmur\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"murmur3\\\",\\n\" +\n+ \" \\\"store\\\": \\\"\" + storedString + \"\\\"\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ assertAcked(prepareCreate(\"testidx\").setSource(createIndexSource));\n+ ensureGreen();\n+ String doc = \"{\\n\" +\n+ \" \\\"murmur\\\": \\\"Some value that can be hashed\\\",\\n\" +\n+ \" \\\"token_count\\\": \\\"A text with five words.\\\",\\n\" +\n+ \" \\\"text\\\": \\\"A text with five words.\\\"\\n\" +\n+ \"}\\n\";\n+ index(\"testidx\", \"doc\", \"1\", doc);\n+ }\n+\n+ private void assertGetFieldsAlwaysWorks(String index, String type, String docId, String[] fields) {\n+ assertGetFieldsAlwaysWorks(index, type, docId, fields, null);\n+ }\n+\n+ private void assertGetFieldsAlwaysWorks(String index, String type, String docId, String[] fields, @Nullable String routing) {\n+ for (String field : fields) {\n+ assertGetFieldWorks(index, type, docId, field, false, routing);\n+ assertGetFieldWorks(index, type, docId, field, true, routing);\n+ }\n+ }\n+\n+ private void assertGetFieldWorks(String index, String type, String docId, String field, boolean ignoreErrors, @Nullable String routing) {\n+ GetResponse response = getDocument(index, type, docId, field, ignoreErrors, routing);\n+ assertThat(response.getId(), equalTo(docId));\n+ assertTrue(response.isExists());\n+ assertNotNull(response.getField(field));\n+ response = multiGetDocument(index, type, docId, field, ignoreErrors, routing);\n+ assertThat(response.getId(), equalTo(docId));\n+ assertTrue(response.isExists());\n+ assertNotNull(response.getField(field));\n+ }\n+\n+ protected void assertGetFieldsException(String index, String type, String docId, String[] fields) {\n+ for (String field : fields) {\n+ assertGetFieldException(index, type, docId, field);\n+ }\n+ }\n+\n+ private void assertGetFieldException(String index, String type, String docId, String field) {\n+ try {\n+ client().prepareGet().setIndex(index).setType(type).setId(docId).setFields(field).setIgnoreErrorsOnGeneratedFields(false).get();\n+ fail();\n+ } catch (ElasticsearchException e) {\n+ assertTrue(e.getMessage().contains(\"You can only get this field after refresh() has been called.\"));\n+ }\n+ MultiGetResponse multiGetResponse = client().prepareMultiGet().add(new MultiGetRequest.Item(index, type, docId).fields(field)).setIgnoreErrorsOnGeneratedFields(false).get();\n+ 
assertNull(multiGetResponse.getResponses()[0].getResponse());\n+ assertTrue(multiGetResponse.getResponses()[0].getFailure().getMessage().contains(\"You can only get this field after refresh() has been called.\"));\n+ }\n+\n+ protected void assertGetFieldsNull(String index, String type, String docId, String[] fields) {\n+ assertGetFieldsNull(index, type, docId, fields, null);\n+ }\n+\n+ protected void assertGetFieldsNull(String index, String type, String docId, String[] fields, @Nullable String routing) {\n+ for (String field : fields) {\n+ assertGetFieldNull(index, type, docId, field, true, routing);\n+ }\n+ }\n+\n+ protected void assertGetFieldsAlwaysNull(String index, String type, String docId, String[] fields) {\n+ assertGetFieldsAlwaysNull(index, type, docId, fields, null);\n+ }\n+\n+ protected void assertGetFieldsAlwaysNull(String index, String type, String docId, String[] fields, @Nullable String routing) {\n+ for (String field : fields) {\n+ assertGetFieldNull(index, type, docId, field, true, routing);\n+ assertGetFieldNull(index, type, docId, field, false, routing);\n+ }\n+ }\n+\n+ protected void assertGetFieldNull(String index, String type, String docId, String field, boolean ignoreErrors, @Nullable String routing) {\n+ //for get\n+ GetResponse response = getDocument(index, type, docId, field, ignoreErrors, routing);\n+ assertTrue(response.isExists());\n+ assertNull(response.getField(field));\n+ assertThat(response.getId(), equalTo(docId));\n+ //same for multi get\n+ response = multiGetDocument(index, type, docId, field, ignoreErrors, routing);\n+ assertNull(response.getField(field));\n+ assertThat(response.getId(), equalTo(docId));\n+ assertTrue(response.isExists());\n+ }\n+\n+ private GetResponse multiGetDocument(String index, String type, String docId, String field, boolean ignoreErrors, @Nullable String routing) {\n+ MultiGetRequest.Item getItem = new MultiGetRequest.Item(index, type, docId).fields(field);\n+ if (routing != null) {\n+ getItem.routing(routing);\n+ }\n+ MultiGetRequestBuilder multiGetRequestBuilder = client().prepareMultiGet().add(getItem).setIgnoreErrorsOnGeneratedFields(ignoreErrors);\n+ MultiGetResponse multiGetResponse = multiGetRequestBuilder.get();\n+ assertThat(multiGetResponse.getResponses().length, equalTo(1));\n+ return multiGetResponse.getResponses()[0].getResponse();\n+ }\n+\n+ private GetResponse getDocument(String index, String type, String docId, String field, boolean ignoreErrors, @Nullable String routing) {\n+ GetRequestBuilder getRequestBuilder = client().prepareGet().setIndex(index).setType(type).setId(docId).setFields(field).setIgnoreErrorsOnGeneratedFields(ignoreErrors);\n+ if (routing != null) {\n+ getRequestBuilder.setRouting(routing);\n+ }\n+ return getRequestBuilder.get();\n+ }\n }", "filename": "src/test/java/org/elasticsearch/get/GetActionTests.java", "status": "modified" }, { "diff": "@@ -1040,6 +1040,19 @@ protected final IndexResponse index(String index, String type, String id, Object\n return client().prepareIndex(index, type, id).setSource(source).execute().actionGet();\n }\n \n+ /**\n+ * Syntactic sugar for:\n+ *\n+ * <pre>\n+ * return client().prepareIndex(index, type, id).setSource(source).execute().actionGet();\n+ * </pre>\n+ *\n+ * where source is a String.\n+ */\n+ protected final IndexResponse index(String index, String type, String id, String source) {\n+ return client().prepareIndex(index, type, id).setSource(source).execute().actionGet();\n+ }\n+\n /**\n * Waits for relocations and refreshes all indices in the cluster.\n 
*", "filename": "src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java", "status": "modified" } ] }
{ "body": "Unaligned memory access is illegal on several architectures (such as SPARC, see https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/elasticsearch/Nh-kXI5J6Ek/WXIZKhhGVHkJ for context).\n", "comments": [ { "body": "oh boy\n", "created_at": "2014-07-22T15:47:28Z" }, { "body": "I added the labels for `1.3` and `1.2.3` this is basically making us not portable anymore and we should fix that! you should also remove the `pom.xml` entry that makes the unsafe utils and exception for forbidden APIs\n", "created_at": "2014-07-23T07:22:26Z" }, { "body": "ES 1.3.4 still has issues, but this time in lzf compression codec. See file hs_err_pid26623.log at\nhttps://gist.github.com/jprante/3c9ca8a85da13bd65226\n", "created_at": "2014-10-20T15:08:10Z" }, { "body": "@jprante The fix to remove unsafe usage in lzf compression was backported to 1.3.5 in #8078\n", "created_at": "2014-10-20T16:36:27Z" } ], "number": 6962, "title": "Internal: Remove unsafe unaligned memory access - illegal on SPARC" }
{ "body": "This class potentially does unaligned memory access and does not bring much\nnow that we switched to global ords for terms aggregations.\n\nClose #6962\n", "number": 6963, "review_comments": [], "title": "Drop UnsafeUtils" }
{ "commits": [ { "message": "Core: Drop UnsafeUtils.\n\nThis class potentially does unaligned memory access and does not bring much\nnow that we switched to global ords for terms aggregations.\n\nClose #6962" }, { "message": "Use BytesRef.bytesEquals." } ], "files": [ { "diff": "@@ -20,7 +20,6 @@\n \n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.util.UnsafeUtils;\n import org.jboss.netty.buffer.ChannelBuffer;\n \n import java.io.IOException;\n@@ -42,16 +41,11 @@ public static boolean bytesEqual(BytesReference a, BytesReference b) {\n return false;\n }\n \n- if (a.hasArray() && b.hasArray()) {\n- // court-circuit to compare several bytes at once\n- return UnsafeUtils.equals(a.array(), a.arrayOffset(), b.array(), b.arrayOffset(), a.length());\n- } else {\n- return slowBytesEquals(a, b);\n- }\n+ return bytesEquals(a, b);\n }\n \n // pkg-private for testing\n- static boolean slowBytesEquals(BytesReference a, BytesReference b) {\n+ static boolean bytesEquals(BytesReference a, BytesReference b) {\n assert a.length() == b.length();\n for (int i = 0, end = a.length(); i < end; ++i) {\n if (a.get(i) != b.get(i)) {", "filename": "src/main/java/org/elasticsearch/common/bytes/BytesReference.java", "status": "modified" }, { "diff": "@@ -19,7 +19,7 @@\n \n package org.elasticsearch.common.hash;\n \n-import org.elasticsearch.common.util.UnsafeUtils;\n+import org.elasticsearch.common.util.ByteUtils;\n \n \n /**\n@@ -41,7 +41,7 @@ public static class Hash128 {\n protected static long getblock(byte[] key, int offset, int index) {\n int i_8 = index << 3;\n int blockOffset = offset + i_8;\n- return UnsafeUtils.readLongLE(key, blockOffset);\n+ return ByteUtils.readLongLE(key, blockOffset);\n }\n \n protected static long fmix(long k) {\n@@ -68,8 +68,8 @@ public static Hash128 hash128(byte[] key, int offset, int length, long seed, Has\n final int len16 = length & 0xFFFFFFF0; // higher multiple of 16 that is lower than or equal to length\n final int end = offset + len16;\n for (int i = offset; i < end; i += 16) {\n- long k1 = UnsafeUtils.readLongLE(key, i);\n- long k2 = UnsafeUtils.readLongLE(key, i + 8);\n+ long k1 = ByteUtils.readLongLE(key, i);\n+ long k2 = ByteUtils.readLongLE(key, i + 8);\n \n k1 *= C1;\n k1 = Long.rotateLeft(k1, 31);", "filename": "src/main/java/org/elasticsearch/common/hash/MurmurHash3.java", "status": "modified" }, { "diff": "@@ -77,7 +77,7 @@ public long find(BytesRef key, int code) {\n final long slot = slot(rehash(code), mask);\n for (long index = slot; ; index = nextSlot(index, mask)) {\n final long id = id(index);\n- if (id == -1L || UnsafeUtils.equals(key, get(id, spare))) {\n+ if (id == -1L || key.bytesEquals(get(id, spare))) {\n return id;\n }\n }\n@@ -99,7 +99,7 @@ private long set(BytesRef key, int code, long id) {\n append(id, key, code);\n ++size;\n return id;\n- } else if (UnsafeUtils.equals(key, get(curId, spare))) {\n+ } else if (key.bytesEquals(get(curId, spare))) {\n return -1 - curId;\n }\n }", "filename": "src/main/java/org/elasticsearch/common/util/BytesRefHash.java", "status": "modified" }, { "diff": "@@ -31,8 +31,8 @@\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.ByteArray;\n+import org.elasticsearch.common.util.ByteUtils;\n import org.elasticsearch.common.util.IntArray;\n-import org.elasticsearch.common.util.UnsafeUtils;\n \n import java.io.IOException;\n import java.nio.ByteBuffer;\n@@ 
-438,7 +438,7 @@ private long index (long bucket, int index) {\n \n private int get(long bucket, int index) {\n runLens.get(index(bucket, index), 4, readSpare);\n- return UnsafeUtils.readIntLE(readSpare.bytes, readSpare.offset);\n+ return ByteUtils.readIntLE(readSpare.bytes, readSpare.offset);\n }\n \n private void set(long bucket, int index, int value) {", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/HyperLogLogPlusPlus.java", "status": "modified" }, { "diff": "@@ -37,13 +37,13 @@ public void testEquals() {\n final BytesArray b1 = new BytesArray(array1, offset1, len);\n final BytesArray b2 = new BytesArray(array2, offset2, len);\n assertTrue(BytesReference.Helper.bytesEqual(b1, b2));\n- assertTrue(BytesReference.Helper.slowBytesEquals(b1, b2));\n+ assertTrue(BytesReference.Helper.bytesEquals(b1, b2));\n assertEquals(Arrays.hashCode(b1.toBytes()), b1.hashCode());\n assertEquals(BytesReference.Helper.bytesHashCode(b1), BytesReference.Helper.slowHashCode(b2));\n \n // test same instance\n assertTrue(BytesReference.Helper.bytesEqual(b1, b1));\n- assertTrue(BytesReference.Helper.slowBytesEquals(b1, b1));\n+ assertTrue(BytesReference.Helper.bytesEquals(b1, b1));\n assertEquals(BytesReference.Helper.bytesHashCode(b1), BytesReference.Helper.slowHashCode(b1));\n \n if (len > 0) {\n@@ -54,7 +54,7 @@ public void testEquals() {\n // test changed bytes\n array1[offset1 + randomInt(len - 1)] += 13;\n assertFalse(BytesReference.Helper.bytesEqual(b1, b2));\n- assertFalse(BytesReference.Helper.slowBytesEquals(b1, b2));\n+ assertFalse(BytesReference.Helper.bytesEquals(b1, b2));\n }\n }\n ", "filename": "src/test/java/org/elasticsearch/common/bytes/BytesReferenceTests.java", "status": "modified" } ] }
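For readers skimming the diff above, here is a minimal standalone sketch of the byte-by-byte slice comparison that replaces the unsafe, possibly unaligned read path. The class and method names below are illustrative only; the loop mirrors the renamed `bytesEquals` helper, but this is not the actual Elasticsearch class.

```java
import java.util.Arrays;

/** Simplified stand-in for the slice comparison done in BytesReference.Helper. */
final class ByteSliceEquals {

    /** Compares two equal-length slices one byte at a time, as the reworked helper does. */
    static boolean bytesEquals(byte[] a, int aOffset, byte[] b, int bOffset, int length) {
        for (int i = 0; i < length; i++) {
            if (a[aOffset + i] != b[bOffset + i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] left = {1, 2, 3, 4};
        byte[] right = {9, 1, 2, 3, 4};
        // compares {1,2,3,4} against the last four bytes of `right`
        System.out.println(bytesEquals(left, 0, right, 1, 4));           // true
        System.out.println(Arrays.equals(left, new byte[]{1, 2, 3, 5})); // false, for contrast
    }
}
```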
{ "body": "This fails with `Unexpected token VALUE_STRING in aggregation [histo]` because \"50\" is passed as a string:\n\n```\ncurl -XGET 'http://localhost:9200/_search?pretty=1' -d '\n{\n \"aggs\" : {\n \"histo\" : {\n \"histogram\" : {\n \"interval\" : \"50\",\n \"field\" : \"number\"\n }\n }\n }\n}\n'\n```\n\nPerl won't guarantee that a number is a number and not a string.\n", "comments": [ { "body": "This is probably an issue for a lot of number fields in the aggregations. All will need to be checked for this\n", "created_at": "2014-07-16T15:16:44Z" }, { "body": "I guess we should just check for:\n\n```\n } else if (token == XContentParser.Token.VALUE_NUMBER || token == XContentParser.Token.VALUE_STRING) {\n\n```\n", "created_at": "2014-07-17T10:01:54Z" }, { "body": "yea, or just check `if token.isValue` and try and get the relevant type, the parsers will automatically convert strings to number if you ask for a long value\n", "created_at": "2014-07-21T18:50:59Z" } ], "number": 6893, "title": "Aggregations: histo \"interval\" should allow coercion from string" }
{ "body": "closes #6893\n", "number": 6948, "review_comments": [ { "body": "To coerce, should be:\n\n```\nparser.longValue(true);\n```\n", "created_at": "2014-07-21T19:25:58Z" }, { "body": "same here:\n\n```\n parser.longValue(true);\n```\n", "created_at": "2014-07-21T19:26:41Z" } ], "title": "More lenient type parsing in histo/cardinality aggs" }
{ "commits": [ { "message": "More lenient type parsing in histo/cardinality aggs\ncloses #6948\ncloses #6893" } ], "files": [ { "diff": "@@ -65,16 +65,12 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n currentFieldName = parser.currentName();\n } else if (vsParser.token(currentFieldName, token, parser)) {\n continue;\n- } else if (token == XContentParser.Token.VALUE_NUMBER) {\n+ } else if (token.isValue()) {\n if (\"interval\".equals(currentFieldName)) {\n interval = parser.longValue();\n } else if (\"min_doc_count\".equals(currentFieldName) || \"minDocCount\".equals(currentFieldName)) {\n minDocCount = parser.longValue();\n- } else {\n- throw new SearchParseException(context, \"Unknown key for a \" + token + \" in aggregation [\" + aggregationName + \"]: [\" + currentFieldName + \"].\");\n- }\n- } else if (token == XContentParser.Token.VALUE_BOOLEAN) {\n- if (\"keyed\".equals(currentFieldName)) {\n+ } else if (\"keyed\".equals(currentFieldName)) {\n keyed = parser.booleanValue();\n } else {\n throw new SearchParseException(context, \"Unknown key for a \" + token + \" in aggregation [\" + aggregationName + \"]: [\" + currentFieldName + \"].\");", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/HistogramParser.java", "status": "modified" }, { "diff": "@@ -56,14 +56,10 @@ public AggregatorFactory parse(String name, XContentParser parser, SearchContext\n currentFieldName = parser.currentName();\n } else if (vsParser.token(currentFieldName, token, parser)) {\n continue;\n- } else if (token == XContentParser.Token.VALUE_BOOLEAN) {\n+ } else if (token.isValue()) {\n if (\"rehash\".equals(currentFieldName)) {\n rehash = parser.booleanValue();\n- } else {\n- throw new SearchParseException(context, \"Unknown key for a \" + token + \" in [\" + name + \"]: [\" + currentFieldName + \"].\");\n- }\n- } else if (token == XContentParser.Token.VALUE_NUMBER) {\n- if (PRECISION_THRESHOLD.match(currentFieldName)) {\n+ } else if (PRECISION_THRESHOLD.match(currentFieldName)) {\n precisionThreshold = parser.longValue();\n } else {\n throw new SearchParseException(context, \"Unknown key for a \" + token + \" in [\" + name + \"]: [\" + currentFieldName + \"].\");", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/cardinality/CardinalityParser.java", "status": "modified" } ] }
{ "body": "When a field is null, the completion mapping parser throws an error but it refers to the next field in the document, not the field with the completion type.\n\nAnother user had the same issue and reported it on your google group, but I don't see it here in the bug tracker. Here's the link he provided to reproduce the issue: \n\nhttps://gist.github.com/glade-at-gigwell/6408e0e4b69ddf2e8856\n\nAnd the stack trace:\n\n```\n {\"acknowledged\":true}[2014-05-18 13:40:24,150][INFO ][cluster.metadata ] [Aelfyre Whitemane] [completion_type_cant_handle_the_null_truth] creating index, cause [api], shards [1]/[0], mappings []\n {\"acknowledged\":true}[2014-05-18 13:40:24,224][INFO ][cluster.metadata ] [Aelfyre Whitemane] [completion_type_cant_handle_the_null_truth] create_mapping [object]\n {\"acknowledged\":true}[2014-05-18 13:40:24,245][INFO ][cluster.metadata ] [Aelfyre Whitemane] [completion_type_cant_handle_the_null_truth] update_mapping [object] (dynamic)\n {\"_index\":\"completion_type_cant_handle_the_null_truth\",\"_type\":\"object\",\"_id\":\"1\",\"_version\":1,\"created\":true}[2014-05-18 13:40:24,265][DEBUG][action.index ] [Aelfyre Whitemane] [completion_type_cant_handle_the_null_truth][0], node[k4lbsgzYSlWzynQkVGqMaw], [P], s[STARTED]: Failed to execute [index {[completion_type_cant_handle_the_null_truth][object][2], source[{\"field1\" : null,\"field2\" : \"nulls make me sad\"}]}]\n org.elasticsearch.index.mapper.MapperParsingException: failed to parse\nat org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:540)\nat org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:462)\nat org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:384)\nat org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:203)\nat org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)\nat org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\nat java.lang.Thread.run(Thread.java:744)\n Caused by: org.elasticsearch.ElasticsearchIllegalArgumentException: Unknown field name[field2], must be one of [payload, input, weight, output]\nat org.elasticsearch.index.mapper.core.CompletionFieldMapper.parse(CompletionFieldMapper.java:237)\nat org.elasticsearch.index.mapper.object.ObjectMapper.serializeNullValue(ObjectMapper.java:505)\nat org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:465)\nat org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:515)\n... 8 more\n```\n", "comments": [], "number": 6399, "title": "Completion mapping type throws a misleading error on null value" }
{ "body": "When the mapper service gets a null value for a field, it tries to parse it with the mapper of any previously seen field, if that mapper happens to be CompletionMapper, it throws an error, as it only accepts certain fields.\n\nCloses #6399\n", "number": 6926, "review_comments": [ { "body": "Should we do instance checks here or have something similar to `AbstractFieldMapper.isSortable()` to decide if a mapper supports a certain feature, is `isSupportingNullValue`?\n", "created_at": "2014-07-21T10:46:05Z" }, { "body": "@spinscale Thanks for the suggestion, I think `isSupportingNullValue` is a cleaner way to do this. Changed as suggested.\n", "created_at": "2014-07-21T20:04:05Z" }, { "body": "another general question here. Does it make more sense to not silently ignore this but throw an exception (unless specified differently in the mapping)?\n", "created_at": "2014-07-22T06:55:38Z" }, { "body": "yeah wonder about that too?\n", "created_at": "2014-07-22T14:36:49Z" }, { "body": "or shorter: `supportsNullValue`? (though I might be completely wrong as my English is what it is...)\n", "created_at": "2014-08-01T13:52:21Z" } ], "title": "Completion mapping type throws a misleading error on null value" }
{ "commits": [ { "message": "[Fix] CompletionMapper throws misleading error on null value" } ], "files": [ { "diff": "@@ -286,6 +286,8 @@ public static Loading parse(String loading, Loading defaultValue) {\n \n boolean isSortable();\n \n+ boolean supportsNullValue();\n+\n boolean hasDocValues();\n \n Loading normsLoading(Loading defaultLoading);", "filename": "src/main/java/org/elasticsearch/index/mapper/FieldMapper.java", "status": "modified" }, { "diff": "@@ -842,6 +842,11 @@ public boolean isSortable() {\n return true;\n }\n \n+ @Override\n+ public boolean supportsNullValue() {\n+ return true;\n+ }\n+\n public boolean hasDocValues() {\n return docValues;\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java", "status": "modified" }, { "diff": "@@ -451,6 +451,11 @@ public boolean hasDocValues() {\n return false;\n }\n \n+ @Override\n+ public boolean supportsNullValue() {\n+ return false;\n+ }\n+\n @Override\n public FieldType defaultFieldType() {\n return Defaults.FIELD_TYPE;", "filename": "src/main/java/org/elasticsearch/index/mapper/core/CompletionFieldMapper.java", "status": "modified" }, { "diff": "@@ -532,6 +532,11 @@ private void serializeNullValue(ParseContext context, String lastFieldName) thro\n // we can only handle null values if we have mappings for them\n Mapper mapper = mappers.get(lastFieldName);\n if (mapper != null) {\n+ if (mapper instanceof FieldMapper) {\n+ if (!((FieldMapper) mapper).supportsNullValue()) {\n+ throw new MapperParsingException(\"no object mapping found for null value in [\" + lastFieldName + \"]\");\n+ }\n+ }\n mapper.parse(context);\n }\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/object/ObjectMapper.java", "status": "modified" }, { "diff": "@@ -1039,6 +1039,38 @@ public void testIssue5930() throws IOException {\n }\n }\n \n+ // see issue #6399\n+ @Test\n+ public void testIndexingUnrelatedNullValue() throws Exception {\n+ String mapping = jsonBuilder()\n+ .startObject()\n+ .startObject(TYPE)\n+ .startObject(\"properties\")\n+ .startObject(FIELD)\n+ .field(\"type\", \"completion\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .string();\n+\n+ assertAcked(client().admin().indices().prepareCreate(INDEX).addMapping(TYPE, mapping).get());\n+ ensureGreen();\n+\n+ client().prepareIndex(INDEX, TYPE, \"1\").setSource(FIELD, \"strings make me happy\", FIELD + \"_1\", \"nulls make me sad\")\n+ .setRefresh(true).get();\n+\n+ try {\n+ client().prepareIndex(INDEX, TYPE, \"2\").setSource(FIELD, null, FIELD + \"_1\", \"nulls make me sad\")\n+ .setRefresh(true).get();\n+ fail(\"Expected MapperParsingException for null value\");\n+ } catch (MapperParsingException e) {\n+ // make sure that the exception has the name of the field causing the error\n+ assertTrue(e.getDetailedMessage().contains(FIELD));\n+ }\n+\n+ }\n+\n private static String replaceReservedChars(String input, char replacement) {\n char[] charArray = input.toCharArray();\n for (int i = 0; i < charArray.length; i++) {", "filename": "src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchTests.java", "status": "modified" } ] }
{ "body": "Today when we start a `TransportClient` we use the given transport addresses and create a `DiscoveryNode` from it without knowing the actual nodes version. We just use the `Version.CURRENT` which is an upper bound. Yet, the other node might be a version less than the currently running and serialisation of the nodes info might break. We should rather use a lower bound here which is the version of the first release with the same major version as `Version.CURRENT` since this is what we officially support. \n\nWe changed the format of the `NodesInfo` serialisation today and BWC tests broken on that. Yet we found a away to work around changing it but in the future we should be able to change transport protocol even if it's `NodesInfo`\n\nNote: this is not a problem until today but in the future this might prevent us from enhancing the protocol here.\n", "comments": [ { "body": "FYI, @javanna did some work around this one\n", "created_at": "2014-07-16T22:02:08Z" } ], "number": 6894, "title": "Unknown node version should be a lower bound" }
{ "body": "Today when we start a `TransportClient` we use the given transport\naddresses and create a `DiscoveryNode` from it without knowing the\nactual nodes version. We just use the `Version.CURRENT` which is an\nupper bound. Yet, the other node might be a version less than the\ncurrently running and serialisation of the nodes info might break. We\nshould rather use a lower bound here which is the version of the first\nrelease with the same major version as `Version.CURRENT` since this is\nwhat we officially support.\n\nThis commit moves to use the minimum major version or an RC / Snapshot\nif the current version is a snapshot.\n\nCloses #6894\n", "number": 6905, "review_comments": [ { "body": "typo! missing i ;)\n", "created_at": "2014-07-17T14:30:55Z" }, { "body": "I think I'd prefer to check `listedNodes()` instead of `filteredNodes()`, which wouldn't be empty if the transport client used the right cluster name (`foobar`). Otherwise it feels like we are testing two things at the same time (filtering out the nodes and updating the version)... \n", "created_at": "2014-07-17T21:37:06Z" }, { "body": "good idea!\n", "created_at": "2014-07-17T21:37:25Z" }, { "body": "can we remove this additional line (unless there was a specific reason why you added it)? ;)\n", "created_at": "2014-07-17T21:37:57Z" }, { "body": "I am already checking the connected nodes I don't get your comment.. I am testing if we update the version once we are connected... I can also add listening nodes but I do that on purpose with the wrong cluster name... I think you maybe missed the point of the test?\n", "created_at": "2014-07-18T07:36:35Z" }, { "body": "sure\n", "created_at": "2014-07-18T07:36:51Z" }, { "body": "thx\n", "created_at": "2014-07-18T07:36:58Z" }, { "body": "Let me try and explain you what I meant with my previous comment: we have `listedNodes` that contains the nodes added through `addTransportAdress`. After the sample phase we add the reachable ones to the connected nodes, or to the filtered nodes if they belong to a different cluster. In this test the transport client doesn't have the proper cluster name compared to the node you start, which is why the node ends up within the filtered ones. No huge deal, but it feels like we are also testing nodes filtering in this test, cause we rely on it.\n", "created_at": "2014-07-18T08:04:08Z" }, { "body": "Not sure why we get this `if`, I might be missing something but I thought we never update the listed nodes, thus they are always going to be the original discovery nodes with minimum comp version.\n", "created_at": "2014-07-18T08:09:25Z" }, { "body": "that is exactly what I wanted I want to test that we are actually setting the min version when we don't have connected to the node. I test that 1. we update it via connected & 2. if it's not connected we keep the min version I still don't get your comment :)\n", "created_at": "2014-07-18T08:22:28Z" }, { "body": "I see. I think I don't get my comment either at this point, it all makes sense!!! ;)\n", "created_at": "2014-07-18T08:40:54Z" } ], "title": "[CLIENT] Unknown node version should be a lower bound" }
{ "commits": [ { "message": "[CLIENT] Unknown node version should be a lower bound\n\nToday when we start a `TransportClient` we use the given transport\naddresses and create a `DiscoveryNode` from it without knowing the\nactual nodes version. We just use the `Version.CURRENT` which is an\nupper bound. Yet, the other node might be a version less than the\ncurrently running and serialisation of the nodes info might break. We\nshould rather use a lower bound here which is the version of the first\nrelease with the same major version as `Version.CURRENT` since this is\nwhat we officially support.\n\nThis commit moves to use the minimum major version or an RC / Snapshot\nif the current version is a snapshot.\n\nCloses #6894" } ], "files": [ { "diff": "@@ -453,6 +453,17 @@ public boolean onOrBefore(Version version) {\n return version.id >= id;\n }\n \n+ /**\n+ * Returns the minimum compatible version based on the current\n+ * version. Ie a node needs to have at least the return version in order\n+ * to communicate with a node running the current version. The returned version\n+ * is in most of the cases the smallest major version release unless the current version\n+ * is a beta or RC release then the version itself is returned.\n+ */\n+ public Version minimumCompatibilityVersion() {\n+ return Version.smallest(this, fromId(major * 1000000 + 99));\n+ }\n+\n /**\n * Just the version number (without -SNAPSHOT if snapshot).\n */", "filename": "src/main/java/org/elasticsearch/Version.java", "status": "modified" }, { "diff": "@@ -196,6 +196,10 @@ public TransportClient(Settings pSettings, boolean loadConfigSettings) throws El\n internalClient = injector.getInstance(InternalTransportClient.class);\n }\n \n+ TransportClientNodesService nodeService() {\n+ return nodesService;\n+ }\n+\n /**\n * Returns the current registered transport addresses to use (added using\n * {@link #addTransportAddress(org.elasticsearch.common.transport.TransportAddress)}.", "filename": "src/main/java/org/elasticsearch/client/transport/TransportClient.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ public class TransportClientNodesService extends AbstractComponent {\n \n private final ThreadPool threadPool;\n \n- private final Version version;\n+ private final Version minCompatibilityVersion;\n \n // nodes that are added to be discovered\n private volatile ImmutableList<DiscoveryNode> listedNodes = ImmutableList.of();\n@@ -95,7 +95,7 @@ public TransportClientNodesService(Settings settings, ClusterName clusterName, T\n this.clusterName = clusterName;\n this.transportService = transportService;\n this.threadPool = threadPool;\n- this.version = version;\n+ this.minCompatibilityVersion = version.minimumCompatibilityVersion();\n \n this.nodesSamplerInterval = componentSettings.getAsTime(\"nodes_sampler_interval\", timeValueSeconds(5));\n this.pingTimeout = componentSettings.getAsTime(\"ping_timeout\", timeValueSeconds(5)).millis();\n@@ -161,7 +161,7 @@ public TransportClientNodesService addTransportAddresses(TransportAddress... 
tra\n ImmutableList.Builder<DiscoveryNode> builder = ImmutableList.builder();\n builder.addAll(listedNodes());\n for (TransportAddress transportAddress : filtered) {\n- DiscoveryNode node = new DiscoveryNode(\"#transport#-\" + tempNodeIdGenerator.incrementAndGet(), transportAddress, version);\n+ DiscoveryNode node = new DiscoveryNode(\"#transport#-\" + tempNodeIdGenerator.incrementAndGet(), transportAddress, minCompatibilityVersion);\n logger.debug(\"adding address [{}]\", node);\n builder.add(node);\n }", "filename": "src/main/java/org/elasticsearch/client/transport/TransportClientNodesService.java", "status": "modified" }, { "diff": "@@ -104,4 +104,14 @@ public void testVersion() {\n final Version version = randomFrom(Version.V_0_18_0, Version.V_0_90_13, Version.V_1_3_0);\n assertEquals(version, Version.indexCreated(ImmutableSettings.builder().put(IndexMetaData.SETTING_UUID, \"foo\").put(IndexMetaData.SETTING_VERSION_CREATED, version).build()));\n }\n+\n+ @Test\n+ public void testMinCompatVersion() {\n+ assertThat(Version.V_2_0_0.minimumCompatibilityVersion(), equalTo(Version.V_2_0_0));\n+ assertThat(Version.V_1_3_0.minimumCompatibilityVersion(), equalTo(Version.V_1_0_0));\n+ assertThat(Version.V_1_2_0.minimumCompatibilityVersion(), equalTo(Version.V_1_0_0));\n+ assertThat(Version.V_1_2_3.minimumCompatibilityVersion(), equalTo(Version.V_1_0_0));\n+ assertThat(Version.V_1_0_0_RC2.minimumCompatibilityVersion(), equalTo(Version.V_1_0_0_RC2));\n+ }\n+\n }\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/VersionTests.java", "status": "modified" }, { "diff": "@@ -19,13 +19,22 @@\n \n package org.elasticsearch.client.transport;\n \n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.transport.TransportAddress;\n+import org.elasticsearch.node.Node;\n+import org.elasticsearch.node.NodeBuilder;\n+import org.elasticsearch.node.internal.InternalNode;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n-import org.hamcrest.Matchers;\n+import org.elasticsearch.transport.TransportService;\n import org.junit.Test;\n \n-import static org.elasticsearch.test.ElasticsearchIntegrationTest.*;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThanOrEqualTo;\n+import static org.hamcrest.Matchers.startsWith;\n \n @ClusterScope(scope = Scope.TEST, numDataNodes = 0, transportClientRatio = 1.0)\n public class TransportClientTests extends ElasticsearchIntegrationTest {\n@@ -35,7 +44,41 @@ public void testPickingUpChangesInDiscoveryNode() {\n String nodeName = internalCluster().startNode(ImmutableSettings.builder().put(\"node.data\", false));\n \n TransportClient client = (TransportClient) internalCluster().client(nodeName);\n- assertThat(client.connectedNodes().get(0).dataNode(), Matchers.equalTo(false));\n+ assertThat(client.connectedNodes().get(0).dataNode(), equalTo(false));\n \n }\n+\n+ @Test\n+ public void testNodeVersionIsUpdated() {\n+ TransportClient client = (TransportClient) internalCluster().client();\n+ TransportClientNodesService nodeService = client.nodeService();\n+ Node node = NodeBuilder.nodeBuilder().data(false).settings(ImmutableSettings.builder()\n+ .put(internalCluster().getDefaultSettings())\n+ .put(\"http.enabled\", false)\n+ 
.put(\"index.store.type\", \"ram\")\n+ .put(\"config.ignore_system_properties\", true) // make sure we get what we set :)\n+ .put(\"gateway.type\", \"none\")\n+ .build()).clusterName(\"foobar\").build();\n+ node.start();\n+ try {\n+ TransportAddress transportAddress = ((InternalNode) node).injector().getInstance(TransportService.class).boundAddress().publishAddress();\n+ client.addTransportAddress(transportAddress);\n+ assertThat(nodeService.connectedNodes().size(), greaterThanOrEqualTo(1)); // since we force transport clients there has to be one node started that we connect to.\n+ for (DiscoveryNode discoveryNode : nodeService.connectedNodes()) { // connected nodes have updated version\n+ assertThat(discoveryNode.getVersion(), equalTo(Version.CURRENT));\n+ }\n+\n+ for (DiscoveryNode discoveryNode : nodeService.listedNodes()) {\n+ assertThat(discoveryNode.id(), startsWith(\"#transport#-\"));\n+ assertThat(discoveryNode.getVersion(), equalTo(Version.CURRENT.minimumCompatibilityVersion()));\n+ }\n+\n+ assertThat(nodeService.filteredNodes().size(), equalTo(1));\n+ for (DiscoveryNode discoveryNode : nodeService.filteredNodes()) {\n+ assertThat(discoveryNode.getVersion(), equalTo(Version.CURRENT.minimumCompatibilityVersion()));\n+ }\n+ } finally {\n+ node.stop();\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/client/transport/TransportClientTests.java", "status": "modified" } ] }
{ "body": "If the user sets a high refresh interval, the versionMap can use\nunbounded RAM. I fixed LiveVersionMap to track its RAM used, and\ntrigger refresh if it's > 25% of IW's RAM buffer. (We could add\nanother setting for this but we have so many settings already?).\n\nI also fixed deletes to prune every index.gc_deletes/4 msec, and I\nonly save a delete tombstone if index.gc_deletes > 0.\n\nI think we could expose the RAM used by versionMap somewhere\n(Marvel? _cat?), but we can do that separately ... I put a TODO.\n\nCloses #6378\n", "comments": [ { "body": "OK I folded in all the feedback here (thank you!), and added two new\ntests.\n\nI reworked how deletes are handled, so that they are now included in\nthe versionMap.current/old but also added to a separate tombstones\nmap so that we can prune that map separately from using refresh to\nfree up RAM. I think the logic is simpler now.\n", "created_at": "2014-06-10T16:58:47Z" }, { "body": "+1 to expose the RAM usage via an API. Can you please open an issue to do that? We might think further here and see how much RAM IW is using per shard as well, the DWFlushControl expose this to the FlushPolicy already so we might want to expose that via the IW API?\n", "created_at": "2014-06-12T12:30:49Z" }, { "body": "I opened #6483 to expose the RAM usage via ShardStats and indices cat API...\n", "created_at": "2014-06-12T13:34:57Z" }, { "body": "I made another round - it looks good! \n\nI think we should also add a test to validate behave when enableGcDeletes is false - maybe have index.gc_deletes set to 0ms and disable enableGcDeletes and see delete stick around..\n", "created_at": "2014-06-18T12:34:25Z" }, { "body": "OK I folded in all feedback I think!\n\nHowever I still have concerns about how we calculate the RAM usage ... I put two nocommits about it, but these are relatively minor issues and shouldn't hold up committing this if we can't think of a simple way to address them.\n\nAlso, I want to do some performance testing here in the updates case: net/net this will put somewhat more pressure on the bloom filters / terms dict for the PK lookups since the version map acts like a cache now since the last flush, whereas with this change it only has updates since the last refresh. Maybe not a big deal in practice ... only for updates to very recently indexed docs.\n", "created_at": "2014-06-19T13:40:09Z" }, { "body": "OK performance looks fine; I ran a test doing 75% adds and 25% updates (though, not biased for recency) using random UUID and there was no clear change...\n", "created_at": "2014-06-19T13:58:59Z" }, { "body": "OK I resolved the two nocommits about ramBytesUsed; I think this is ready now.\n", "created_at": "2014-06-20T16:20:01Z" }, { "body": "OK I pushed another iteration, trying to improve the RAM accounting using an idea from @bleskes to shift accounting of BytesRef/VersionValue back from the tombstones to current when a tombstone is removed. I also moved the forced pruning of tombtones to commit, and now call maybePruneDeletedTombstones from refresh.\n", "created_at": "2014-06-22T10:15:48Z" }, { "body": "LGTM\n", "created_at": "2014-06-26T08:15:57Z" }, { "body": "I think along with this, we can go back to Integer.MAX_VALUE default for index.translog.flush_threshold_ops.... 
I'll commit that.\n", "created_at": "2014-06-28T09:18:15Z" }, { "body": "@mikemccand can we make the move to `INT_MAX` a separate issue?\n", "created_at": "2014-06-29T10:53:11Z" }, { "body": "I think this is ready, mike if you want another review put the review label back pls\n", "created_at": "2014-07-02T07:39:41Z" }, { "body": "Thanks Simon, I think it's ready too. I put xlog flushing back to 5000 ops ... I'll commit this soon.\n", "created_at": "2014-07-04T10:11:23Z" }, { "body": "+1\n", "created_at": "2014-07-04T10:25:12Z" } ], "number": 6443, "title": "Force refresh when versionMap is using too much RAM" }
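A minimal sketch of the trigger described in the PR above: track the version map's RAM and ask for a refresh once it crosses roughly 25% of the index writer's buffer. Only the 25% figure comes from the PR text; the class shape, field names and listener below are assumptions for illustration.

```java
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative RAM accounting that forces a refresh when the version map grows too large. */
final class VersionMapRamGuard {

    private final AtomicLong ramBytesUsed = new AtomicLong();
    private final long indexWriterBufferBytes;
    private final Runnable refreshTrigger; // stand-in for scheduling an engine refresh

    VersionMapRamGuard(long indexWriterBufferBytes, Runnable refreshTrigger) {
        this.indexWriterBufferBytes = indexWriterBufferBytes;
        this.refreshTrigger = refreshTrigger;
    }

    /** Called whenever an entry is added; asks for a refresh past 25% of the IW buffer. */
    void addEntryBytes(long bytes) {
        long used = ramBytesUsed.addAndGet(bytes);
        if (used > indexWriterBufferBytes / 4) {
            refreshTrigger.run();
        }
    }

    /** Refresh clears the accounting: the entries are now visible in the index. */
    void onRefresh() {
        ramBytesUsed.set(0);
    }

    public static void main(String[] args) {
        VersionMapRamGuard guard = new VersionMapRamGuard(64 * 1024 * 1024,
                () -> System.out.println("refresh requested"));
        guard.addEntryBytes(20 * 1024 * 1024); // > 16 MB threshold -> prints "refresh requested"
        guard.onRefresh();
    }
}
```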
{ "body": "Implements exposing the IndexWriter and VersionMap RAM usage added in #6443 to the _cat endpoint. \n\nCloses #6483\n", "number": 6854, "review_comments": [ { "body": "Does this alias need to be `siwm` or something?\n", "created_at": "2014-07-14T14:27:10Z" }, { "body": "Typo: VersionMay should be VersionMap\n", "created_at": "2014-07-14T14:28:10Z" }, { "body": "Same for this alias I think. And maybe this and the above section deserve a bit of code deduplicate. They look pretty much the same but its a wall of text.\n", "created_at": "2014-07-14T14:28:44Z" }, { "body": "Good catch, @nik9000! Will fix\n", "created_at": "2014-07-14T14:47:42Z" } ], "title": "Expose IndexWriter and VersionMap RAM usage" }
{ "commits": [ { "message": "initial implementation exposing indexWriter and VersionMap ram usage" }, { "message": "Working tests; incorporated feedback" }, { "message": "added docs" } ], "files": [ { "diff": "@@ -179,4 +179,8 @@ operations |9\n |`segments.count` |`sc`, `segmentsCount` |No |Number of segments |4\n |`segments.memory` |`sm`, `segmentsMemory` |No |Memory used by\n segments |1.4kb\n+|`segments.index_writer_memory` |`siwm`, `segmentsIndexWriterMemory` |No\n+|Memory used by index writer |1.2kb\n+|`segments.version_map_memory` |`svmm`, `segmentsVersionMapMemory` |No\n+|Memory used by version map |1.0kb\n |=======================================================================", "filename": "docs/reference/cat/nodes.asciidoc", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.engine;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.io.stream.Streamable;\n@@ -36,6 +37,8 @@ public class SegmentsStats implements Streamable, ToXContent {\n \n private long count;\n private long memoryInBytes;\n+ private long indexWriterMemoryInBytes;\n+ private long versionMapMemoryInBytes;\n \n public SegmentsStats() {\n \n@@ -46,11 +49,21 @@ public void add(long count, long memoryInBytes) {\n this.memoryInBytes += memoryInBytes;\n }\n \n+ public void addIndexWriterMemoryInBytes(long indexWriterMemoryInBytes) {\n+ this.indexWriterMemoryInBytes += indexWriterMemoryInBytes;\n+ }\n+\n+ public void addVersionMapMemoryInBytes(long versionMapMemoryInBytes) {\n+ this.versionMapMemoryInBytes += versionMapMemoryInBytes;\n+ }\n+\n public void add(SegmentsStats mergeStats) {\n if (mergeStats == null) {\n return;\n }\n add(mergeStats.count, mergeStats.memoryInBytes);\n+ addIndexWriterMemoryInBytes(mergeStats.indexWriterMemoryInBytes);\n+ addVersionMapMemoryInBytes(mergeStats.versionMapMemoryInBytes);\n }\n \n /**\n@@ -71,6 +84,28 @@ public ByteSizeValue getMemory() {\n return new ByteSizeValue(memoryInBytes);\n }\n \n+ /**\n+ * Estimation of the memory usage by index writer\n+ */\n+ public long getIndexWriterMemoryInBytes() {\n+ return this.indexWriterMemoryInBytes;\n+ }\n+\n+ public ByteSizeValue getIndexWriterMemory() {\n+ return new ByteSizeValue(indexWriterMemoryInBytes);\n+ }\n+\n+ /**\n+ * Estimation of the memory usage by version map\n+ */\n+ public long getVersionMapMemoryInBytes() {\n+ return this.versionMapMemoryInBytes;\n+ }\n+\n+ public ByteSizeValue getVersionMapMemory() {\n+ return new ByteSizeValue(versionMapMemoryInBytes);\n+ }\n+\n public static SegmentsStats readSegmentsStats(StreamInput in) throws IOException {\n SegmentsStats stats = new SegmentsStats();\n stats.readFrom(in);\n@@ -82,6 +117,8 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.startObject(Fields.SEGMENTS);\n builder.field(Fields.COUNT, count);\n builder.byteSizeField(Fields.MEMORY_IN_BYTES, Fields.MEMORY, memoryInBytes);\n+ builder.byteSizeField(Fields.INDEX_WRITER_MEMORY_IN_BYTES, Fields.INDEX_WRITER_MEMORY, indexWriterMemoryInBytes);\n+ builder.byteSizeField(Fields.VERSION_MAP_MEMORY_IN_BYTES, Fields.VERSION_MAP_MEMORY, versionMapMemoryInBytes);\n builder.endObject();\n return builder;\n }\n@@ -91,17 +128,29 @@ static final class Fields {\n static final XContentBuilderString COUNT = new XContentBuilderString(\"count\");\n static final XContentBuilderString MEMORY = new XContentBuilderString(\"memory\");\n 
static final XContentBuilderString MEMORY_IN_BYTES = new XContentBuilderString(\"memory_in_bytes\");\n+ static final XContentBuilderString INDEX_WRITER_MEMORY = new XContentBuilderString(\"index_writer_memory\");\n+ static final XContentBuilderString INDEX_WRITER_MEMORY_IN_BYTES = new XContentBuilderString(\"index_writer_memory_in_bytes\");\n+ static final XContentBuilderString VERSION_MAP_MEMORY = new XContentBuilderString(\"version_map_memory\");\n+ static final XContentBuilderString VERSION_MAP_MEMORY_IN_BYTES = new XContentBuilderString(\"version_map_memory_in_bytes\");\n }\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n count = in.readVLong();\n memoryInBytes = in.readLong();\n+ if (in.getVersion().onOrAfter(Version.V_1_3_0)) {\n+ indexWriterMemoryInBytes = in.readLong();\n+ versionMapMemoryInBytes = in.readLong();\n+ }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n out.writeVLong(count);\n out.writeLong(memoryInBytes);\n+ if (out.getVersion().onOrAfter(Version.V_1_3_0)) {\n+ out.writeLong(indexWriterMemoryInBytes);\n+ out.writeLong(versionMapMemoryInBytes);\n+ }\n }\n }\n\\ No newline at end of file", "filename": "src/main/java/org/elasticsearch/index/engine/SegmentsStats.java", "status": "modified" }, { "diff": "@@ -1136,6 +1136,8 @@ public SegmentsStats segmentsStats() {\n for (AtomicReaderContext reader : searcher.reader().leaves()) {\n stats.add(1, getReaderRamBytesUsed(reader));\n }\n+ stats.addVersionMapMemoryInBytes(versionMap.ramBytesUsed());\n+ stats.addIndexWriterMemoryInBytes(indexWriter.ramBytesUsed());\n return stats;\n } finally {\n searcher.close();", "filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java", "status": "modified" }, { "diff": "@@ -240,6 +240,12 @@ Table getTableWithHeader(final RestRequest request) {\n table.addCell(\"segments.memory\", \"sibling:pri;alias:sm,segmentsMemory;default:false;text-align:right;desc:memory used by segments\");\n table.addCell(\"pri.segments.memory\", \"default:false;text-align:right;desc:memory used by segments\");\n \n+ table.addCell(\"segments.index_writer_memory\", \"sibling:pri;alias:siwm,segmentsIndexWriterMemory;default:false;text-align:right;desc:memory used by index writer\");\n+ table.addCell(\"pri.segments.index_writer_memory\", \"default:false;text-align:right;desc:memory used by index writer\");\n+\n+ table.addCell(\"segments.version_map_memory\", \"sibling:pri;alias:svmm,segmentsVersionMapMemory;default:false;text-align:right;desc:memory used by version map\");\n+ table.addCell(\"pri.segments.version_map_memory\", \"default:false;text-align:right;desc:memory used by version map\");\n+\n table.addCell(\"warmer.current\", \"sibling:pri;alias:wc,warmerCurrent;default:false;text-align:right;desc:current warmer ops\");\n table.addCell(\"pri.warmer.current\", \"default:false;text-align:right;desc:current warmer ops\");\n \n@@ -413,6 +419,12 @@ private Table buildTable(RestRequest request, String[] indices, ClusterHealthRes\n table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getMemory());\n table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getMemory());\n \n+ table.addCell(indexStats == null ? null : indexStats.getTotal().getSegments().getIndexWriterMemory());\n+ table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getIndexWriterMemory());\n+\n+ table.addCell(indexStats == null ? 
null : indexStats.getTotal().getSegments().getVersionMapMemory());\n+ table.addCell(indexStats == null ? null : indexStats.getPrimaries().getSegments().getVersionMapMemory());\n+\n table.addCell(indexStats == null ? null : indexStats.getTotal().getWarmer().current());\n table.addCell(indexStats == null ? null : indexStats.getPrimaries().getWarmer().current());\n ", "filename": "src/main/java/org/elasticsearch/rest/action/cat/RestIndicesAction.java", "status": "modified" }, { "diff": "@@ -169,6 +169,8 @@ Table getTableWithHeader(final RestRequest request) {\n \n table.addCell(\"segments.count\", \"alias:sc,segmentsCount;default:false;text-align:right;desc:number of segments\");\n table.addCell(\"segments.memory\", \"alias:sm,segmentsMemory;default:false;text-align:right;desc:memory used by segments\");\n+ table.addCell(\"segments.index_writer_memory\", \"alias:siwm,segmentsIndexWriterMemory;default:false;text-align:right;desc:memory used by index writer\");\n+ table.addCell(\"segments.version_map_memory\", \"alias:svmm,segmentsVersionMapMemory;default:false;text-align:right;desc:memory used by version map\");\n \n table.addCell(\"suggest.current\", \"alias:suc,suggestCurrent;default:false;text-align:right;desc:number of current suggest ops\");\n table.addCell(\"suggest.time\", \"alias:suti,suggestTime;default:false;text-align:right;desc:time spend in suggest\");\n@@ -271,6 +273,8 @@ private Table buildTable(RestRequest req, ClusterStateResponse state, NodesInfoR\n \n table.addCell(stats == null ? null : stats.getIndices().getSegments().getCount());\n table.addCell(stats == null ? null : stats.getIndices().getSegments().getMemory());\n+ table.addCell(stats == null ? null : stats.getIndices().getSegments().getIndexWriterMemory());\n+ table.addCell(stats == null ? null : stats.getIndices().getSegments().getVersionMapMemory());\n \n table.addCell(stats == null ? null : stats.getIndices().getSuggest().getCurrent());\n table.addCell(stats == null ? null : stats.getIndices().getSuggest().getTime());", "filename": "src/main/java/org/elasticsearch/rest/action/cat/RestNodesAction.java", "status": "modified" }, { "diff": "@@ -145,6 +145,8 @@ Table getTableWithHeader(final RestRequest request) {\n \n table.addCell(\"segments.count\", \"alias:sc,segmentsCount;default:false;text-align:right;desc:number of segments\");\n table.addCell(\"segments.memory\", \"alias:sm,segmentsMemory;default:false;text-align:right;desc:memory used by segments\");\n+ table.addCell(\"segments.index_writer_memory\", \"alias:siwm,segmentsIndexWriterMemory;default:false;text-align:right;desc:memory used by index writer\");\n+ table.addCell(\"segments.version_map_memory\", \"alias:svmm,segmentsVersionMapMemory;default:false;text-align:right;desc:memory used by version map\");\n \n table.addCell(\"warmer.current\", \"alias:wc,warmerCurrent;default:false;text-align:right;desc:current warmer ops\");\n table.addCell(\"warmer.total\", \"alias:wto,warmerTotal;default:false;text-align:right;desc:total warmer ops\");\n@@ -242,6 +244,8 @@ private Table buildTable(RestRequest request, ClusterStateResponse state, Indice\n \n table.addCell(shardStats == null ? null : shardStats.getSegments().getCount());\n table.addCell(shardStats == null ? null : shardStats.getSegments().getMemory());\n+ table.addCell(shardStats == null ? null : shardStats.getSegments().getIndexWriterMemory());\n+ table.addCell(shardStats == null ? null : shardStats.getSegments().getVersionMapMemory());\n \n table.addCell(shardStats == null ? 
null : shardStats.getWarmer().current());\n table.addCell(shardStats == null ? null : shardStats.getWarmer().total());", "filename": "src/main/java/org/elasticsearch/rest/action/cat/RestShardsAction.java", "status": "modified" }, { "diff": "@@ -193,10 +193,15 @@ public void testSegmentsStats() {\n for (int i = 0; i < 20; i++) {\n index(\"test1\", \"type1\", Integer.toString(i), \"field\", \"value\");\n index(\"test1\", \"type2\", Integer.toString(i), \"field\", \"value\");\n- client().admin().indices().prepareFlush().get();\n }\n- client().admin().indices().prepareOptimize().setWaitForMerge(true).setMaxNumSegments(1).execute().actionGet();\n+\n IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).get();\n+ assertThat(stats.getTotal().getSegments().getIndexWriterMemoryInBytes(), greaterThan(0l));\n+ assertThat(stats.getTotal().getSegments().getVersionMapMemoryInBytes(), greaterThan(0l));\n+\n+ client().admin().indices().prepareFlush().get();\n+ client().admin().indices().prepareOptimize().setWaitForMerge(true).setMaxNumSegments(1).execute().actionGet();\n+ stats = client().admin().indices().prepareStats().setSegments(true).get();\n \n assertThat(stats.getTotal().getSegments(), notNullValue());\n assertThat(stats.getTotal().getSegments().getCount(), equalTo((long)test1.totalNumShards));", "filename": "src/test/java/org/elasticsearch/indices/stats/SimpleIndexStatsTests.java", "status": "modified" } ] }
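The `V_1_3_0` checks in the SegmentsStats diff are the usual backward-compatibility gate: new trailing fields are only written when the remote node is new enough to read them, and reads must mirror writes exactly. Below is a self-contained sketch of that pattern using plain `DataInput`/`DataOutput`; the version constant and stream types are stand-ins, not the Elasticsearch `StreamInput`/`StreamOutput` API.

```java
import java.io.*;

/** Version-gated serialization: optional trailing fields are written only for new-enough peers. */
final class VersionGatedStatsSketch {

    static final int V_1_3_0 = 1_030_099; // illustrative packed version id
    long count;
    long indexWriterMemoryInBytes;

    void writeTo(DataOutput out, int peerVersion) throws IOException {
        out.writeLong(count);
        if (peerVersion >= V_1_3_0) {          // older peers never see the new field
            out.writeLong(indexWriterMemoryInBytes);
        }
    }

    void readFrom(DataInput in, int peerVersion) throws IOException {
        count = in.readLong();
        if (peerVersion >= V_1_3_0) {          // must mirror writeTo exactly
            indexWriterMemoryInBytes = in.readLong();
        }
    }

    public static void main(String[] args) throws IOException {
        VersionGatedStatsSketch stats = new VersionGatedStatsSketch();
        stats.count = 4;
        stats.indexWriterMemoryInBytes = 1_200;

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        stats.writeTo(new DataOutputStream(bytes), V_1_3_0);

        VersionGatedStatsSketch copy = new VersionGatedStatsSketch();
        copy.readFrom(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())), V_1_3_0);
        System.out.println(copy.indexWriterMemoryInBytes); // 1200
    }
}
```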
{ "body": "Hi,\n\nI'm working with histogram aggregation but there is something strange with keys.\nFor instance (cf : http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-histogram-aggregation.html):\n\nIf I use this request :\n\n``` json\n{\n \"aggs\" : {\n \"prices\" : {\n \"histogram\" : {\n \"field\" : \"price\",\n \"interval\" : 50\n }\n }\n }\n}\n```\n\nI obtain something like this :\n\n``` json\n{\n \"aggregations\": {\n \"prices\" : {\n \"buckets\": [\n {\n \"key_as_string\" : \"0\",\n \"key\": 0,\n \"doc_count\": 2\n },\n {\n \"key_as_string\" : \"50\",\n \"key\": 50,\n \"doc_count\": 4\n },\n {\n \"key_as_string\" : \"150\",\n \"key\": 150,\n \"doc_count\": 3\n }\n ]\n }\n }\n}\n```\n\nInstead of :\n\n``` json\n{\n \"aggregations\": {\n \"prices\" : {\n \"buckets\": [\n {\n \"key\": 0,\n \"doc_count\": 2\n },\n {\n \"key\": 50,\n \"doc_count\": 4\n },\n {\n \"key\": 150,\n \"doc_count\": 3\n }\n ]\n }\n }\n}\n```\n\nYou could say, it's not important but it generates json ~1/3 bigger...\nIs there a mean to disable this ???\n\nMoreover, in Elasticsearch Java API, it could be fine to have a method to request the response as a hash instead keyed by the buckets keys (cf :http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-histogram-aggregation.html#_response_format)\n\nThanks!!!\n", "comments": [ { "body": "There is currently no way to disable `key_as_string`.\n\n> Moreover, in Elasticsearch Java API, it could be fine to have a method to request the response as a hash instead keyed by the buckets keys\n\nI am not sure to get what you mean. The Java API already has hash-like access to the buckets via the `getBucketByKey` method, is it what you are looking for?\n", "created_at": "2014-06-30T15:18:49Z" }, { "body": "this is a bug... the `key_as_string` should only be there if format is explicitly specified (like in all other aggs)\n", "created_at": "2014-07-01T01:14:56Z" }, { "body": "Thanks jpountz for having format my post ;)\nThanks both for your answers.\n\nI meant it's not possible to ask ES for a JSON like this :\n\n``` json\n{\n \"aggregations\": {\n \"prices\": {\n \"buckets\": {\n \"0\": {\n \"key\": 0,\n \"doc_count\": 2\n },\n \"50\": {\n \"key\": 50,\n \"doc_count\": 4\n },\n \"150\": {\n \"key\": 150,\n \"doc_count\": 3\n }\n }\n }\n }\n}\n```\n\nYes it's possible to access the Json with getBucketByKey but the JSON is like this : \n\n``` json\n{\n \"aggregations\": {\n \"prices\" : {\n \"buckets\": [\n {\n \"key_as_string\" : \"0\",\n \"key\": 0,\n \"doc_count\": 2\n },\n {\n \"key_as_string\" : \"50\",\n \"key\": 50,\n \"doc_count\": 4\n },\n {\n \"key_as_string\" : \"150\",\n \"key\": 150,\n \"doc_count\": 3\n }\n ]\n }\n }\n}\n```\n\nFurthermore if it's a bug, why close that post?\n", "created_at": "2014-07-01T08:00:03Z" }, { "body": "> Furthermore if it's a bug, why close that post?\n\nBecause this post is still open, and the closed post is a duplicate.\n", "created_at": "2014-07-01T14:20:33Z" }, { "body": "Oups... My Bad.... :s\nSorry\n", "created_at": "2014-07-01T14:28:53Z" } ], "number": 6655, "title": "Aggregations: Histogram Aggregation key Bug" }
{ "body": "The key as string field in the response for the histogram aggregation will now only show if format is specified on the request.\n\nCloses #6655\n", "number": 6830, "review_comments": [], "title": "Fixed Histogram key_as_string bug" }
{ "commits": [], "files": [] }
{ "body": "I index a text field with type `token_count`. When indexing, this creates an additional field that holds the number of tokens in the text field.\nWhen a document is retrieved from the transaction log (because no flush happened yet), and I want to get the `token_count` of my text field, I would assume that the `token_count` field is simply not retrieved, because it does not exist yet. Instead I get a `NumberFormatException`.\n\nHere are the steps to reproduce:\n\n```\nDELETE testidx\n\nPUT testidx\n{\n \"settings\": {\n \"index.translog.disable_flush\": true,\n \"index.number_of_shards\": 1,\n \"refresh_interval\": \"1h\"\n },\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"text\": {\n \"fields\": {\n \"word_count\": {\n \"type\": \"token_count\",\n \"store\": \"yes\",\n \"analyzer\": \"standard\"\n }\n },\n \"type\": \"string\"\n }\n }\n }\n }\n}\n\nPUT testidx/doc/1\n{\n \"text\": \"some text\"\n}\n\n#ok, get document from translog\nGET testidx/doc/1?realtime=true\n#ok, get document from index but it is not there yet\nGET testidx/doc/1?realtime=false\n# try to get the document from translog but also field text.word_count which is not there yet: NumberFormatException\nGET testidx/doc/1?fields=text.word_count&realtime=true\n\n```\n", "comments": [ { "body": "Here is what happens:\n\nFor multi-fields, the parent field is returned instead of `null` if a sub-field is requested. For the example above, when getting `text.word_count`, `text` is retrieved from the source and returned.\nWe could prevent this easily like this: 7f522fbd9542\n\nHowever, the FastVectorHighlighter relies on this functionality to highlight on multi-fields (see [here](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/highlight/vectorhighlight/SourceSimpleFragmentsBuilder.java#L55)), so this is not really a solution unless we want to prevent highlighting with the FastVectorHighlighter on multi-fields.\n\nThe other option is to simply catch the NumberFormatException and handle it like here: be999b1042\n", "created_at": "2014-07-04T16:12:50Z" }, { "body": "@brwe what is the status of this?\n", "created_at": "2014-07-09T10:30:41Z" }, { "body": "@s1monw Need to write more tests, did not get to it yet. Will continue Friday. \n", "created_at": "2014-07-09T11:56:07Z" }, { "body": "cool ok but it's going to be ready for 1.3 right?\n", "created_at": "2014-07-09T12:02:56Z" }, { "body": "depends on when the release is\n", "created_at": "2014-07-09T12:21:02Z" }, { "body": "A field of type `murmur3` actually has the same issue. In addition, if `murmur3` and `token_count` fields are not stored, GET will also return NumberFormatException after refresh, example below.\n\n```\nDELETE testidx\nPUT testidx\n{\n \"settings\": {\n \"index.translog.disable_flush\": true,\n \"index.number_of_shards\": 1,\n \"refresh_interval\": \"1h\"\n },\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"token_count\": {\n \"type\": \"token_count\",\n \"analyzer\": \"standard\"\n },\n \"murmur\": {\n \"type\": \"murmur3\"\n }\n }\n }\n }\n}\n\nPOST testidx/doc/1\n{\n \"murmur\": \"Some value that can be hashed\",\n \"token_count\": \"A text with five words.\"\n}\n\nGET testidx/doc/1?routing=2&fields=murmur,token_count\n\nPOST testidx/_refresh\n\nGET testidx/doc/1?routing=2&fields=murmur,token_count\n\n```\n", "created_at": "2014-07-18T19:07:00Z" }, { "body": "Following the discussion on pull request #6826 I checked all field mappers and tried to figure out what they should return upon GET. 
I will call a field \"generated\" if the content is only available after indexing. \nWe discussed that we either throw a meaningful exception if the field is generated or ignore the field silently if a (new) parameter is set with the GET request. `FieldMapper` should get a new method `isGenerated()` which indicates weather the field will be found in the source or not.\n\nHere is what I think we should do:\n\nFor some core field types (`integer, float, string,...`), the behavior (`isGenerated()` returns `true` or `false`) should be configurable. The reason is that a different mapper might use them and store generated data in them. The Mapper attachment plugin does that: Fields like author (`string`), content_type (`string`) etc. are only available after tika parsing. \n\nThere are currently four field types (detailed list below):\n1. Fields that should not be configurable, because they are always generated\n2. Fields that not be configurable because they are never generated\n3. Fields that should not be configurable because they are never stored\n4. Fields that should be configurable\n\nFor 1-3 we simply have to implement `isGenerated()` accordingly.\n\nTo make the fields configurable we could add a parameter `\"is_generated\"` to the mapping which steers the behavior.\n\nPro: would be easy to do and also allow different types in plugin to very easily use the feature.\n\nCon: This would allow users to set `\"is_generated\"` accidentally - fields that are accessible via source would then still cause an exception if requested via GET while the document is not yet indexed\n\nFor fields that are not configurable, the parameter `\"is_generated\"` could be ignored without warning like so many other parameters.\n\nList of types and their category:\n\nThere is core types, root types, geo an ip.\n\n#### Core types\n\nThese should be configurable:\n\n```\nIntegerFieldMapper.java\nShortFieldMapper.java\nBinaryFieldMapper.java \nDateFieldMapper.java \nLongFieldMapper.java \nStringFieldMapper.java\nBooleanFieldMapper.java \nDoubleFieldMapper.java \n```\n\nThe following two should not be configurable because they are always generated:\n\n```\nMurmur3FieldMapper.java \nTokenCountFieldMapper.java\n```\n\nThis should not be configurable because it is never stored:\n\n```\nCompletionFieldMapper.java\n```\n\n#### ip an geo\n\nShould be configurable:\n\n```\nGeoPointFieldMapper.java \nGeoShapeFieldMapper.java\nIpFieldMapper.java\n```\n\n#### root types\n\nNever generated and should not be configurable:\n\n```\nRoutingFieldMapper.java \nTimestampFieldMapper.java\nIdFieldMapper.java \nSizeFieldMapper.java \nTypeFieldMapper.java\nBoostFieldMapper.java \nIndexFieldMapper.java \nSourceFieldMapper.java \nParentFieldMapper.java \nTTLFieldMapper.java \nVersionFieldMapper.java\n```\n\nAlways generated and should not be configurable:\n\n```\nAllFieldMapper.java \nFieldNamesFieldMapper.java \n```\n\nThe following should not be configurable, because they are never stored:\n\n```\nAnalyzerMapper.java \nUidFieldMapper.java\n```\n", "created_at": "2014-07-18T19:22:02Z" }, { "body": "hmpf. while writing tests I figured there are actually more cases to consider. 
will update soon...\n", "created_at": "2014-07-19T14:13:01Z" }, { "body": "There are two numeric fields that are currently generated (`Murmur3FieldMapper.java` and `TokenCountFieldMapper.java`) and two string fields (`AllFieldMapper.java` and `FieldNamesFieldMapper.java` ).\n\nThese should only be returned with GET (`fields=...`) if set to `stored` and not retuned if not `stored` regardless of if source is enabled or not (this was not so, see example above). If refresh has not been called between indexing and GET then this should cause an Exception unless `ignore_errors_on_generated_fields=true` (working title) is set with the GET request.\nUntil now `_all` and `_field_names` where silently ignored and getting the numeric fields caused a `NumberFormatException`.\n\nI am now unsure if we should make the core types configurable. By configurable, I actually meant adding a parameter to the type mapping such as\n\n```\n{\n type: string,\n is_generated: true/false\n ...\n}\n```\n\nI'll make a pull request without that and then maybe we can discuss further. \n\nJust for completeness, below is a list of all ungenerated field types and how they behave with GET.\n\n---\n\n## Fields with fixed behavior:\n\nNever stored -> should never be returned via GET\n\n`CompletionFieldMapper` \n\nAlways stored -> should always be returned via GET\n\n```\nParentFieldMapper.java \nTTLFieldMapper.java \n```\n\nStored or source enabled -> always return via GET, else never return\n\n```\nBoostFieldMapper.java \n```\n\nStored (but independent of source) -> always return via GET, else never return\n\n```\nTimestampFieldMapper.java\nSizeFieldMapper.java \nRoutingFieldMapper.java \n```\n\n## Fields that might be configurable\n\n```\nIntegerFieldMapper.java\nShortFieldMapper.java\nBinaryFieldMapper.java \nDateFieldMapper.java \nLongFieldMapper.java \nStringFieldMapper.java\nBooleanFieldMapper.java \nDoubleFieldMapper.java \nGeoPointFieldMapper.java \nGeoShapeFieldMapper.java\nIpFieldMapper.java\n```\n\n## Special fields which can never be in the \"fields\" list returned by GET anyway\n\n```\nIdFieldMapper.java \nTypeFieldMapper.java\nIndexFieldMapper.java \nSourceFieldMapper.java \nVersionFieldMapper.java\nAnalyzerMapper.java \nUidFieldMapper.java\n```\n", "created_at": "2014-07-23T07:54:42Z" } ], "number": 6676, "title": "GET: Add parameter to GET for checking if generated fields can be retrieved" }
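As a compact illustration of the behaviour the comments converge on: "generated" fields only exist after indexing, so a real-time GET served from the translog should either fail with a clear message or skip them when a flag such as `ignore_errors_on_generated_fields` (the working title above) is set. Everything below is an illustrative stand-in rather than the actual `ShardGetService` code.

```java
import java.util.*;

/** Illustrative handling of "generated" fields during a real-time (translog-backed) GET. */
final class GeneratedFieldGetSketch {

    /** Field types whose content only exists after indexing, per the discussion above. */
    static final Set<String> GENERATED_TYPES =
            new HashSet<>(Arrays.asList("token_count", "murmur3", "_all", "_field_names"));

    static Optional<Object> fetchFromTranslog(String fieldType, Object sourceValue,
                                              boolean ignoreErrorsOnGeneratedFields) {
        if (GENERATED_TYPES.contains(fieldType)) {
            if (ignoreErrorsOnGeneratedFields) {
                return Optional.empty(); // silently skip instead of mis-parsing the parent value
            }
            throw new IllegalStateException(
                    "field of type [" + fieldType + "] is only available after a refresh");
        }
        return Optional.ofNullable(sourceValue);
    }

    public static void main(String[] args) {
        System.out.println(fetchFromTranslog("string", "some text", false));    // Optional[some text]
        System.out.println(fetchFromTranslog("token_count", "some text", true)); // Optional.empty
    }
}
```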
{ "body": "If a field is a multi field and requested with a GET request then\nthis will return the value of the parent field in case the document\nis retrieved from the transaction log.\nIf the type is numeric but the value a string, the numeric value will\nbe parsed. This caused the NumberFprmatException in case the field\nis of type `token_count`.\n\ncloses #6676\n", "number": 6826, "review_comments": [], "title": "Catch the NumberFormatException and handle it in GET" }
{ "commits": [ { "message": "Catch the NumberFormatException and handle it in GET\n\nIf a field is a multi field and requested with a GET request then\nthis will return the value of the parent field in case the document\nis retrieved from the transaction log.\nIf the type is numeric but the value a string, the numeric value will\nbe parsed. This caused the NumberFprmatException in case the field\nis of type `token_count`.\n\ncloses #6676" } ], "files": [ { "diff": "@@ -254,7 +254,12 @@ public GetResult innerGet(String type, String id, String[] gFields, boolean real\n List<Object> values = searchLookup.source().extractRawValues(field);\n if (!values.isEmpty()) {\n for (int i = 0; i < values.size(); i++) {\n- values.set(i, x.valueForSearch(values.get(i)));\n+ try {\n+ values.set(i, x.valueForSearch(values.get(i)));\n+ } catch (NumberFormatException e) {\n+ values = null;\n+ break;\n+ }\n }\n value = values;\n }", "filename": "src/main/java/org/elasticsearch/index/get/ShardGetService.java", "status": "modified" }, { "diff": "@@ -33,12 +33,14 @@\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.engine.VersionConflictEngineException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.Map;\n \n import static org.elasticsearch.client.Requests.clusterHealthRequest;\n@@ -885,4 +887,49 @@ public void testGetFields_complexField() throws Exception {\n assertThat(getResponse.getField(field).getValues().get(1).toString(), equalTo(\"value2\"));\n }\n \n+ @Test\n+ public void testRealTimeGet() throws IOException {\n+ XContentBuilder mappingBuilder = jsonBuilder()\n+ .startObject()\n+ .startObject(\"test\")\n+ .startObject(\"properties\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"multi_field\")\n+ .startObject(\"fields\")\n+ .startObject(\"foo\")\n+ .field(\"type\", \"string\")\n+ .field(\"store\", true)\n+ .field(\"analyzer\", \"simple\")\n+ .endObject()\n+ .startObject(\"token_count\")\n+ .field(\"type\", \"token_count\")\n+ .field(\"analyzer\", \"standard\")\n+ .field(\"store\", true)\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ logger.info(mappingBuilder.string());\n+ prepareCreate(\"test\").addMapping(\"test\",mappingBuilder\n+ )\n+ .setSettings(\n+ jsonBuilder().startObject()\n+ .field(\"index.translog.disable_flush\", true)\n+ .field(\"refresh_interval\", \"1h\")\n+ .endObject()\n+ ).get();\n+ ensureGreen();\n+ client().prepareIndex(\"test\", \"test\", \"1\").setSource(jsonBuilder().startObject().field(\"foo\", \"bar\").endObject()).get();\n+ GetResponse response = client().prepareGet().setIndex(\"test\").setType(\"test\").setId(\"1\").setFields(\"foo.token_count\").get();\n+ assertNull(response.getField(\"foo.token_count\"));\n+ assertThat(response.getId(), equalTo(\"1\"));\n+ assertTrue(response.isExists());\n+ flush();\n+ response = client().prepareGet().setIndex(\"test\").setType(\"test\").setId(\"1\").setFields(\"foo.token_count\").get();\n+ assertThat(response.getId(), equalTo(\"1\"));\n+ assertTrue(response.isExists());\n+ assertNotNull(response.getField(\"foo.token_count\"));\n+ }\n }", "filename": 
"src/test/java/org/elasticsearch/get/GetActionTests.java", "status": "modified" } ] }