issue (dict) | pr (dict) | pr_details (dict) |
---|---|---|
{
"body": "I was experimenting a bit with plugins and created a test plugin and called it \"es-plugin\". Maybe this was not the best choice for a name but i did it.\n\nIf you call `bin/plugin --remove es-plugin` the plugin got removed but the file `bin/plugin` itself was also deleted.\n",
"comments": [
{
"body": "o_O thanks for reporting it!\nWill fix that! Thanks\n",
"created_at": "2014-07-05T13:59:45Z"
},
{
"body": "@dadoonet what is the status here?\n",
"created_at": "2014-07-09T10:19:31Z"
},
{
"body": "@s1monw I checked it and I need to protect all known `bin/*` files:\n- elasticsearch\n- elasticsearch-service-x86.exe\n- plugin\n- elasticsearch-service-mgr.exe\n- elasticsearch.bat\n- plugin.bat\n- elasticsearch-service-x64.exe\n- elasticsearch.in.sh\n- service.bat\n\nMy first intention was to remove the support of installing plugins in `bin` dir but it sounds like we still need it.\n\nWill send a PR probably on friday.\n",
"created_at": "2014-07-09T18:15:05Z"
},
{
"body": "@s1monw PR opened here: #6817 waiting for review. Thanks!\n",
"created_at": "2014-07-10T17:41:16Z"
}
],
"number": 6745,
"title": "`bin/plugin` removes itself"
} | {
"body": "If you call `bin/plugin --remove es-plugin` the plugin got removed but the file `bin/plugin` itself was also deleted.\n\nWe now don't allow the following plugin names:\n- elasticsearch\n- elasticsearch-service-x86.exe\n- plugin\n- elasticsearch-service-mgr.exe\n- elasticsearch.bat\n- plugin.bat\n- elasticsearch-service-x64.exe\n- elasticsearch.in.sh\n- service.bat\n\nCloses #6745\n",
"number": 6817,
"review_comments": [
{
"body": "Suggest making this an ordered array/loop in the interest of maintainability:\n\n``` java\nboolean forbidden = Strings.isNullOrEmpty(name);\n\nString[] forbiddenNames = {\n \"elasticsearch\",\n \"elasticsearch.bat\",\n \"elasticsearch.in.sh\",\n \"elasticsearch-service-x86.exe\",\n \"elasticsearch-service-x64.exe\",\n \"elasticsearch-service-mgr.exe\",\n \"plugin\",\n \"plugin.bat\",\n \"service.bat\"\n};\n\nfor (String forbiddenName : forbiddenNames) {\n if (forbiddenName.equalsIgnoreCase(name)) {\n forbidden = true;\n break;\n }\n}\n\nif (forbidden) {\n throw new ElasticsearchIllegalArgumentException(\"This plugin name is not allowed\");\n}\n```\n",
"created_at": "2014-07-11T01:17:00Z"
},
{
"body": "can we put them all in a `private static final Set<String> BLACKLIST` and then just do \n\n``` Java\nif (Strings.hasLength(name) && BLACKLIST.contains(name.toLowerCase(Locale.ROOT))) {\n throw new ElasticsearchIllegalArgumentException(\"Illegal plugin name: \" + name);\n}\n```\n",
"created_at": "2014-07-15T10:36:25Z"
},
{
"body": "Good point. To ensure that `null` and blanks are captured, then we can do\n\n``` java\nif ( ! Strings.hasText(name) || BLACKLIST.contains(name.toLowerCase(Locale.ROOT))) {\n throw new ...\n}\n```\n",
"created_at": "2014-07-15T15:14:51Z"
}
],
"title": "bin/plugin removes itself"
} | {
"commits": [
{
"message": "bin/plugin removes itself\n\nIf you call `bin/plugin --remove es-plugin` the plugin got removed but the file `bin/plugin` itself was also deleted.\n\nWe now don't allow the following plugin names:\n\n* elasticsearch\n* elasticsearch-service-x86.exe\n* plugin\n* elasticsearch-service-mgr.exe\n* elasticsearch.bat\n* plugin.bat\n* elasticsearch-service-x64.exe\n* elasticsearch.in.sh\n* service.bat\n\nCloses #6745"
},
{
"message": "Update After review + adding some tests"
},
{
"message": "Update after review"
}
],
"files": [
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.plugins;\n \n import com.google.common.base.Strings;\n+import com.google.common.collect.ImmutableSet;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.ElasticsearchTimeoutException;\n@@ -46,6 +47,7 @@\n import java.util.zip.ZipEntry;\n import java.util.zip.ZipFile;\n \n+import static org.elasticsearch.common.Strings.hasLength;\n import static org.elasticsearch.common.settings.ImmutableSettings.Builder.EMPTY_SETTINGS;\n \n /**\n@@ -66,6 +68,14 @@ public enum OutputMode {\n // By default timeout is 0 which means no timeout\n public static final TimeValue DEFAULT_TIMEOUT = TimeValue.timeValueMillis(0);\n \n+ private static final ImmutableSet<Object> BLACKLIST = ImmutableSet.builder()\n+ .add(\"elasticsearch\",\n+ \"elasticsearch.bat\",\n+ \"elasticsearch.in.sh\",\n+ \"plugin\",\n+ \"plugin.bat\",\n+ \"service.bat\").build();\n+\n private final Environment environment;\n \n private String url;\n@@ -123,6 +133,8 @@ public void downloadAndExtract(String name) throws IOException {\n }\n \n PluginHandle pluginHandle = PluginHandle.parse(name);\n+ checkForForbiddenName(pluginHandle.name);\n+\n File pluginFile = pluginHandle.distroFile(environment);\n // extract the plugin\n File extractLocation = pluginHandle.extractedDir(environment);\n@@ -241,10 +253,7 @@ public void removePlugin(String name) throws IOException {\n PluginHandle pluginHandle = PluginHandle.parse(name);\n boolean removed = false;\n \n- if (Strings.isNullOrEmpty(pluginHandle.name)) {\n- throw new ElasticsearchIllegalArgumentException(\"plugin name is incorrect\");\n- }\n-\n+ checkForForbiddenName(pluginHandle.name);\n File pluginToDelete = pluginHandle.extractedDir(environment);\n if (pluginToDelete.exists()) {\n debug(\"Removing: \" + pluginToDelete.getPath());\n@@ -279,6 +288,12 @@ public void removePlugin(String name) throws IOException {\n }\n }\n \n+ private static void checkForForbiddenName(String name) {\n+ if (!hasLength(name) || BLACKLIST.contains(name.toLowerCase(Locale.ROOT))) {\n+ throw new ElasticsearchIllegalArgumentException(\"Illegal plugin name: \" + name);\n+ }\n+ }\n+\n public File[] getListInstalledPlugins() {\n File[] plugins = environment.pluginsFile().listFiles();\n return plugins;",
"filename": "src/main/java/org/elasticsearch/plugins/PluginManager.java",
"status": "modified"
},
{
"diff": "@@ -210,7 +210,7 @@ public void testInstallPluginNull() throws IOException {\n public void testInstallPlugin() throws IOException {\n PluginManager pluginManager = pluginManager(getPluginUrlForResource(\"plugin_with_classfile.zip\"));\n \n- pluginManager.downloadAndExtract(\"plugin\");\n+ pluginManager.downloadAndExtract(\"plugin-classfile\");\n File[] plugins = pluginManager.getListInstalledPlugins();\n assertThat(plugins, notNullValue());\n assertThat(plugins.length, is(1));\n@@ -332,6 +332,31 @@ public void testRemovePluginWithURLForm() throws Exception {\n pluginManager.removePlugin(\"file://whatever\");\n }\n \n+ @Test\n+ public void testForbiddenPluginName_ThrowsException() throws IOException {\n+ runTestWithForbiddenName(null);\n+ runTestWithForbiddenName(\"\");\n+ runTestWithForbiddenName(\"elasticsearch\");\n+ runTestWithForbiddenName(\"elasticsearch.bat\");\n+ runTestWithForbiddenName(\"elasticsearch.in.sh\");\n+ runTestWithForbiddenName(\"plugin\");\n+ runTestWithForbiddenName(\"plugin.bat\");\n+ runTestWithForbiddenName(\"service.bat\");\n+ runTestWithForbiddenName(\"ELASTICSEARCH\");\n+ runTestWithForbiddenName(\"ELASTICSEARCH.IN.SH\");\n+ }\n+\n+ private void runTestWithForbiddenName(String name) throws IOException {\n+ try {\n+ pluginManager(null).removePlugin(name);\n+ fail(\"this plugin name [\" + name +\n+ \"] should not be allowed\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ // We expect that error\n+ }\n+ }\n+\n+\n /**\n * Retrieve a URL string that represents the resource with the given {@code resourceName}.\n * @param resourceName The resource name relative to {@link PluginManagerTests}.",
"filename": "src/test/java/org/elasticsearch/plugin/PluginManagerTests.java",
"status": "modified"
}
]
} |
{
"body": "The DateHistogramBuilder stores and builds the DateHistogram [with a `long` value for the `pre_offset` and `post_offset`](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramBuilder.java#L44), which is neither what the [API docs specify](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html#_pre_post_offset_2) (which specify the format is the data format `1s`, `2d`, etc.) nor what the [DateHistogramParser expect](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramParser.java#L122).\n\nThis forces the improper construction of DateHistogram requests when using the Java API to construct queries. Both the `preOffset` and `postOffset` variables should be converted to `Strings`.\n",
"comments": [
{
"body": "I experience the same error so +1 on fixing this.\n",
"created_at": "2014-05-06T14:56:31Z"
}
],
"number": 5586,
"title": "Aggregations: DateHistogramBuilder uses wrong data type for pre_offset and post_offset"
} | {
"body": "...set as Strings\n\nThis is what DateHistogramParser expects so will enable the builder to build valid requests using these variables.\nAlso added tests for preOffset and postOffset since these tests did not exist\n\nCloses #5586\n",
"number": 6814,
"review_comments": [],
"title": "Aggregations: Fixed DateHistogramBuilder to accept preOffset and postOff..."
} | {
"commits": [
{
"message": "Aggregations: DateHistogramBuilder accepts String preOffset and postOffset\n\nThis is what DateHistogramParser expects so will enable the builder to build valid requests using these variables.\nAlso added tests for preOffset and postOffset since these tests did not exist\n\nCloses #5586"
}
],
"files": [
{
"diff": "@@ -41,8 +41,8 @@ public class DateHistogramBuilder extends ValuesSourceAggregationBuilder<DateHis\n private String postZone;\n private boolean preZoneAdjustLargeInterval;\n private String format;\n- long preOffset = 0;\n- long postOffset = 0;\n+ private String preOffset;\n+ private String postOffset;\n float factor = 1.0f;\n \n public DateHistogramBuilder(String name) {\n@@ -84,12 +84,12 @@ public DateHistogramBuilder preZoneAdjustLargeInterval(boolean preZoneAdjustLarg\n return this;\n }\n \n- public DateHistogramBuilder preOffset(long preOffset) {\n+ public DateHistogramBuilder preOffset(String preOffset) {\n this.preOffset = preOffset;\n return this;\n }\n \n- public DateHistogramBuilder postOffset(long postOffset) {\n+ public DateHistogramBuilder postOffset(String postOffset) {\n this.postOffset = postOffset;\n return this;\n }\n@@ -153,11 +153,11 @@ protected XContentBuilder doInternalXContent(XContentBuilder builder, Params par\n builder.field(\"pre_zone_adjust_large_interval\", true);\n }\n \n- if (preOffset != 0) {\n+ if (preOffset != null) {\n builder.field(\"pre_offset\", preOffset);\n }\n \n- if (postOffset != 0) {\n+ if (postOffset != null) {\n builder.field(\"post_offset\", postOffset);\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramBuilder.java",
"status": "modified"
},
{
"diff": "@@ -1051,6 +1051,76 @@ public void singleValue_WithPreZone() throws Exception {\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n \n+ @Test\n+ public void singleValue_WithPreOffset() throws Exception {\n+ prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n+ date = date.plusHours(1);\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .preOffset(\"-2h\")\n+ .interval(DateHistogram.Interval.DAY)\n+ .format(\"yyyy-MM-dd\"))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(2));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-10\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+\n+ bucket = histo.getBucketByKey(\"2014-03-11\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(3l));\n+ }\n+\n+ @Test\n+ public void singleValue_WithPostOffset() throws Exception {\n+ prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n+ date = date.plusHours(6);\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .postOffset(\"2d\")\n+ .interval(DateHistogram.Interval.DAY)\n+ .format(\"yyyy-MM-dd\"))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(2));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-13\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(4l));\n+\n+ bucket = histo.getBucketByKey(\"2014-03-14\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ }\n+\n @Test\n public void singleValue_WithPreZone_WithAadjustLargeInterval() throws Exception {\n prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
}
]
} |
{
"body": "We check if the version map needs to be refreshed after we released\nthe readlock which can cause the the engine being closed before we\nread the value from the volatile `indexWriter` field which can cause an\nNPE on the indexing thread. This commit also fixes a potential uncaught\nexception if the refresh failed due to the engine being already closed.\n",
"comments": [
{
"body": "This relates to #6443\n",
"created_at": "2014-07-08T19:51:24Z"
},
{
"body": "left one small comment, other than that, LGTM\n",
"created_at": "2014-07-08T19:58:50Z"
},
{
"body": "agreed... pushed a new commit\n",
"created_at": "2014-07-08T20:02:22Z"
}
],
"number": 6786,
"title": "Prevent NPE if engine is closed while version map is checked"
} | {
"body": "We run it out of lock, the indexWriter may be closed..\n\nRelates to #6443, #6786\n",
"number": 6794,
"review_comments": [],
"title": "[Engine] checkVersionMapRefresh shouldn't use indexWriter.getConfig()"
} | {
"commits": [
{
"message": "[Engine] checkVersionMapRefresh must be called under a read lock\n\nThe operation looks at indexWriter.getConfig(), which can throw a `org.apache.lucene.store.AlreadyClosedException` if the engine is already closed.\n\nRelates to #6443, #6786"
},
{
"message": "Don't use indexWriter.getConfig in checkVersionMapRefresh"
}
],
"files": [
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.index.engine.internal;\n \n import com.google.common.collect.Lists;\n-\n import org.apache.lucene.index.*;\n import org.apache.lucene.index.IndexWriter.IndexReaderWarmer;\n import org.apache.lucene.search.IndexSearcher;\n@@ -80,7 +79,6 @@\n import java.io.IOException;\n import java.util.*;\n import java.util.concurrent.CopyOnWriteArrayList;\n-import java.util.concurrent.RejectedExecutionException;\n import java.util.concurrent.TimeUnit;\n import java.util.concurrent.atomic.AtomicBoolean;\n import java.util.concurrent.atomic.AtomicInteger;\n@@ -389,10 +387,7 @@ public GetResult get(Get get) throws EngineException {\n public void create(Create create) throws EngineException {\n final IndexWriter writer;\n try (InternalLock _ = readLock.acquire()) {\n- writer = this.indexWriter;\n- if (writer == null) {\n- throw new EngineClosedException(shardId, failedEngine);\n- }\n+ writer = currentIndexWriter();\n try (Releasable r = throttle.acquireThrottle()) {\n innerCreate(create, writer);\n }\n@@ -403,7 +398,7 @@ public void create(Create create) throws EngineException {\n maybeFailEngine(t);\n throw new CreateFailedEngineException(shardId, create, t);\n }\n- checkVersionMapRefresh(writer);\n+ checkVersionMapRefresh();\n }\n \n private void maybeFailEngine(Throwable t) {\n@@ -485,10 +480,7 @@ private void innerCreateNoLock(Create create, IndexWriter writer, long currentVe\n public void index(Index index) throws EngineException {\n final IndexWriter writer;\n try (InternalLock _ = readLock.acquire()) {\n- writer = this.indexWriter;\n- if (writer == null) {\n- throw new EngineClosedException(shardId, failedEngine);\n- }\n+ writer = currentIndexWriter();\n try (Releasable r = throttle.acquireThrottle()) {\n innerIndex(index, writer);\n }\n@@ -499,29 +491,28 @@ public void index(Index index) throws EngineException {\n maybeFailEngine(t);\n throw new IndexFailedEngineException(shardId, index, t);\n }\n- checkVersionMapRefresh(writer);\n+ checkVersionMapRefresh();\n }\n \n- /** Forces a refresh if the versionMap is using too much RAM (currently > 25% of IndexWriter's RAM buffer).\n- * */\n- private void checkVersionMapRefresh(final IndexWriter indexWriter) {\n+ /**\n+ * Forces a refresh if the versionMap is using too much RAM (currently > 25% of IndexWriter's RAM buffer).\n+ */\n+ private void checkVersionMapRefresh() {\n // TODO: we force refresh when versionMap is using > 25% of IW's RAM buffer; should we make this separately configurable?\n- if (versionMap.ramBytesUsedForRefresh()/1024/1024. > 0.25 * indexWriter.getConfig().getRAMBufferSizeMB() && versionMapRefreshPending.getAndSet(true) == false) {\n- if (!closed) {\n- try {\n- // Now refresh to clear versionMap:\n- threadPool.executor(ThreadPool.Names.REFRESH).execute(new Runnable() {\n- public void run() {\n- try {\n- refresh(new Refresh(\"version_table_full\"));\n- } catch (EngineClosedException ex) {\n- // ignore\n- }\n- }\n- });\n- } catch (EsRejectedExecutionException ex) {\n- // that is fine too.. we might be shutting down\n- }\n+ if (versionMap.ramBytesUsedForRefresh() > 0.25 * indexingBufferSize.bytes() && versionMapRefreshPending.getAndSet(true) == false) {\n+ try {\n+ // Now refresh to clear versionMap:\n+ threadPool.executor(ThreadPool.Names.REFRESH).execute(new Runnable() {\n+ public void run() {\n+ try {\n+ refresh(new Refresh(\"version_table_full\"));\n+ } catch (EngineClosedException ex) {\n+ // ignore\n+ }\n+ }\n+ });\n+ } catch (EsRejectedExecutionException ex) {\n+ // that is fine too.. we might be shutting down\n }\n }\n }\n@@ -596,11 +587,11 @@ public void delete(Delete delete) throws EngineException {\n \n maybePruneDeletedTombstones();\n }\n- \n+\n private void maybePruneDeletedTombstones() {\n // It's expensive to prune because we walk the deletes map acquiring dirtyLock for each uid so we only do it\n // every 1/4 of gcDeletesInMillis:\n- if (enableGcDeletes && threadPool.estimatedTimeInMillis() - lastDeleteVersionPruneTimeMSec > gcDeletesInMillis*0.25) {\n+ if (enableGcDeletes && threadPool.estimatedTimeInMillis() - lastDeleteVersionPruneTimeMSec > gcDeletesInMillis * 0.25) {\n pruneDeletedTombstones();\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java",
"status": "modified"
}
]
} |
{
"body": "Consider the following repro use case:\n\n``` json\nPUT /bulkindex1\nPUT /bulkindex2\nPOST /_bulk\n{\"index\":{\"_id\":\"1\",\"_type\":\"index1_type\",\"_index\":\"bulkindex1\"}}\n{\"text\": \"hallo1\" }\n{\"index\":{\"_id\":\"1\",\"_type\":\"index2_type\",\"_index\":\"bulkindex2\"}}\n{\"text\": \"hallo2\" }\nGET /bulkindex*/_search\nPOST /bulkindex2/_close\nPOST /_bulk\n{\"index\":{\"_id\":\"1\",\"_type\":\"index1_type\",\"_index\":\"bulkindex1\"}}\n{\"text\": \"hallo1-update\" }\n{\"index\":{\"_id\":\"1\",\"_type\":\"index2_type\",\"_index\":\"bulkindex2\"}}\n{\"text\": \"hallo2\" }\n```\n\nThe 2nd bulk action will certainly fail since bulkindex2 is closed. However, when the _bulk request is submitted, ES fails the entire _bulk request:\n\n``` json\n{\n \"error\": \"IndexMissingException[[bulkindex2] missing]\",\n \"status\": 404\n}\n```\n\nExpected behavior in this case is for ES to still process the request against the other index that is available and report the 404 as part of the response for the action against bulkindex2 like the following:\n\n``` json\n{\n \"took\": 14,\n \"errors\": true,\n \"items\": [\n {\n \"index\": {\n \"_index\": \"bulkindex1\",\n \"_type\": \"index1_type\",\n \"_id\": \"1\",\n \"_version\": 3,\n \"status\": 200\n }\n },\n {\n \"index\": {\n \"_index\": \"bulkindex2\",\n \"_type\": \"index2_type\",\n \"_id\": \"1\",\n \"status\": 404,\n \"error\": \"IndexMissingException[[bulkindex2] missing]\",\n }\n }\n ]\n}\n```\n",
"comments": [
{
"body": "This issue was giving me some trouble as well, saw it was fixed in the 1.1 and 1.2 branches, see #4987.\n",
"created_at": "2014-06-12T17:00:13Z"
},
{
"body": "@tmkujala This bug is not fixed. It is not the same as #4987. That one tackles _invalid_ index names. This one talks about performing bulk operations against indices that does not exits and where the response would be 404. This of course only occurs if you have the option `action.auto_create_index` set to false. But I experience this error as well.\n",
"created_at": "2014-07-08T10:02:23Z"
},
{
"body": "@spinscale I've confirmed that this bug still exists in master.\n",
"created_at": "2014-07-08T10:11:19Z"
},
{
"body": "@clintongormley @spinscale \nI have seen a few similar issues like #4987 which had the same effect but was caused by a different error. Is there any reason why there are checks for specific errors in the bulk items and the errors that do not match those cause the full request to fail?\n\nRight now I index my documents in elasticsearch one by one but I am planing on queuing up our incoming data in redis and then bulk it in to optimize performance. The errors that could occur when those bulks are indexed cannot be known but I was hoping that if one index fails the others will not. Can I trust this to be the case?\n",
"created_at": "2014-07-09T07:32:02Z"
},
{
"body": "Has any progress been made on this? Its a huge performance show-stopper for my company.\n",
"created_at": "2014-08-04T14:01:20Z"
}
],
"number": 6410,
"title": "Bulk request against multiple indices fails on missing index instead of failing individual actions"
} | {
"body": "The bulk API request was marked as completely failed,\nin case a request with a closed index was referred in\nany of the requests inside of a bulk one.\n\nImplementation Note: Currently the implementation is a bit more verbose in order to prevent an `instanceof` check and another cast - if that is fast enough, we could execute that logic only once at the beginning of the loop (thinking this might be a bit overoptimization here).\n\nCloses #6410\n",
"number": 6790,
"review_comments": [
{
"body": "dude can we extract and interface that we can implement on all of them that allows use to return `index, type, id`?\n",
"created_at": "2014-07-09T19:37:11Z"
},
{
"body": "can we do:\n\n``` Java\n\nif (addFailureIfIndexIsClosed(updateRequest.index(), updateRequest.type(), updateRequest.id(), bulkRequest, responses, i)) {\n continue;\n}\n```\n",
"created_at": "2014-07-09T19:38:19Z"
},
{
"body": "maybe we should call it `Indexable` or `DocumentRequest` something like this and it should also have a routing or even better the common denominator of update / delete / index ie all the methods they share?\n",
"created_at": "2014-07-15T13:03:03Z"
},
{
"body": "if we add `routing()` to the interface we can make this on else if block instead of two?\n",
"created_at": "2014-07-15T13:03:42Z"
},
{
"body": "honestly, BulkRequest should then only allow you to add `Indexable` or whatever we call it and then we can skip all the instance of crap here too :) and trash the `ElasticsearchException` at the bottom\n",
"created_at": "2014-07-15T13:05:27Z"
}
],
"title": "Do not fail whole request on closed index"
} | {
"commits": [
{
"message": "Bulk API: Do not fail whole request on closed index\n\nThe bulk API request was marked as completely failed,\nin case a request with a closed index was referred in\nany of the requests inside of a bulk one.\n\nCloses #6410"
},
{
"message": "Refactoring/incorporating commits comments"
},
{
"message": "added test which index auto creation disabled"
},
{
"message": "removed/refactored some code"
}
],
"files": [
{
"diff": "@@ -0,0 +1,32 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.action;\n+\n+/**\n+ * Generic interface to group ActionRequest, which work on single document level\n+ *\n+ * Forces this class return index/type/id getters\n+ */\n+public interface SingleDocumentWriteRequest {\n+\n+ String index();\n+ String type();\n+ String id();\n+\n+}",
"filename": "src/main/java/org/elasticsearch/action/SingleDocumentWriteRequest.java",
"status": "added"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionRequest;\n+import org.elasticsearch.action.SingleDocumentWriteRequest;\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;\n import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction;\n@@ -38,15 +39,18 @@\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.cluster.routing.GroupShardsIterator;\n import org.elasticsearch.cluster.routing.ShardIterator;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.index.Index;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndexAlreadyExistsException;\n+import org.elasticsearch.indices.IndexClosedException;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.transport.BaseTransportRequestHandler;\n@@ -96,26 +100,15 @@ protected void doExecute(final BulkRequest bulkRequest, final ActionListener<Bul\n if (autoCreateIndex.needToCheck()) {\n final Set<String> indices = Sets.newHashSet();\n for (ActionRequest request : bulkRequest.requests) {\n- if (request instanceof IndexRequest) {\n- IndexRequest indexRequest = (IndexRequest) request;\n- if (!indices.contains(indexRequest.index())) {\n- indices.add(indexRequest.index());\n- }\n- } else if (request instanceof DeleteRequest) {\n- DeleteRequest deleteRequest = (DeleteRequest) request;\n- if (!indices.contains(deleteRequest.index())) {\n- indices.add(deleteRequest.index());\n- }\n- } else if (request instanceof UpdateRequest) {\n- UpdateRequest updateRequest = (UpdateRequest) request;\n- if (!indices.contains(updateRequest.index())) {\n- indices.add(updateRequest.index());\n+ if (request instanceof SingleDocumentWriteRequest) {\n+ SingleDocumentWriteRequest req = (SingleDocumentWriteRequest) request;\n+ if (!indices.contains(req.index())) {\n+ indices.add(req.index());\n }\n } else {\n throw new ElasticsearchException(\"Parsed unknown request in bulk actions: \" + request.getClass().getSimpleName());\n }\n }\n-\n final AtomicInteger counter = new AtomicInteger(indices.size());\n ClusterState state = clusterService.state();\n for (final String index : indices) {\n@@ -199,32 +192,39 @@ private void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n MetaData metaData = clusterState.metaData();\n for (int i = 0; i < bulkRequest.requests.size(); i++) {\n ActionRequest request = bulkRequest.requests.get(i);\n- if (request instanceof IndexRequest) {\n- IndexRequest indexRequest = (IndexRequest) request;\n- String aliasOrIndex = indexRequest.index();\n- indexRequest.index(clusterState.metaData().concreteSingleIndex(indexRequest.index()));\n-\n- MappingMetaData mappingMd = null;\n- if (metaData.hasIndex(indexRequest.index())) {\n- mappingMd = metaData.index(indexRequest.index()).mappingOrDefault(indexRequest.type());\n+ if (request instanceof SingleDocumentWriteRequest) {\n+ SingleDocumentWriteRequest req = (SingleDocumentWriteRequest) request;\n+ if (addFailureIfIndexIsClosed(req, bulkRequest, responses, i)) {\n+ continue;\n }\n- try {\n- indexRequest.process(metaData, aliasOrIndex, mappingMd, allowIdGeneration);\n- } catch (ElasticsearchParseException e) {\n- BulkItemResponse.Failure failure = new BulkItemResponse.Failure(indexRequest.index(), indexRequest.type(), indexRequest.id(), e);\n- BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"index\", failure);\n- responses.set(i, bulkItemResponse);\n- // make sure the request gets never processed again\n- bulkRequest.requests.set(i, null);\n+\n+ if (request instanceof IndexRequest) {\n+ IndexRequest indexRequest = (IndexRequest) request;\n+ String aliasOrIndex = indexRequest.index();\n+ indexRequest.index(clusterState.metaData().concreteSingleIndex(indexRequest.index()));\n+\n+ MappingMetaData mappingMd = null;\n+ if (metaData.hasIndex(indexRequest.index())) {\n+ mappingMd = metaData.index(indexRequest.index()).mappingOrDefault(indexRequest.type());\n+ }\n+ try {\n+ indexRequest.process(metaData, aliasOrIndex, mappingMd, allowIdGeneration);\n+ } catch (ElasticsearchParseException e) {\n+ BulkItemResponse.Failure failure = new BulkItemResponse.Failure(indexRequest.index(), indexRequest.type(), indexRequest.id(), e);\n+ BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"index\", failure);\n+ responses.set(i, bulkItemResponse);\n+ // make sure the request gets never processed again\n+ bulkRequest.requests.set(i, null);\n+ }\n+ } else if (request instanceof DeleteRequest) {\n+ DeleteRequest deleteRequest = (DeleteRequest) request;\n+ deleteRequest.routing(clusterState.metaData().resolveIndexRouting(deleteRequest.routing(), deleteRequest.index()));\n+ deleteRequest.index(clusterState.metaData().concreteSingleIndex(deleteRequest.index()));\n+ } else if (request instanceof UpdateRequest) {\n+ UpdateRequest updateRequest = (UpdateRequest) request;\n+ updateRequest.routing(clusterState.metaData().resolveIndexRouting(updateRequest.routing(), updateRequest.index()));\n+ updateRequest.index(clusterState.metaData().concreteSingleIndex(updateRequest.index()));\n }\n- } else if (request instanceof DeleteRequest) {\n- DeleteRequest deleteRequest = (DeleteRequest) request;\n- deleteRequest.routing(clusterState.metaData().resolveIndexRouting(deleteRequest.routing(), deleteRequest.index()));\n- deleteRequest.index(clusterState.metaData().concreteSingleIndex(deleteRequest.index()));\n- } else if (request instanceof UpdateRequest) {\n- UpdateRequest updateRequest = (UpdateRequest) request;\n- updateRequest.routing(clusterState.metaData().resolveIndexRouting(updateRequest.routing(), updateRequest.index()));\n- updateRequest.index(clusterState.metaData().concreteSingleIndex(updateRequest.index()));\n }\n }\n \n@@ -337,6 +337,23 @@ private void finishHim() {\n }\n }\n \n+ private boolean addFailureIfIndexIsClosed(SingleDocumentWriteRequest request, BulkRequest bulkRequest, AtomicArray<BulkItemResponse> responses, int idx) {\n+ MetaData metaData = this.clusterService.state().metaData();\n+ String concreteIndex = this.clusterService.state().metaData().concreteSingleIndex(request.index());\n+ boolean isClosed = metaData.index(concreteIndex).getState() == IndexMetaData.State.CLOSE;\n+\n+ if (isClosed) {\n+ BulkItemResponse.Failure failure = new BulkItemResponse.Failure(request.index(), request.type(), request.id(),\n+ new IndexClosedException(new Index(metaData.index(request.index()).getIndex())));\n+ BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, \"index\", failure);\n+ responses.set(idx, bulkItemResponse);\n+ // make sure the request gets never processed again\n+ bulkRequest.requests.set(idx, null);\n+ }\n+\n+ return isClosed;\n+ }\n+\n class TransportHandler extends BaseTransportRequestHandler<BulkRequest> {\n \n @Override",
"filename": "src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.action.delete;\n \n import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.action.SingleDocumentWriteRequest;\n import org.elasticsearch.action.support.replication.ShardReplicationOperationRequest;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -42,7 +43,7 @@\n * @see org.elasticsearch.client.Client#delete(DeleteRequest)\n * @see org.elasticsearch.client.Requests#deleteRequest(String)\n */\n-public class DeleteRequest extends ShardReplicationOperationRequest<DeleteRequest> {\n+public class DeleteRequest extends ShardReplicationOperationRequest<DeleteRequest> implements SingleDocumentWriteRequest {\n \n private String type;\n private String id;",
"filename": "src/main/java/org/elasticsearch/action/delete/DeleteRequest.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.elasticsearch.*;\n import org.elasticsearch.action.ActionRequestValidationException;\n import org.elasticsearch.action.RoutingMissingException;\n+import org.elasticsearch.action.SingleDocumentWriteRequest;\n import org.elasticsearch.action.support.replication.ShardReplicationOperationRequest;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n@@ -62,7 +63,7 @@\n * @see org.elasticsearch.client.Requests#indexRequest(String)\n * @see org.elasticsearch.client.Client#index(IndexRequest)\n */\n-public class IndexRequest extends ShardReplicationOperationRequest<IndexRequest> {\n+public class IndexRequest extends ShardReplicationOperationRequest<IndexRequest> implements SingleDocumentWriteRequest {\n \n /**\n * Operation type controls if the type of the index operation.",
"filename": "src/main/java/org/elasticsearch/action/index/IndexRequest.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.Maps;\n import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.action.SingleDocumentWriteRequest;\n import org.elasticsearch.action.WriteConsistencyLevel;\n import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.action.support.replication.ReplicationType;\n@@ -44,7 +45,7 @@\n \n /**\n */\n-public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest> {\n+public class UpdateRequest extends InstanceShardOperationRequest<UpdateRequest> implements SingleDocumentWriteRequest {\n \n private String type;\n private String id;",
"filename": "src/main/java/org/elasticsearch/action/update/UpdateRequest.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,72 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.document;\n+\n+import org.elasticsearch.action.bulk.BulkRequest;\n+import org.elasticsearch.action.bulk.BulkResponse;\n+import org.elasticsearch.action.delete.DeleteRequest;\n+import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.update.UpdateRequest;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.hamcrest.Matchers.is;\n+\n+/**\n+ *\n+ */\n+@ClusterScope(numDataNodes = 1, scope = Scope.SUITE)\n+public class BulkNoAutoCreateIndexTests extends ElasticsearchIntegrationTest {\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return settingsBuilder().put(super.nodeSettings(nodeOrdinal)).put(\"action.auto_create_index\", false).build();\n+ }\n+\n+ @Test // issue 6410\n+ public void testThatMissingIndexDoesNotAbortFullBulkRequest() throws Exception {\n+ createIndex(\"bulkindex1\", \"bulkindex2\");\n+ BulkRequest bulkRequest = new BulkRequest();\n+ bulkRequest.add(new IndexRequest(\"bulkindex1\", \"index1_type\", \"1\").source(\"text\", \"hallo1\"))\n+ .add(new IndexRequest(\"bulkindex2\", \"index2_type\", \"1\").source(\"text\", \"hallo2\"))\n+ .add(new IndexRequest(\"bulkindex2\", \"index2_type\").source(\"text\", \"hallo2\"))\n+ .add(new UpdateRequest(\"bulkindex2\", \"index2_type\", \"2\").doc(\"foo\", \"bar\"))\n+ .add(new DeleteRequest(\"bulkindex2\", \"index2_type\", \"3\"))\n+ .refresh(true);\n+\n+ client().bulk(bulkRequest).get();\n+ SearchResponse searchResponse = client().prepareSearch(\"bulkindex*\").get();\n+ assertHitCount(searchResponse, 3);\n+\n+ assertAcked(client().admin().indices().prepareClose(\"bulkindex2\"));\n+\n+ BulkResponse bulkResponse = client().bulk(bulkRequest).get();\n+ assertThat(bulkResponse.hasFailures(), is(true));\n+ assertThat(bulkResponse.getItems().length, is(5));\n+\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/document/BulkNoAutoCreateIndexTests.java",
"status": "added"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.document;\n \n import com.google.common.base.Charsets;\n+import org.elasticsearch.action.bulk.BulkRequest;\n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.count.CountResponse;\n@@ -629,5 +630,28 @@ public void testThatFailedUpdateRequestReturnsCorrectType() throws Exception {\n assertThat(bulkItemResponse.getItems()[4].getOpType(), is(\"delete\"));\n assertThat(bulkItemResponse.getItems()[5].getOpType(), is(\"delete\"));\n }\n+\n+ @Test // issue 6410\n+ public void testThatMissingIndexDoesNotAbortFullBulkRequest() throws Exception{\n+ createIndex(\"bulkindex1\", \"bulkindex2\");\n+ BulkRequest bulkRequest = new BulkRequest();\n+ bulkRequest.add(new IndexRequest(\"bulkindex1\", \"index1_type\", \"1\").source(\"text\", \"hallo1\"))\n+ .add(new IndexRequest(\"bulkindex2\", \"index2_type\", \"1\").source(\"text\", \"hallo2\"))\n+ .add(new IndexRequest(\"bulkindex2\", \"index2_type\").source(\"text\", \"hallo2\"))\n+ .add(new UpdateRequest(\"bulkindex2\", \"index2_type\", \"2\").doc(\"foo\", \"bar\"))\n+ .add(new DeleteRequest(\"bulkindex2\", \"index2_type\", \"3\"))\n+ .refresh(true);\n+\n+ client().bulk(bulkRequest).get();\n+ SearchResponse searchResponse = client().prepareSearch(\"bulkindex*\").get();\n+ assertHitCount(searchResponse, 3);\n+\n+ assertAcked(client().admin().indices().prepareClose(\"bulkindex2\"));\n+\n+ BulkResponse bulkResponse = client().bulk(bulkRequest).get();\n+ assertThat(bulkResponse.hasFailures(), is(true));\n+ assertThat(bulkResponse.getItems().length, is(5));\n+\n+ }\n }\n ",
"filename": "src/test/java/org/elasticsearch/document/BulkTests.java",
"status": "modified"
}
]
} |
{
"body": "```\nPUT /t/test/1\n{\n \"text\": \"baseball bats\"\n}\n\nGET /t/test/_search?explain\n{\n \"query\": {\n \"function_score\": {\n \"score_mode\": \"sum\",\n \"boost_mode\": \"replace\",\n \"functions\": [\n {\n \"filter\": {\n \"term\": {\n \"text\": \"baseball\"\n }\n }\n }\n ]\n }\n }\n}\n```\n\nThrows NPE. With a function (eg boost_factor) it doesn't:\n\n```\nGET /t/test/_search?explain\n{\n \"query\": {\n \"function_score\": {\n \"score_mode\": \"sum\",\n \"boost_mode\": \"replace\",\n \"functions\": [\n {\n \"filter\": {\n \"term\": {\n \"text\": \"baseball\"\n }\n }\n },\n \"boost_factor\": 1\n ]\n }\n }\n}\n```\n",
"comments": [
{
"body": "I guess we should throw a parse exception here no?\n",
"created_at": "2014-05-23T12:06:36Z"
},
{
"body": "ah @brwe has is already assigned... I almost took it... :)\n",
"created_at": "2014-05-23T12:07:03Z"
}
],
"number": 6292,
"title": "Query DSL: Function score without function throws NPE"
} | {
"body": "closes #6292\n",
"number": 6784,
"review_comments": [],
"title": "Throw exception if function in function score query is null"
} | {
"commits": [
{
"message": "Throw exception if function in function score query is null\n\ncloses #6292"
}
],
"files": [
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.index.query.functionscore;\n \n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.query.BaseQueryBuilder;\n@@ -66,19 +67,28 @@ public FunctionScoreQueryBuilder() {\n }\n \n public FunctionScoreQueryBuilder(ScoreFunctionBuilder scoreFunctionBuilder) {\n+ if (scoreFunctionBuilder == null) {\n+ throw new ElasticsearchIllegalArgumentException(\"function_score: function must not be null\");\n+ }\n queryBuilder = null;\n filterBuilder = null;\n this.filters.add(null);\n this.scoreFunctions.add(scoreFunctionBuilder);\n }\n \n public FunctionScoreQueryBuilder add(FilterBuilder filter, ScoreFunctionBuilder scoreFunctionBuilder) {\n+ if (scoreFunctionBuilder == null) {\n+ throw new ElasticsearchIllegalArgumentException(\"function_score: function must not be null\");\n+ }\n this.filters.add(filter);\n this.scoreFunctions.add(scoreFunctionBuilder);\n return this;\n }\n \n public FunctionScoreQueryBuilder add(ScoreFunctionBuilder scoreFunctionBuilder) {\n+ if (scoreFunctionBuilder == null) {\n+ throw new ElasticsearchIllegalArgumentException(\"function_score: function must not be null\");\n+ }\n this.filters.add(null);\n this.scoreFunctions.add(scoreFunctionBuilder);\n return this;",
"filename": "src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryBuilder.java",
"status": "modified"
},
{
"diff": "@@ -195,6 +195,9 @@ private String parseFiltersAndFunctions(QueryParseContext parseContext, XContent\n if (filter == null) {\n filter = Queries.MATCH_ALL_FILTER;\n }\n+ if (scoreFunction == null) {\n+ throw new ElasticsearchParseException(\"function_score: One entry in functions list is missing a function.\");\n+ }\n filterFunctions.add(new FiltersFunctionScoreQuery.FilterFunction(filter, scoreFunction));\n \n }",
"filename": "src/main/java/org/elasticsearch/index/query/functionscore/FunctionScoreQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -19,16 +19,20 @@\n \n package org.elasticsearch.search.functionscore;\n \n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.action.ActionFuture;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.action.search.ShardSearchFailure;\n import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.index.query.FilterBuilders;\n import org.elasticsearch.index.query.MatchAllFilterBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.index.query.functionscore.DecayFunctionBuilder;\n@@ -38,6 +42,7 @@\n import org.joda.time.DateTime;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n \n@@ -843,4 +848,94 @@ public void errorMessageForFaultyFunctionScoreBody() throws Exception {\n }\n }\n \n+ // issue https://github.com/elasticsearch/elasticsearch/issues/6292\n+ @Test\n+ public void testMissingFunctionThrowsElasticsearchParseException() throws IOException {\n+\n+ // example from issue https://github.com/elasticsearch/elasticsearch/issues/6292\n+ String doc = \"{\\n\" +\n+ \" \\\"text\\\": \\\"baseball bats\\\"\\n\" +\n+ \"}\\n\";\n+\n+ String query = \"{\\n\" +\n+ \" \\\"function_score\\\": {\\n\" +\n+ \" \\\"score_mode\\\": \\\"sum\\\",\\n\" +\n+ \" \\\"boost_mode\\\": \\\"replace\\\",\\n\" +\n+ \" \\\"functions\\\": [\\n\" +\n+ \" {\\n\" +\n+ \" \\\"filter\\\": {\\n\" +\n+ \" \\\"term\\\": {\\n\" +\n+ \" \\\"text\\\": \\\"baseball\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \"}\\n\";\n+\n+ client().prepareIndex(\"t\", \"test\").setSource(doc).get();\n+ refresh();\n+ ensureYellow(\"t\");\n+ try {\n+ client().search(\n+ searchRequest().source(\n+ searchSource().query(query))).actionGet();\n+ fail(\"Should fail with SearchPhaseExecutionException\");\n+ } catch (SearchPhaseExecutionException failure) {\n+ assertTrue(failure.getMessage().contains(\"SearchParseException\"));\n+ assertFalse(failure.getMessage().contains(\"NullPointerException\"));\n+ }\n+\n+ query = \"{\\n\" +\n+ \" \\\"function_score\\\": {\\n\" +\n+ \" \\\"score_mode\\\": \\\"sum\\\",\\n\" +\n+ \" \\\"boost_mode\\\": \\\"replace\\\",\\n\" +\n+ \" \\\"functions\\\": [\\n\" +\n+ \" {\\n\" +\n+ \" \\\"filter\\\": {\\n\" +\n+ \" \\\"term\\\": {\\n\" +\n+ \" \\\"text\\\": \\\"baseball\\\"\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \" \\\"boost_factor\\\": 2\\n\" +\n+ \" },\\n\" +\n+ \" {\\n\" +\n+ \" \\\"filter\\\": {\\n\" +\n+ \" \\\"term\\\": {\\n\" +\n+ \" \\\"text\\\": \\\"baseball\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" ]\\n\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ try {\n+ client().search(\n+ searchRequest().source(\n+ searchSource().query(query))).actionGet();\n+ fail(\"Should fail with SearchPhaseExecutionException\");\n+ } catch (SearchPhaseExecutionException failure) {\n+ assertTrue(failure.getMessage().contains(\"SearchParseException\"));\n+ assertFalse(failure.getMessage().contains(\"NullPointerException\"));\n+ assertTrue(failure.getMessage().contains(\"One entry in functions list is missing a function\"));\n+ }\n+\n+ // next test java client\n+ try {\n+ client().prepareSearch(\"t\").setQuery(QueryBuilders.functionScoreQuery(FilterBuilders.matchAllFilter(), null)).get();\n+ } catch (ElasticsearchIllegalArgumentException failure) {\n+ assertTrue(failure.getMessage().contains(\"function must not be null\"));\n+ }\n+ try {\n+ client().prepareSearch(\"t\").setQuery(QueryBuilders.functionScoreQuery().add(FilterBuilders.matchAllFilter(), null)).get();\n+ } catch (ElasticsearchIllegalArgumentException failure) {\n+ assertTrue(failure.getMessage().contains(\"function must not be null\"));\n+ }\n+ try {\n+ client().prepareSearch(\"t\").setQuery(QueryBuilders.functionScoreQuery().add(null)).get();\n+ } catch (ElasticsearchIllegalArgumentException failure) {\n+ assertTrue(failure.getMessage().contains(\"function must not be null\"));\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreTests.java",
"status": "modified"
}
]
} |
{
"body": "If the user sets a high refresh interval, the versionMap can use\nunbounded RAM. I fixed LiveVersionMap to track its RAM used, and\ntrigger refresh if it's > 25% of IW's RAM buffer. (We could add\nanother setting for this but we have so many settings already?).\n\nI also fixed deletes to prune every index.gc_deletes/4 msec, and I\nonly save a delete tombstone if index.gc_deletes > 0.\n\nI think we could expose the RAM used by versionMap somewhere\n(Marvel? _cat?), but we can do that separately ... I put a TODO.\n\nCloses #6378\n",
"comments": [
{
"body": "OK I folded in all the feedback here (thank you!), and added two new\ntests.\n\nI reworked how deletes are handled, so that they are now included in\nthe versionMap.current/old but also added to a separate tombstones\nmap so that we can prune that map separately from using refresh to\nfree up RAM. I think the logic is simpler now.\n",
"created_at": "2014-06-10T16:58:47Z"
},
{
"body": "+1 to expose the RAM usage via an API. Can you please open an issue to do that? We might think further here and see how much RAM IW is using per shard as well, the DWFlushControl expose this to the FlushPolicy already so we might want to expose that via the IW API?\n",
"created_at": "2014-06-12T12:30:49Z"
},
{
"body": "I opened #6483 to expose the RAM usage via ShardStats and indices cat API...\n",
"created_at": "2014-06-12T13:34:57Z"
},
{
"body": "I made another round - it looks good! \n\nI think we should also add a test to validate behave when enableGcDeletes is false - maybe have index.gc_deletes set to 0ms and disable enableGcDeletes and see delete stick around..\n",
"created_at": "2014-06-18T12:34:25Z"
},
{
"body": "OK I folded in all feedback I think!\n\nHowever I still have concerns about how we calculate the RAM usage ... I put two nocommits about it, but these are relatively minor issues and shouldn't hold up committing this if we can't think of a simple way to address them.\n\nAlso, I want to do some performance testing here in the updates case: net/net this will put somewhat more pressure on the bloom filters / terms dict for the PK lookups since the version map acts like a cache now since the last flush, whereas with this change it only has updates since the last refresh. Maybe not a big deal in practice ... only for updates to very recently indexed docs.\n",
"created_at": "2014-06-19T13:40:09Z"
},
{
"body": "OK performance looks fine; I ran a test doing 75% adds and 25% updates (though, not biased for recency) using random UUID and there was no clear change...\n",
"created_at": "2014-06-19T13:58:59Z"
},
{
"body": "OK I resolved the two nocommits about ramBytesUsed; I think this is ready now.\n",
"created_at": "2014-06-20T16:20:01Z"
},
{
"body": "OK I pushed another iteration, trying to improve the RAM accounting using an idea from @bleskes to shift accounting of BytesRef/VersionValue back from the tombstones to current when a tombstone is removed. I also moved the forced pruning of tombtones to commit, and now call maybePruneDeletedTombstones from refresh.\n",
"created_at": "2014-06-22T10:15:48Z"
},
{
"body": "LGTM\n",
"created_at": "2014-06-26T08:15:57Z"
},
{
"body": "I think along with this, we can go back to Integer.MAX_VALUE default for index.translog.flush_threshold_ops.... I'll commit that.\n",
"created_at": "2014-06-28T09:18:15Z"
},
{
"body": "@mikemccand can we make the move to `INT_MAX` a separate issue?\n",
"created_at": "2014-06-29T10:53:11Z"
},
{
"body": "I think this is ready, mike if you want another review put the review label back pls\n",
"created_at": "2014-07-02T07:39:41Z"
},
{
"body": "Thanks Simon, I think it's ready too. I put xlog flushing back to 5000 ops ... I'll commit this soon.\n",
"created_at": "2014-07-04T10:11:23Z"
},
{
"body": "+1\n",
"created_at": "2014-07-04T10:25:12Z"
}
],
"number": 6443,
"title": "Force refresh when versionMap is using too much RAM"
} | {
"body": "I think it's safe to do this (again) now that #6443 is in.\n",
"number": 6783,
"review_comments": [],
"title": "Set default translog `flush_threshold_ops` to unlimited, to flush by byte size by default and not penalize tiny documents"
} | {
"commits": [
{
"message": "Translog: set default flush_threshold_ops to unlimited, so that we only flush by byte size by default and don't penalize tiny documents"
}
],
"files": [
{
"diff": "@@ -78,7 +78,7 @@ public TranslogService(ShardId shardId, @IndexSettings Settings indexSettings, I\n this.indexSettingsService = indexSettingsService;\n this.indexShard = indexShard;\n this.translog = translog;\n- this.flushThresholdOperations = componentSettings.getAsInt(FLUSH_THRESHOLD_OPS_KEY, componentSettings.getAsInt(\"flush_threshold\", 5000));\n+ this.flushThresholdOperations = componentSettings.getAsInt(FLUSH_THRESHOLD_OPS_KEY, componentSettings.getAsInt(\"flush_threshold\", Integer.MAX_VALUE));\n this.flushThresholdSize = componentSettings.getAsBytesSize(FLUSH_THRESHOLD_SIZE_KEY, new ByteSizeValue(200, ByteSizeUnit.MB));\n this.flushThresholdPeriod = componentSettings.getAsTime(FLUSH_THRESHOLD_PERIOD_KEY, TimeValue.timeValueMinutes(30));\n this.interval = componentSettings.getAsTime(FLUSH_THRESHOLD_INTERVAL_KEY, timeValueMillis(5000));",
"filename": "src/main/java/org/elasticsearch/index/translog/TranslogService.java",
"status": "modified"
}
]
} |
{
"body": "The explanation provided by `/_validate` for the `match_phrase_prefix` is missing the prefix part - it just shows the same explanation as for `match_phrase` \n\n```\ncurl -XGET 'http://127.0.0.1:9200/test/test/_validate/query?pretty=1&explain=true' -d '\n{\n \"match_phrase\" : {\n \"text\" : \"one tw\"\n }\n}\n'\n\n# {\n# \"_shards\" : {\n# \"failed\" : 0,\n# \"successful\" : 1,\n# \"total\" : 1\n# },\n# \"explanations\" : [\n# {\n# \"index\" : \"test\",\n# \"explanation\" : \"text:\\\"one tw\\\"\",\n# \"valid\" : true\n# }\n# ],\n# \"valid\" : true\n# }\n\n\ncurl -XGET 'http://127.0.0.1:9200/test/test/_validate/query?pretty=1&explain=true' -d '\n{\n \"match_phrase_prefix\" : {\n \"text\" : \"one tw\"\n }\n}\n'\n\n# {\n# \"_shards\" : {\n# \"failed\" : 0,\n# \"successful\" : 1,\n# \"total\" : 1\n# },\n# \"explanations\" : [\n# {\n# \"index\" : \"test\",\n# \"explanation\" : \"text:\\\"one tw\\\"\",\n# \"valid\" : true\n# }\n# ],\n# \"valid\" : true\n# }\n```\n",
"comments": [
{
"body": "Closed by #6767 \n\nThe match_phrase_prefix provided the same explanation as the match_phrase\nquery. There was no indication that the last term was run as a prefix\nquery.\n\nThis change marks the last term (or terms if there are multiple terms\nin the same position) with a *\n",
"created_at": "2014-07-07T14:29:11Z"
}
],
"number": 2449,
"title": "Query DSL: Improved explanation for match_phrase_prefix"
} | {
"body": "The match_phrase_prefix provided the same explanation as the match_phrase\nquery. There was no indication that the last term was run as a prefix\nquery.\n\nThis change marks the last term (or terms if there are multiple terms\nin the same position) with a *\n\nCloses #2449\n",
"number": 6767,
"review_comments": [],
"title": "Query DSL: Improved explanation for match_phrase_prefix"
} | {
"commits": [
{
"message": "Query DSL: Improved explanation for match_phrase_prefix\n\nThe match_phrase_prefix provided the same explanation as the match_phrase\nquery. There was no indication that the last term was run as a prefix\nquery.\n\nThis change marks the last term (or terms if there are multiple terms\nin the same position) with a *\n\nCloses #2449"
}
],
"files": [
{
"diff": "@@ -198,15 +198,27 @@ public final String toString(String f) {\n buffer.append(\"(\");\n for (int j = 0; j < terms.length; j++) {\n buffer.append(terms[j].text());\n- if (j < terms.length - 1)\n- buffer.append(\" \");\n+ if (j < terms.length - 1) {\n+ if (i.hasNext()) {\n+ buffer.append(\" \");\n+ } else {\n+ buffer.append(\"* \");\n+ }\n+ }\n+ }\n+ if (i.hasNext()) {\n+ buffer.append(\") \");\n+ } else {\n+ buffer.append(\"*)\");\n }\n- buffer.append(\")\");\n } else {\n buffer.append(terms[0].text());\n+ if (i.hasNext()) {\n+ buffer.append(\" \");\n+ } else {\n+ buffer.append(\"*\");\n+ }\n }\n- if (i.hasNext())\n- buffer.append(\" \");\n }\n buffer.append(\"\\\"\");\n \n@@ -272,8 +284,8 @@ private boolean termArraysEquals(List<Term[]> termArrays1, List<Term[]> termArra\n }\n return true;\n }\n- \n+\n public String getField() {\n return field;\n }\n-}\n\\ No newline at end of file\n+}",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/MultiPhrasePrefixQuery.java",
"status": "modified"
},
{
"diff": "@@ -226,7 +226,7 @@ public void explainValidateQueryTwoNodes() throws IOException {\n assertThat(response.getQueryExplanation().get(0).getExplanation(), nullValue());\n \n }\n- \n+\n for (Client client : internalCluster()) {\n ValidateQueryResponse response = client.admin().indices().prepareValidateQuery(\"test\")\n .setQuery(QueryBuilders.queryString(\"foo\"))\n@@ -297,6 +297,43 @@ public void explainFilteredAlias() {\n assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), containsString(\"field:value1\"));\n }\n \n+ @Test\n+ public void explainMatchPhrasePrefix() {\n+ assertAcked(prepareCreate(\"test\").setSettings(\n+ ImmutableSettings.settingsBuilder().put(indexSettings())\n+ .put(\"index.analysis.filter.syns.type\", \"synonym\")\n+ .putArray(\"index.analysis.filter.syns.synonyms\", \"one,two\")\n+ .put(\"index.analysis.analyzer.syns.tokenizer\", \"standard\")\n+ .putArray(\"index.analysis.analyzer.syns.filter\", \"syns\")\n+ ).addMapping(\"test\", \"field\",\"type=string,analyzer=syns\"));\n+ ensureGreen();\n+\n+ ValidateQueryResponse validateQueryResponse = client().admin().indices().prepareValidateQuery(\"test\")\n+ .setQuery(QueryBuilders.matchPhrasePrefixQuery(\"field\", \"foo\")).setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), containsString(\"field:\\\"foo*\\\"\"));\n+\n+ validateQueryResponse = client().admin().indices().prepareValidateQuery(\"test\")\n+ .setQuery(QueryBuilders.matchPhrasePrefixQuery(\"field\", \"foo bar\")).setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), containsString(\"field:\\\"foo bar*\\\"\"));\n+\n+ // Stacked tokens\n+ validateQueryResponse = client().admin().indices().prepareValidateQuery(\"test\")\n+ .setQuery(QueryBuilders.matchPhrasePrefixQuery(\"field\", \"one bar\")).setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), containsString(\"field:\\\"(one two) bar*\\\"\"));\n+\n+ validateQueryResponse = client().admin().indices().prepareValidateQuery(\"test\")\n+ .setQuery(QueryBuilders.matchPhrasePrefixQuery(\"field\", \"foo one\")).setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), containsString(\"field:\\\"foo (one* two*)\\\"\"));\n+ }\n+\n @Test\n public void irrelevantPropertiesBeforeQuery() throws IOException {\n createIndex(\"test\");",
"filename": "src/test/java/org/elasticsearch/validate/SimpleValidateQueryTests.java",
"status": "modified"
}
]
} |
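As a rough illustration of the explanation format introduced above (the last phrase position, including any stacked synonyms, is marked with `*` to show it runs as a prefix), here is a self-contained sketch. It is a simplified reimplementation for clarity, not the `MultiPhrasePrefixQuery#toString` code itself.

``` java
import java.util.Arrays;
import java.util.List;

/** Builds a phrase-prefix explanation string, marking the final position with '*'. */
public class PhrasePrefixExplainer {

    static String explain(String field, List<String[]> positions) {
        StringBuilder sb = new StringBuilder(field).append(":\"");
        for (int p = 0; p < positions.size(); p++) {
            String[] terms = positions.get(p);
            boolean last = p == positions.size() - 1;
            if (terms.length > 1) {
                sb.append('(');
                for (int t = 0; t < terms.length; t++) {
                    sb.append(terms[t]);
                    if (last) sb.append('*');          // every stacked term at the last position is a prefix
                    if (t < terms.length - 1) sb.append(' ');
                }
                sb.append(')');
            } else {
                sb.append(terms[0]);
                if (last) sb.append('*');
            }
            if (!last) sb.append(' ');
        }
        return sb.append('"').toString();
    }

    public static void main(String[] args) {
        // "foo bar" -> field:"foo bar*"
        System.out.println(explain("field", Arrays.asList(new String[]{"foo"}, new String[]{"bar"})));
        // synonyms at the last position -> field:"foo (one* two*)"
        System.out.println(explain("field", Arrays.asList(new String[]{"foo"}, new String[]{"one", "two"})));
    }
}
```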
{
"body": "Hi, \nI think this is an issue. I posted in the user group but got no response. \nWhen the following query is created on an Index with lot of fields...\n\n```\n\"bool\" : {\n \"must\" : {\n \"bool\" : {\n \"should\" : [ {\n \"bool\" : {\n \"must\" : {\n \"query_string\" : {\n \"query\" : \"\\\"microsoft\\\"\",\n \"fields\" : [ \"nameTokens\" ],\n \"analyzer\" : \"analyzerWithoutStop\"\n }\n },\n \"boost\" : 5.0\n }\n }, {\n \"bool\" : {\n \"should\" : {\n \"query_string\" : {\n \"query\" : \"\\\"insideview.com\\\"\",\n \"fields\" : [ \"companyDomain\" ],\n \"analyzer\" : \"analyzerWithoutStop\"\n }\n },\n \"boost\" : 10.0\n }\n } ]\n }\n }\n }\n}\n```\n\nOn the \"nameTokens\" filed the boost parameter keeps increasing with the number of queries. Thats is\n1. When the query is fired for the first time. The explanation i got for the two results, one matched the \"nameToken\" clause and the other matched the \"companyDomain\" clause:\n\n1st result\n\n```\n3.1991732 = (MATCH) product of:\n 6.3983464 = (MATCH) sum of:\n 6.3983464 = (MATCH) weight(companyDomain:insideview.com^10.0 in 223), product of:\n 0.93694496 = queryWeight(companyDomain:insideview.com^10.0), product of:\n 10.0 = boost\n 6.8289456 = idf(docFreq=2, maxDocs=1020)\n 0.013720199 = queryNorm\n 6.8289456 = (MATCH) fieldWeight(companyDomain:insideview.com in 223), product of:\n 1.0 = tf(termFreq(companyDomain:insideview.com)=1)\n 6.8289456 = idf(docFreq=2, maxDocs=1020)\n 1.0 = fieldNorm(field=companyDomain, doc=223)\n 0.5 = coord(1/2)\n```\n\n2nd result\n\n```\n0.89017844 = (MATCH) product of:\n 1.7803569 = (MATCH) sum of:\n 1.7803569 = (MATCH) weight(nameTokens:microsoft^5.0 in 209), product of:\n 0.3494771 = queryWeight(nameTokens:microsoft^5.0), product of:\n 5.0 = boost\n 5.0943446 = idf(docFreq=16, maxDocs=1020)\n 0.013720199 = queryNorm\n 5.0943446 = (MATCH) fieldWeight(nameTokens:microsoft in 209), product of:\n 1.0 = tf(termFreq(nameTokens:microsoft)=1)\n 5.0943446 = idf(docFreq=16, maxDocs=1020)\n 1.0 = fieldNorm(field=nameTokens, doc=209)\n 0.5 = coord(1/2)\n```\n\n2nd time the same search result is fired:\n\n```\n2.5471714 = (MATCH) product of:\n 5.0943427 = (MATCH) sum of:\n 5.0943427 = (MATCH) weight(nameTokens:microsoft^15625.0 in 209), product of:\n 0.99999964 = queryWeight(nameTokens:microsoft^15625.0), product of:\n 15625.0 = boost\n 5.0943446 = idf(docFreq=16, maxDocs=1020)\n 1.2562947E-5 = queryNorm\n 5.0943446 = (MATCH) fieldWeight(nameTokens:microsoft in 209), product of:\n 1.0 = tf(termFreq(nameTokens:microsoft)=1)\n 5.0943446 = idf(docFreq=16, maxDocs=1020)\n 1.0 = fieldNorm(field=nameTokens, doc=209)\n 0.5 = coord(1/2)\n```\n\n2nd result\n\n```\n 0.0029293338 = (MATCH) product of:\n 0.0058586677 = (MATCH) sum of:\n 0.0058586677 = (MATCH) weight(companyDomain:insideview.com^10.0 in 223), product of:\n 8.5791684E-4 = queryWeight(companyDomain:insideview.com^10.0), product of:\n 10.0 = boost\n 6.8289456 = idf(docFreq=2, maxDocs=1020)\n 1.2562947E-5 = queryNorm\n 6.8289456 = (MATCH) fieldWeight(companyDomain:insideview.com in 223), product of:\n 1.0 = tf(termFreq(companyDomain:insideview.com)=1)\n 6.8289456 = idf(docFreq=2, maxDocs=1020)\n 1.0 = fieldNorm(field=companyDomain, doc=223)\n 0.5 = coord(1/2)\n```\n\nIf you look at the boost parameter in the query explanation, it is increasing with every query of one clause. \n",
"comments": [],
"number": 2542,
"title": "Issue with boost settings"
} | {
"body": "The query string cache can't return the same instance, since Query is mutable changing the query else where in the execution path changes the instance in the cache too.\n\nFixes #2542\n",
"number": 6733,
"review_comments": [],
"title": "The query_string cache should returned cloned Query instances."
} | {
"commits": [
{
"message": "The query string cache can't return the same instance, since Query is mutable changing the query else where in the execution path changes the instance in the cache too.\n\nInstead the query parser cache should return a cloned instances.\n\nCloses #2542"
}
],
"files": [
{
"diff": "@@ -63,7 +63,12 @@ public ResidentQueryParserCache(Index index, @IndexSettings Settings indexSettin\n \n @Override\n public Query get(QueryParserSettings queryString) {\n- return cache.getIfPresent(queryString);\n+ Query value = cache.getIfPresent(queryString);\n+ if (value != null) {\n+ return value.clone();\n+ } else {\n+ return null;\n+ }\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/cache/query/parser/resident/ResidentQueryParserCache.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,35 @@\n+package org.elasticsearch.index.cache.query.parser.resident;\n+\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.queryparser.classic.QueryParserSettings;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.TermQuery;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.not;\n+import static org.hamcrest.Matchers.sameInstance;\n+\n+/**\n+ */\n+public class ResidentQueryParserCacheTest extends ElasticsearchTestCase {\n+\n+ @Test\n+ public void testCaching() throws Exception {\n+ ResidentQueryParserCache cache = new ResidentQueryParserCache(new Index(\"test\"), ImmutableSettings.EMPTY);\n+ QueryParserSettings key = new QueryParserSettings();\n+ key.queryString(\"abc\");\n+ key.defaultField(\"a\");\n+ key.boost(2.0f);\n+\n+ Query query = new TermQuery(new Term(\"a\", \"abc\"));\n+ cache.put(key, query);\n+\n+ assertThat(cache.get(key), not(sameInstance(query)));\n+ assertThat(cache.get(key), equalTo(query));\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/index/cache/query/parser/resident/ResidentQueryParserCacheTest.java",
"status": "added"
},
{
"diff": "@@ -47,10 +47,7 @@\n import org.junit.Test;\n \n import java.io.IOException;\n-import java.util.HashSet;\n-import java.util.Locale;\n-import java.util.Random;\n-import java.util.Set;\n+import java.util.*;\n import java.util.concurrent.ExecutionException;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n@@ -2457,4 +2454,31 @@ public void testFilteredQuery() throws Exception {\n }\n }\n \n+ @Test\n+ public void testQueryStringParserCache() throws Exception {\n+ createIndex(\"test\");\n+ indexRandom(true, Arrays.asList(\n+ client().prepareIndex(\"test\", \"type\", \"1\").setSource(\"nameTokens\", \"xyz\")\n+ ));\n+\n+ SearchResponse response = client().prepareSearch(\"test\")\n+ .setQuery(QueryBuilders.queryString(\"xyz\").boost(100))\n+ .get();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+\n+ float score = response.getHits().getAt(0).getScore();\n+\n+ for (int i = 0; i < 100; i++) {\n+ response = client().prepareSearch(\"test\")\n+ .setQuery(QueryBuilders.queryString(\"xyz\").boost(100))\n+ .get();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(1l));\n+ assertThat(response.getHits().getAt(0).id(), equalTo(\"1\"));\n+ assertThat(Float.compare(score, response.getHits().getAt(0).getScore()), equalTo(0));\n+ }\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java",
"status": "modified"
}
]
} |
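The fix above boils down to defensive copying: a cache of mutable objects must return clones, or a caller that later rewrites the boost mutates the cached instance and every subsequent lookup sees the corrupted value. A minimal, standalone sketch of that pattern (all class names here are hypothetical, not the Elasticsearch ones):

``` java
import java.util.HashMap;
import java.util.Map;

/** Demonstrates returning defensive copies from a cache of mutable values. */
public class CloningCache {

    /** A mutable stand-in for a parsed query: it carries a boost that callers may change. */
    static final class ParsedQuery implements Cloneable {
        float boost = 1.0f;
        @Override
        public ParsedQuery clone() {
            try {
                return (ParsedQuery) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e);
            }
        }
    }

    private final Map<String, ParsedQuery> cache = new HashMap<>();

    void put(String key, ParsedQuery value) {
        cache.put(key, value);
    }

    /** Returns a clone so callers cannot mutate the cached instance. */
    ParsedQuery get(String key) {
        ParsedQuery cached = cache.get(key);
        return cached == null ? null : cached.clone();
    }

    public static void main(String[] args) {
        CloningCache cache = new CloningCache();
        cache.put("q", new ParsedQuery());

        ParsedQuery first = cache.get("q");
        first.boost *= 100;                       // caller-side mutation stays local

        ParsedQuery second = cache.get("q");
        System.out.println(second.boost);         // prints 1.0, not 100.0
    }
}
```

Returning a clone trades a small allocation per lookup for immunity to caller-side mutation, which matches what the `ResidentQueryParserCacheTest` above asserts: the returned query is `not(sameInstance(query))` yet `equalTo(query)`.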
{
"body": "I have setup a parent/child relationship where parent is optional and i handle the routing in that case. And this worked fine in 1.0.1 but when i bumped to 1.2.1 to use aggregations it stopped working.\n\nThe exception i get\n\n``` java\n[2014-07-04 07:59:30,333][INFO ][node ] [Tantra] version[1.2.1], pid[23438], build[6c95b75/2014-06-03T15:02:52Z]\n[2014-07-04 07:59:30,334][INFO ][node ] [Tantra] initializing ...\n[2014-07-04 07:59:30,339][INFO ][plugins ] [Tantra] loaded [], sites []\n[2014-07-04 07:59:32,581][INFO ][script ] [Tantra] compiling script file [/etc/elasticsearch/scripts/activity-period.mvel]\n[2014-07-04 07:59:33,016][INFO ][node ] [Tantra] initialized\n[2014-07-04 07:59:33,016][INFO ][node ] [Tantra] starting ...\n[2014-07-04 07:59:33,092][INFO ][transport ] [Tantra] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.0.2.15:9300]}\n[2014-07-04 07:59:36,133][INFO ][cluster.service ] [Tantra] new_master [Tantra][jUaCLA6GTyadduwbg_tTwg][packer-virtualbox][inet[/10.0.2.15:9300]], reason: zen-disco-join (elected_as_master)\n[2014-07-04 07:59:36,171][INFO ][discovery ] [Tantra] elasticsearch/jUaCLA6GTyadduwbg_tTwg\n[2014-07-04 07:59:36,262][INFO ][http ] [Tantra] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.0.2.15:9200]}\n[2014-07-04 07:59:37,094][INFO ][gateway ] [Tantra] recovered [33] indices into cluster_state\n[2014-07-04 07:59:37,116][INFO ][node ] [Tantra] started\n[2014-07-04 07:59:45,985][DEBUG][action.search.type ] [Tantra] [application][2], node[jUaCLA6GTyadduwbg_tTwg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@6fa23b68] lastShard [true]\norg.elasticsearch.search.SearchParseException: [application][2]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"has_parent\": {\n \"type\": \"clients\",\n \"query\": {\n \"bool\": {\n \"must\": [],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"terms\": {\n \"interests\": [\n \"hello\"\n ]\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n }\n }\n },\n {\n \"geo_distance\": {\n \"distance\": \"20km\",\n \"coordinate\": \"59.85499699999999,17.6490213\"\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"query\": []\n }\n },\n \"size\": 25\n}\n]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:649)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:511)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:483)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:46)\n at org.elasticsearch.index.query.support.XContentStructure.asQuery(XContentStructure.java:88)\n at 
org.elasticsearch.index.query.support.XContentStructure$InnerQuery.asQuery(XContentStructure.java:154)\n at org.elasticsearch.index.query.HasParentFilterParser.parse(HasParentFilterParser.java:115)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.BoolFilterParser.parse(BoolFilterParser.java:92)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:227)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:334)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:260)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:633)\n ... 9 more\n[2014-07-04 07:59:45,985][DEBUG][action.search.type ] [Tantra] [application][1], node[jUaCLA6GTyadduwbg_tTwg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@6fa23b68] lastShard [true]\norg.elasticsearch.search.SearchParseException: [application][1]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"has_parent\": {\n \"type\": \"clients\",\n \"query\": {\n \"bool\": {\n \"must\": [],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"terms\": {\n \"interests\": [\n \"hello\"\n ]\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n }\n }\n },\n {\n \"geo_distance\": {\n \"distance\": \"20km\",\n \"coordinate\": \"59.85499699999999,17.6490213\"\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"query\": []\n }\n },\n \"size\": 25\n}\n]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:649)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:511)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:483)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:46)\n at org.elasticsearch.index.query.support.XContentStructure.asQuery(XContentStructure.java:88)\n at org.elasticsearch.index.query.support.XContentStructure$InnerQuery.asQuery(XContentStructure.java:154)\n at org.elasticsearch.index.query.HasParentFilterParser.parse(HasParentFilterParser.java:115)\n at 
org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.BoolFilterParser.parse(BoolFilterParser.java:92)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:227)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:334)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:260)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:633)\n ... 9 more\n[2014-07-04 07:59:45,985][DEBUG][action.search.type ] [Tantra] [application][0], node[jUaCLA6GTyadduwbg_tTwg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@6fa23b68] lastShard [true]\norg.elasticsearch.search.SearchParseException: [application][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"has_parent\": {\n \"type\": \"clients\",\n \"query\": {\n \"bool\": {\n \"must\": [],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"terms\": {\n \"interests\": [\n \"hello\"\n ]\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n }\n }\n },\n {\n \"geo_distance\": {\n \"distance\": \"20km\",\n \"coordinate\": \"59.85499699999999,17.6490213\"\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"query\": []\n }\n },\n \"size\": 25\n}\n]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:649)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:511)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:483)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:46)\n at org.elasticsearch.index.query.support.XContentStructure.asQuery(XContentStructure.java:88)\n at org.elasticsearch.index.query.support.XContentStructure$InnerQuery.asQuery(XContentStructure.java:154)\n at org.elasticsearch.index.query.HasParentFilterParser.parse(HasParentFilterParser.java:115)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at 
org.elasticsearch.index.query.BoolFilterParser.parse(BoolFilterParser.java:92)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:227)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:334)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:260)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:633)\n ... 9 more\n[2014-07-04 07:59:45,985][DEBUG][action.search.type ] [Tantra] [application][4], node[jUaCLA6GTyadduwbg_tTwg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@6fa23b68] lastShard [true]\norg.elasticsearch.search.SearchParseException: [application][4]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"has_parent\": {\n \"type\": \"clients\",\n \"query\": {\n \"bool\": {\n \"must\": [],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"terms\": {\n \"interests\": [\n \"hello\"\n ]\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n }\n }\n },\n {\n \"geo_distance\": {\n \"distance\": \"20km\",\n \"coordinate\": \"59.85499699999999,17.6490213\"\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"query\": []\n }\n },\n \"size\": 25\n}\n]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:649)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:511)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:483)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:46)\n at org.elasticsearch.index.query.support.XContentStructure.asQuery(XContentStructure.java:88)\n at org.elasticsearch.index.query.support.XContentStructure$InnerQuery.asQuery(XContentStructure.java:154)\n at org.elasticsearch.index.query.HasParentFilterParser.parse(HasParentFilterParser.java:115)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.BoolFilterParser.parse(BoolFilterParser.java:92)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at 
org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:227)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:334)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:260)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:633)\n ... 9 more\n[2014-07-04 07:59:45,985][DEBUG][action.search.type ] [Tantra] [application][3], node[jUaCLA6GTyadduwbg_tTwg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@6fa23b68]\norg.elasticsearch.search.SearchParseException: [application][3]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"has_parent\": {\n \"type\": \"clients\",\n \"query\": {\n \"bool\": {\n \"must\": [],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"terms\": {\n \"interests\": [\n \"hello\"\n ]\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n }\n }\n },\n {\n \"geo_distance\": {\n \"distance\": \"20km\",\n \"coordinate\": \"59.85499699999999,17.6490213\"\n }\n }\n ],\n \"must_not\": [],\n \"should\": []\n }\n },\n \"query\": []\n }\n },\n \"size\": 25\n}\n]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:649)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:511)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:483)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:46)\n at org.elasticsearch.index.query.support.XContentStructure.asQuery(XContentStructure.java:88)\n at org.elasticsearch.index.query.support.XContentStructure$InnerQuery.asQuery(XContentStructure.java:154)\n at org.elasticsearch.index.query.HasParentFilterParser.parse(HasParentFilterParser.java:115)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.BoolFilterParser.parse(BoolFilterParser.java:92)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:283)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:264)\n at org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\n at 
org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:227)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:334)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:260)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:633)\n ... 9 more\n[2014-07-04 07:59:45,999][DEBUG][action.search.type ] [Tantra] All shards failed for phase: [query]\n```\n\nMapping\n\n``` json\n{\n \"application\": {\n \"mappings\": {\n \"clients\": {\n \"properties\": {\n \"createdAt\": {\n \"format\": \"date_time_no_millis\",\n \"type\": \"date\"\n },\n \"email\": {\n \"type\": \"string\"\n },\n \"facebookId\": {\n \"type\": \"long\"\n },\n \"favorites\": {\n \"type\": \"integer\"\n },\n \"id\": {\n \"type\": \"integer\"\n },\n \"interests\": {\n \"type\": \"string\"\n },\n \"name\": {\n \"type\": \"string\"\n },\n \"surname\": {\n \"type\": \"string\"\n },\n \"updatedAt\": {\n \"format\": \"date_time_no_millis\",\n \"type\": \"date\"\n }\n }\n },\n \"devices\": {\n \"_parent\": {\n \"type\": \"clients\"\n },\n \"_routing\": {\n \"required\": true\n },\n \"properties\": {\n \"active\": {\n \"type\": \"boolean\"\n },\n \"appName\": {\n \"type\": \"string\"\n },\n \"appVersion\": {\n \"type\": \"string\"\n },\n \"carrier\": {\n \"type\": \"string\"\n },\n \"coordinate\": {\n \"type\": \"geo_point\"\n },\n \"country\": {\n \"index\": \"not_analyzed\",\n \"type\": \"string\"\n },\n \"createdAt\": {\n \"format\": \"date_time_no_millis\",\n \"type\": \"date\"\n },\n \"deviceId\": {\n \"index\": \"not_analyzed\",\n \"type\": \"string\"\n },\n \"deviceModel\": {\n \"type\": \"string\"\n },\n \"deviceName\": {\n \"type\": \"string\"\n },\n \"id\": {\n \"index\": \"not_analyzed\",\n \"type\": \"string\"\n },\n \"inactivatedAt\": {\n \"format\": \"date_time_no_millis\",\n \"type\": \"date\"\n },\n \"mobileCountryCode\": {\n \"index\": \"not_analyzed\",\n \"type\": \"string\"\n },\n \"os\": {\n \"type\": \"string\"\n },\n \"osVersion\": {\n \"type\": \"string\"\n },\n \"pushNotificationId\": {\n \"index\": \"not_analyzed\",\n \"type\": \"string\"\n },\n \"updatedAt\": {\n \"format\": \"date_time_no_millis\",\n \"type\": \"date\"\n }\n }\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "thanks for opening this.. I have a fix for that already and will push it soon. as a workaround just use a `match_all` query instead of the:\n\n``` JSON\n \"bool\" : {\n \"must\": [],\n \"must_not\": [],\n \"should\": []\n }\n```\n",
"created_at": "2014-07-04T09:11:07Z"
}
],
"number": 6722,
"title": "Replace empty bool queries with match_all to prevent NullPointerExceptions"
} | {
"body": "This causes a NPE since XContentStructure checks if the query is null\nand takes this as the condition to parse from the byte source which is\nactually null in that case.\n\nCloses #6722\n",
"number": 6723,
"review_comments": [],
"title": "QueryParser can return null from a query"
} | {
"commits": [
{
"message": "[Query] QueryParser can return null from a query\n\nThis causes a NPE since XContentStructure checks if the query is null\nand takes this as the condition to parse from the byte source which is\nactually null in that case.\n\nCloses #6722"
}
],
"files": [
{
"diff": "@@ -127,13 +127,14 @@ public Query asFilter(String... types) throws IOException {\n */\n public static class InnerQuery extends XContentStructure {\n private Query query = null;\n-\n+ private boolean queryParsed = false;\n public InnerQuery(QueryParseContext parseContext1, @Nullable String... types) throws IOException {\n super(parseContext1);\n if (types != null) {\n String[] origTypes = QueryParseContext.setTypesWithPrevious(types);\n try {\n query = parseContext1.parseInnerQuery();\n+ queryParsed = true;\n } finally {\n QueryParseContext.setTypes(origTypes);\n }\n@@ -150,7 +151,7 @@ public InnerQuery(QueryParseContext parseContext1, @Nullable String... types) th\n */\n @Override\n public Query asQuery(String... types) throws IOException {\n- if (this.query == null) {\n+ if (!queryParsed) { // query can be null\n this.query = super.asQuery(types);\n }\n return this.query;\n@@ -164,6 +165,8 @@ public Query asQuery(String... types) throws IOException {\n */\n public static class InnerFilter extends XContentStructure {\n private Query query = null;\n+ private boolean queryParsed = false;\n+\n \n public InnerFilter(QueryParseContext parseContext1, @Nullable String... types) throws IOException {\n super(parseContext1);\n@@ -172,6 +175,7 @@ public InnerFilter(QueryParseContext parseContext1, @Nullable String... types) t\n try {\n Filter innerFilter = parseContext1.parseInnerFilter();\n query = new XConstantScoreQuery(innerFilter);\n+ queryParsed = true;\n } finally {\n QueryParseContext.setTypes(origTypes);\n }\n@@ -190,7 +194,7 @@ public InnerFilter(QueryParseContext parseContext1, @Nullable String... types) t\n */\n @Override\n public Query asFilter(String... types) throws IOException {\n- if (this.query == null) {\n+ if (!queryParsed) { // query can be null\n this.query = super.asFilter(types);\n }\n return this.query;",
"filename": "src/main/java/org/elasticsearch/index/query/support/XContentStructure.java",
"status": "modified"
},
{
"diff": "@@ -125,6 +125,24 @@ public void multiLevelChild() throws Exception {\n assertThat(searchResponse.getHits().getAt(0).id(), equalTo(\"gc1\"));\n }\n \n+ @Test\n+ // see #6722\n+ public void test6722() throws ElasticsearchException, IOException {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"foo\")\n+ .addMapping(\"test\", \"_parent\", \"type=foo\"));\n+ ensureGreen();\n+\n+ // index simple data\n+ client().prepareIndex(\"test\", \"foo\", \"1\").setSource(\"foo\", 1).get();\n+ client().prepareIndex(\"test\", \"test\").setSource(\"foo\", 1).setParent(\"1\").get();\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch(\"test\").setSource(\"{\\\"query\\\":{\\\"filtered\\\":{\\\"filter\\\":{\\\"has_parent\\\":{\\\"type\\\":\\\"test\\\",\\\"query\\\":{\\\"bool\\\":{\\\"must\\\":[],\\\"must_not\\\":[],\\\"should\\\":[]}}},\\\"query\\\":[]}}}}\").get();\n+ assertNoFailures(searchResponse);\n+ assertThat(searchResponse.getHits().totalHits(), equalTo(2l));\n+ }\n+\n @Test\n // see #2744\n public void test2744() throws ElasticsearchException, IOException {",
"filename": "src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java",
"status": "modified"
}
]
} |
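The core of the fix above is that `null` became a legal parse result, so a separate boolean flag has to record whether parsing already ran; otherwise the lazy path falls through and tries to re-parse from a byte source that is itself `null`. A minimal sketch of that idea with stand-in types (not the `XContentStructure` API):

``` java
import java.util.concurrent.atomic.AtomicInteger;

/** Shows why "result == null" is a broken laziness check when null is a valid result. */
public class LazyParseExample {

    interface Parser { String parse(); }   // may legitimately return null (e.g. an empty bool query)

    static final class Lazy {
        private final Parser parser;
        private String result;
        private boolean parsed;             // the flag added by the fix

        Lazy(Parser parser) { this.parser = parser; }

        String get() {
            if (!parsed) {                  // checking "result == null" here would re-parse every time
                result = parser.parse();
                parsed = true;
            }
            return result;
        }
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        Lazy lazy = new Lazy(() -> { calls.incrementAndGet(); return null; });

        lazy.get();
        lazy.get();
        System.out.println("parse calls: " + calls.get()); // 1 with the flag; would be 2 without it
    }
}
```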
{
"body": "It appears there is no way to get nested settings from template GET.\n\n```\n$ curl localhost:9200\n{\n \"status\" : 200,\n \"name\" : \"Ezekiel\",\n \"version\" : {\n \"number\" : \"1.2.1\",\n \"build_hash\" : \"6c95b759f9e7ef0f8e17f77d850da43ce8a4b364\",\n \"build_timestamp\" : \"2014-06-03T15:02:52Z\",\n \"build_snapshot\" : false,\n \"lucene_version\" : \"4.8\"\n },\n \"tagline\" : \"You Know, for Search\"\n}\n\n$ curl -XPUT localhost:9200/_template/test1 -d '{\n \"template\": \"test1-*\",\n \"settings\": {\n \"index\": {\n \"mapper\": {\n \"dynamic\": false\n }\n }\n }\n}'\n{\"acknowledged\":true}\n\n$ curl -XGET 'localhost:9200/_template/test1?pretty'\n{\n \"test1\" : {\n \"order\" : 0,\n \"template\" : \"test1-*\",\n \"settings\" : {\n \"index.mapper.dynamic\" : \"false\"\n },\n \"mappings\" : { },\n \"aliases\" : { }\n }\n}\n\ncurl -XGET 'localhost:9200/_template/test1?pretty&flat_settings=false'\n{\n \"test1\" : {\n \"order\" : 0,\n \"template\" : \"test1-*\",\n \"settings\" : {\n \"index.mapper.dynamic\" : \"false\"\n },\n \"mappings\" : { },\n \"aliases\" : { }\n }\n}\n```\n",
"comments": [
{
"body": "Thanks for the report, I'll look into it.\n",
"created_at": "2014-07-02T06:38:40Z"
},
{
"body": "It will be fixed in the 1.3.0 release. I did not backport to 1.2.2 as there is a minor break since it changes the default for the GET template API from `flat_settings=false` to `flat_settings=true`.\n",
"created_at": "2014-07-02T07:23:57Z"
}
],
"number": 6671,
"title": "Index Templates API: GET templates doesn't honor the `flat_settings` parameter."
} | {
"body": "Close #6671\n",
"number": 6672,
"review_comments": [],
"title": "GET templates doesn't honor the `flat_settings` parameter."
} | {
"commits": [
{
"message": "Templates: GET templates doesn't honor the `flat_settings` parameter.\n\nClose #6671"
}
],
"files": [
{
"diff": "@@ -16,7 +16,8 @@ setup:\n name: test\n \n - match: {test.template: \"test-*\"}\n- - match: {test.settings: {index.number_of_shards: '1', index.number_of_replicas: '0'}}\n+ - match: {test.settings: {index: {number_of_shards: '1', number_of_replicas: '0'}}}\n+\n ---\n \"Get all templates\":\n \n@@ -43,3 +44,13 @@ setup:\n local: true\n \n - is_true: test\n+\n+---\n+\"Get template with flat settings\":\n+\n+ - do:\n+ indices.get_template:\n+ name: test\n+ flat_settings: true\n+\n+ - match: {test.settings: {index.number_of_shards: '1', index.number_of_replicas: '0'}}",
"filename": "rest-api-spec/test/indices.get_template/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -12,6 +12,7 @@\n - do:\n indices.get_template:\n name: test\n+ flat_settings: true\n \n - match: {test.template: \"test-*\"}\n - match: {test.settings: {index.number_of_shards: '1', index.number_of_replicas: '0'}}\n@@ -54,6 +55,7 @@\n - do:\n indices.get_template:\n name: test\n+ flat_settings: true\n \n - match: {test.template: \"test-*\"}\n - match: {test.settings: {index.number_of_shards: '1', index.number_of_replicas: '0'}}",
"filename": "rest-api-spec/test/indices.put_template/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -275,9 +275,7 @@ public static void toXContent(IndexTemplateMetaData indexTemplateMetaData, XCont\n builder.field(\"template\", indexTemplateMetaData.template());\n \n builder.startObject(\"settings\");\n- for (Map.Entry<String, String> entry : indexTemplateMetaData.settings().getAsMap().entrySet()) {\n- builder.field(entry.getKey(), entry.getValue());\n- }\n+ indexTemplateMetaData.settings().toXContent(builder, params);\n builder.endObject();\n \n if (params.paramAsBoolean(\"reduce_mappings\", false)) {",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/IndexTemplateMetaData.java",
"status": "modified"
}
]
} |
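For orientation only, here is a rough sketch of the transformation the `flat_settings` parameter toggles: dotted keys such as `index.number_of_shards` expand into nested objects. This is a hypothetical helper, not the `Settings#toXContent` implementation used by the fix above.

``` java
import java.util.LinkedHashMap;
import java.util.Map;

/** Expands dotted setting keys into nested maps, e.g. "index.number_of_shards" -> {index={number_of_shards=...}}. */
public class FlatToNestedSettings {

    @SuppressWarnings("unchecked")
    static Map<String, Object> nest(Map<String, String> flat) {
        Map<String, Object> root = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : flat.entrySet()) {
            String[] path = entry.getKey().split("\\.");
            Map<String, Object> current = root;
            for (int i = 0; i < path.length - 1; i++) {
                // descend, creating intermediate objects as needed
                current = (Map<String, Object>) current.computeIfAbsent(path[i], k -> new LinkedHashMap<String, Object>());
            }
            current.put(path[path.length - 1], entry.getValue());
        }
        return root;
    }

    public static void main(String[] args) {
        Map<String, String> flat = new LinkedHashMap<>();
        flat.put("index.number_of_shards", "1");
        flat.put("index.number_of_replicas", "0");
        // {index={number_of_shards=1, number_of_replicas=0}}
        System.out.println(nest(flat));
    }
}
```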
{
"body": "This just happened as I was restarting elasticsearch (1.1.1) on one node. As it came up again it failed to start one of the shards with the following exception:\n[2014-06-29 04:38:35,401][INFO ][node ] [es-6636e.recfut.net] started\n[2014-06-29 04:38:56,268][WARN ][indices.cluster ] [es-6636e.recfut.net] [reference_2014-04-10_1][2] failed to start shard\norg.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [reference_2014-04-10_1][2] failed recovery\n at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:256)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:724)\nCaused by: org.elasticsearch.index.engine.FlushNotAllowedEngineException: [reference_2014-04-10_1][2] already flushing...\n at org.elasticsearch.index.engine.internal.InternalEngine.flush(InternalEngine.java:756)\n at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryFinalization(InternalIndexShard.java:716)\n at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:250)\n at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:197)\n ... 3 more\n[2014-06-29 04:38:56,824][WARN ][cluster.action.shard ] [es-6636e.recfut.net] [reference_2014-04-10_1][2] sending failed shard for [reference_2014-04-10_1][2], node[IdoOCT9rTwWUZMCwf95hcg], [P], s[INITIALIZING], indexUUID [va11FwbtRTmsGsKW97LYkA], reason [Failed to start shard, message [IndexShardGatewayRecoveryException[[reference_2014-04-10_1][2] failed recovery]; nested: FlushNotAllowedEngineException[[reference_2014-04-10_1][2] already flushing...]; ]]\n\nAfter being anxious for a while I did another restart, and this time all shards started. There were no writes happening to the cluster during this. But it was very worrying.\n",
"comments": [
{
"body": "At the end of a recovery process (from disk or copying primaries) we do a flush in order to make sure that all recent changes are committed to disk. If there is already an ongoing flush the flush operation fails (the assumption is that the flush was caused by an external API). I looked a bit at the code and I think I found a place that could cause a background flush, even if the shard is not yet started. I will work to remove it.\n\nOther than that - your response was correct. It's just an unlucky timing.\n",
"created_at": "2014-06-30T13:21:06Z"
},
{
"body": "Thank you, that is exactly the kind of response I was hoping for.\n\nOn Mon, Jun 30, 2014 at 3:21 PM, Boaz Leskes notifications@github.com\nwrote:\n\n> At the end of a recovery process (from disk or copying primaries) we do a\n> flush in order to make sure that all recent changes are committed to disk.\n> If there is already an ongoing flush the flush operation fails (the\n> assumption is that the flush was caused by an external API). I looked a bit\n> at the code and I think I found a place that could cause a background\n> flush, even if the shard is not yet started. I will work to remove it.\n> \n> Other than that - your response was correct. It's just an unlucky timing.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/6642#issuecomment-47530259\n> .\n",
"created_at": "2014-06-30T13:35:04Z"
}
],
"number": 6642,
"title": "Failed to start shard when restarting elasticsearch"
} | {
"body": "At the moment the IndexingMemoryController can try to update the index buffer memory of shards at any give moment. This update involves a flush, which may cause a FlushNotAllowedEngineException to be thrown in a concurrently finalizing recovery.\n\nCloses #6642\n",
"number": 6667,
"review_comments": [
{
"body": "if it is a test only setting please make it pkg private and move the test if necesary. If not we should add javadocs and pull it up to the interface\n",
"created_at": "2014-07-02T07:15:53Z"
},
{
"body": "javadocs pls :)\n",
"created_at": "2014-07-02T07:16:24Z"
},
{
"body": "I'd really like us to start using `EnumSets` for this kind of stuff and then do \n\n``` Java\nprivate static final EnumSet<IndexShardState> CAN_UPDATE_INDEX_BUFFER_STATES = EnumSet.of(....);\n\nif (CAN_UPDATE_INDEX_BUFFER_STATES.contains(state) == false) {\n continue;\n} \n```\n",
"created_at": "2014-07-02T07:19:38Z"
},
{
"body": "why do you need local GW here? shouldn't `none` be enough ie. default?\n",
"created_at": "2014-07-02T07:22:08Z"
},
{
"body": "Making it package private is tricky because the test that needs it lives in another package indeed, and I think it is in the right place. On the other hand this feels like an internal/implementation detail method and I don't think it should go on the interface. We call the class an InternalEngine so I think it's OK to have public methods which are not part of the official interface of it.\n",
"created_at": "2014-07-02T08:51:25Z"
},
{
"body": "done\n",
"created_at": "2014-07-02T08:51:35Z"
},
{
"body": "Good one. done.\n",
"created_at": "2014-07-02T08:51:48Z"
},
{
"body": "this can be package private?\n",
"created_at": "2014-07-02T09:59:52Z"
},
{
"body": "add a comment // public for testing?\n",
"created_at": "2014-07-02T10:00:28Z"
}
],
"title": "IndexingMemoryController should only update buffer settings of fully recovered shards"
} | {
"commits": [
{
"message": "IndexingMemoryController should only update buffer settings of recovered shards\n\nAt the moment the IndexingMemoryController can try to update the index buffer memory of shards at any give moment. This update involves a flush, which may cause a FlushNotAllowedEngineException to be thrown in a concurrently finalizing recovery.\n\nCloses #6642"
},
{
"message": "feedback round 1"
},
{
"message": "Added enum set"
}
],
"files": [
{
"diff": "@@ -19,19 +19,7 @@\n \n package org.elasticsearch.index.engine.internal;\n \n-import java.io.IOException;\n-import java.util.*;\n-import java.util.concurrent.ConcurrentMap;\n-import java.util.concurrent.CopyOnWriteArrayList;\n-import java.util.concurrent.TimeUnit;\n-import java.util.concurrent.atomic.AtomicBoolean;\n-import java.util.concurrent.atomic.AtomicInteger;\n-import java.util.concurrent.atomic.AtomicLong;\n-import java.util.concurrent.locks.Condition;\n-import java.util.concurrent.locks.Lock;\n-import java.util.concurrent.locks.ReentrantLock;\n-import java.util.concurrent.locks.ReentrantReadWriteLock;\n-\n+import com.google.common.collect.Lists;\n import org.apache.lucene.index.*;\n import org.apache.lucene.index.IndexWriter.IndexReaderWarmer;\n import org.apache.lucene.search.IndexSearcher;\n@@ -40,7 +28,6 @@\n import org.apache.lucene.search.SearcherManager;\n import org.apache.lucene.store.AlreadyClosedException;\n import org.apache.lucene.store.LockObtainFailedException;\n-import org.apache.lucene.store.NoLockFactory;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchException;\n@@ -53,7 +40,6 @@\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.logging.ESLogger;\n-import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.lucene.LoggerInfoStream;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.SegmentReaderUtils;\n@@ -64,7 +50,6 @@\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.codec.CodecService;\n@@ -89,7 +74,18 @@\n import org.elasticsearch.indices.warmer.IndicesWarmer;\n import org.elasticsearch.indices.warmer.InternalIndicesWarmer;\n import org.elasticsearch.threadpool.ThreadPool;\n-import com.google.common.collect.Lists;\n+\n+import java.io.IOException;\n+import java.util.*;\n+import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicLong;\n+import java.util.concurrent.locks.Condition;\n+import java.util.concurrent.locks.Lock;\n+import java.util.concurrent.locks.ReentrantLock;\n+import java.util.concurrent.locks.ReentrantReadWriteLock;\n \n /**\n *\n@@ -314,6 +310,11 @@ public TimeValue defaultRefreshInterval() {\n return new TimeValue(1, TimeUnit.SECONDS);\n }\n \n+ /** return the current indexing buffer size setting * */\n+ public ByteSizeValue indexingBufferSize() {\n+ return indexingBufferSize;\n+ }\n+\n @Override\n public void enableGcDeletes(boolean enableGcDeletes) {\n this.enableGcDeletes = enableGcDeletes;\n@@ -1566,11 +1567,11 @@ public Releasable acquireThrottle() {\n \n @Override\n public void beforeMerge(OnGoingMerge merge) {\n- if (numMergesInFlight.incrementAndGet() > maxNumMerges) {\n- if (isThrottling.getAndSet(true) == false) {\n- logger.info(\"now throttling indexing: numMergesInFlight={}, maxNumMerges={}\", numMergesInFlight, maxNumMerges);\n- }\n- lock = lockReference;\n+ if 
(numMergesInFlight.incrementAndGet() > maxNumMerges) {\n+ if (isThrottling.getAndSet(true) == false) {\n+ logger.info(\"now throttling indexing: numMergesInFlight={}, maxNumMerges={}\", numMergesInFlight, maxNumMerges);\n+ }\n+ lock = lockReference;\n }\n }\n \n@@ -1588,7 +1589,8 @@ public void afterMerge(OnGoingMerge merge) {\n private static final class NoOpLock implements Lock {\n \n @Override\n- public void lock() {}\n+ public void lock() {\n+ }\n \n @Override\n public void lockInterruptibly() throws InterruptedException {",
"filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.index.engine.EngineClosedException;\n import org.elasticsearch.index.engine.FlushNotAllowedEngineException;\n import org.elasticsearch.index.service.IndexService;\n+import org.elasticsearch.index.shard.IndexShardState;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.index.shard.service.InternalIndexShard;\n@@ -41,6 +42,7 @@\n import org.elasticsearch.monitor.jvm.JvmInfo;\n import org.elasticsearch.threadpool.ThreadPool;\n \n+import java.util.EnumSet;\n import java.util.List;\n import java.util.Map;\n import java.util.concurrent.ScheduledFuture;\n@@ -64,7 +66,7 @@ public class IndexingMemoryController extends AbstractLifecycleComponent<Indexin\n \n private final TimeValue inactiveTime;\n private final TimeValue interval;\n- private final AtomicBoolean shardsCreatedOrDeleted = new AtomicBoolean();\n+ private final AtomicBoolean shardsRecoveredOrDeleted = new AtomicBoolean();\n \n private final Listener listener = new Listener();\n \n@@ -74,6 +76,8 @@ public class IndexingMemoryController extends AbstractLifecycleComponent<Indexin\n \n private final Object mutex = new Object();\n \n+ private static final EnumSet<IndexShardState> CAN_UPDATE_INDEX_BUFFER_STATES = EnumSet.of(IndexShardState.POST_RECOVERY, IndexShardState.STARTED, IndexShardState.RELOCATED);\n+\n @Inject\n public IndexingMemoryController(Settings settings, ThreadPool threadPool, IndicesService indicesService) {\n super(settings);\n@@ -151,6 +155,14 @@ protected void doStop() throws ElasticsearchException {\n protected void doClose() throws ElasticsearchException {\n }\n \n+ /**\n+ * returns the current budget for the total amount of indexing buffers of\n+ * active shards on this node\n+ */\n+ public ByteSizeValue indexingBufferSize() {\n+ return indexingBuffer;\n+ }\n+\n class ShardsIndicesStatusChecker implements Runnable {\n @Override\n public void run() {\n@@ -206,9 +218,9 @@ public void run() {\n // ignore\n }\n }\n- boolean shardsCreatedOrDeleted = IndexingMemoryController.this.shardsCreatedOrDeleted.compareAndSet(true, false);\n- if (shardsCreatedOrDeleted || activeInactiveStatusChanges) {\n- calcAndSetShardBuffers(\"active/inactive[\" + activeInactiveStatusChanges + \"] created/deleted[\" + shardsCreatedOrDeleted + \"]\");\n+ boolean shardsRecoveredOrDeleted = IndexingMemoryController.this.shardsRecoveredOrDeleted.compareAndSet(true, false);\n+ if (shardsRecoveredOrDeleted || activeInactiveStatusChanges) {\n+ calcAndSetShardBuffers(\"active/inactive[\" + activeInactiveStatusChanges + \"] recovered/deleted[\" + shardsRecoveredOrDeleted + \"]\");\n }\n }\n }\n@@ -217,25 +229,25 @@ public void run() {\n class Listener extends IndicesLifecycle.Listener {\n \n @Override\n- public void afterIndexShardCreated(IndexShard indexShard) {\n+ public void afterIndexShardPostRecovery(IndexShard indexShard) {\n synchronized (mutex) {\n shardsIndicesStatus.put(indexShard.shardId(), new ShardIndexingStatus());\n- shardsCreatedOrDeleted.set(true);\n+ shardsRecoveredOrDeleted.set(true);\n }\n }\n \n @Override\n public void afterIndexShardClosed(ShardId shardId) {\n synchronized (mutex) {\n shardsIndicesStatus.remove(shardId);\n- shardsCreatedOrDeleted.set(true);\n+ shardsRecoveredOrDeleted.set(true);\n }\n }\n }\n \n \n private void calcAndSetShardBuffers(String reason) {\n- int shardsCount = countShards();\n+ int shardsCount = countActiveShards();\n if (shardsCount == 0) {\n return;\n }\n@@ -258,6 
+270,11 @@ private void calcAndSetShardBuffers(String reason) {\n logger.debug(\"recalculating shard indexing buffer (reason={}), total is [{}] with [{}] active shards, each shard set to indexing=[{}], translog=[{}]\", reason, indexingBuffer, shardsCount, shardIndexingBufferSize, shardTranslogBufferSize);\n for (IndexService indexService : indicesService) {\n for (IndexShard indexShard : indexService) {\n+ IndexShardState state = indexShard.state();\n+ if (!CAN_UPDATE_INDEX_BUFFER_STATES.contains(state)) {\n+ logger.trace(\"shard [{}] is not yet ready for index buffer update. index shard state: [{}]\", indexShard.shardId(), state);\n+ continue;\n+ }\n ShardIndexingStatus status = shardsIndicesStatus.get(indexShard.shardId());\n if (status == null || !status.inactiveIndexing) {\n try {\n@@ -270,14 +287,14 @@ private void calcAndSetShardBuffers(String reason) {\n // ignore\n continue;\n } catch (Exception e) {\n- logger.warn(\"failed to set shard [{}][{}] index buffer to [{}]\", indexShard.shardId().index().name(), indexShard.shardId().id(), shardIndexingBufferSize);\n+ logger.warn(\"failed to set shard {} index buffer to [{}]\", indexShard.shardId(), shardIndexingBufferSize);\n }\n }\n }\n }\n }\n \n- private int countShards() {\n+ private int countActiveShards() {\n int shardsCount = 0;\n for (IndexService indexService : indicesService) {\n for (IndexShard indexShard : indexService) {",
"filename": "src/main/java/org/elasticsearch/indices/memory/IndexingMemoryController.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,80 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.indices.memory;\n+\n+import com.google.common.base.Predicate;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.index.engine.internal.InternalEngine;\n+import org.elasticsearch.index.shard.service.InternalIndexShard;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+\n+@ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.TEST, numDataNodes = 0)\n+public class IndexMemoryControllerTests extends ElasticsearchIntegrationTest {\n+\n+ @Test\n+ public void testIndexBufferSizeUpdateAfterShardCreation() throws InterruptedException {\n+\n+ internalCluster().startNode(ImmutableSettings.builder()\n+ .put(\"http.enabled\", \"false\")\n+ .put(\"discovery.type\", \"local\")\n+ .put(\"indices.memory.interval\", \"1s\")\n+ );\n+\n+ client().admin().indices().prepareCreate(\"test1\")\n+ .setSettings(ImmutableSettings.builder()\n+ .put(\"number_of_shards\", 1)\n+ .put(\"number_of_replicas\", 0)\n+ ).get();\n+\n+ ensureGreen();\n+\n+ final InternalIndexShard shard1 = (InternalIndexShard) internalCluster().getInstance(IndicesService.class).indexService(\"test1\").shard(0);\n+\n+ client().admin().indices().prepareCreate(\"test2\")\n+ .setSettings(ImmutableSettings.builder()\n+ .put(\"number_of_shards\", 1)\n+ .put(\"number_of_replicas\", 0)\n+ ).get();\n+\n+ ensureGreen();\n+\n+ final InternalIndexShard shard2 = (InternalIndexShard) internalCluster().getInstance(IndicesService.class).indexService(\"test2\").shard(0);\n+ final long expectedShardSize = internalCluster().getInstance(IndexingMemoryController.class).indexingBufferSize().bytes() / 2;\n+\n+ boolean success = awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object input) {\n+ return ((InternalEngine) shard1.engine()).indexingBufferSize().bytes() <= expectedShardSize &&\n+ ((InternalEngine) shard2.engine()).indexingBufferSize().bytes() <= expectedShardSize;\n+ }\n+ });\n+\n+ if (!success) {\n+ fail(\"failed to update shard indexing buffer size. expected [\" + expectedShardSize + \"] shard1 [\" +\n+ ((InternalEngine) shard1.engine()).indexingBufferSize().bytes() + \"] shard2 [\" +\n+ ((InternalEngine) shard1.engine()).indexingBufferSize().bytes() + \"]\"\n+ );\n+ }\n+\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/indices/memory/IndexMemoryControllerTests.java",
"status": "added"
}
]
} |
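
The IndexingMemoryController change above only touches shards whose lifecycle state allows a buffer update. As a minimal illustration of that guard, here is a self-contained Java sketch; the `ShardState` enum, the `Shard` class, and the `CAN_UPDATE` set are stand-ins for Elasticsearch's `IndexShardState`, `IndexShard`, and `CAN_UPDATE_INDEX_BUFFER_STATES`, and the exact set membership is an assumption, not the controller's actual list.

```java
import java.util.EnumSet;
import java.util.List;

public class BufferUpdateSketch {

    // Hypothetical stand-in for org.elasticsearch.index.shard.IndexShardState
    enum ShardState { CREATED, RECOVERING, POST_RECOVERY, STARTED, RELOCATED, CLOSED }

    // Hypothetical stand-in for IndexShard: just enough to demonstrate the guard
    static class Shard {
        final int id;
        final ShardState state;
        Shard(int id, ShardState state) { this.id = id; this.state = state; }
        void updateIndexingBuffer(long bytes) {
            System.out.printf("shard [%d]: indexing buffer set to [%d] bytes%n", id, bytes);
        }
    }

    // Only shards in these states get touched, mirroring the idea behind
    // CAN_UPDATE_INDEX_BUFFER_STATES in the diff above (membership assumed).
    private static final EnumSet<ShardState> CAN_UPDATE =
            EnumSet.of(ShardState.POST_RECOVERY, ShardState.STARTED, ShardState.RELOCATED);

    static void calcAndSetShardBuffers(List<Shard> shards, long totalBudgetBytes) {
        long active = shards.stream().filter(s -> CAN_UPDATE.contains(s.state)).count();
        if (active == 0) {
            return; // nothing ready yet; also avoids dividing by zero
        }
        long perShard = totalBudgetBytes / active;
        for (Shard shard : shards) {
            if (!CAN_UPDATE.contains(shard.state)) {
                // shard is not yet ready for an index buffer update; skip instead of failing
                continue;
            }
            shard.updateIndexingBuffer(perShard);
        }
    }

    public static void main(String[] args) {
        calcAndSetShardBuffers(
                List.of(new Shard(0, ShardState.STARTED), new Shard(1, ShardState.RECOVERING)),
                64 * 1024 * 1024);
    }
}
```

Skipping shards that are still recovering, rather than failing, is what lets the periodic recalculation run while new shards are being created, which is exactly the scenario the added integration test reproduces.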
{
"body": "Line 524 uses \"include == null\" to check if the exclude string is null.\n\nI don't know why this had to be a one-liner, but I don't like using ternary operators, particularly twice (!) on the same line. Makes it harder to read and can lead to bugs like this.\n\nAffects v1.2.1\n\nCheers!\n",
"comments": [],
"number": 6632,
"title": "Fix source excludes setting if no includes were provided"
} | {
"body": "Due to a bogus if-check in SearchSourceBuilder.fetchSource(String include, String exclude)\nthe excludes only got set when the includes were not null. Fixed this and added some\nbasic tests.\n\nCloses #6632\n",
"number": 6649,
"review_comments": [
{
"body": "You meant to add here the issue number right?\n",
"created_at": "2014-06-30T16:07:36Z"
},
{
"body": "oops. fixed\n",
"created_at": "2014-07-01T06:22:47Z"
}
],
"title": "JAVA API: Fix source excludes setting if no includes were provided"
} | {
"commits": [
{
"message": "JAVA API: Fix source excludes setting if no includes were provided\n\nDue to a bogus if-check in SearchSourceBuilder.fetchSource(String include, String exclude)\nthe excludes only got set when the includes were not null. Fixed this and added some\nbasic tests.\n\nCloses #6632"
}
],
"files": [
{
"diff": "@@ -521,7 +521,7 @@ public SearchSourceBuilder fetchSource(boolean fetch) {\n * @param exclude An optional exclude (optionally wildcarded) pattern to filter the returned _source\n */\n public SearchSourceBuilder fetchSource(@Nullable String include, @Nullable String exclude) {\n- return fetchSource(include == null ? Strings.EMPTY_ARRAY : new String[]{include}, include == null ? Strings.EMPTY_ARRAY : new String[]{exclude});\n+ return fetchSource(include == null ? Strings.EMPTY_ARRAY : new String[]{include}, exclude == null ? Strings.EMPTY_ARRAY : new String[]{exclude});\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,86 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search.builder;\n+\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.List;\n+import java.util.Map;\n+\n+import static org.hamcrest.Matchers.*;\n+\n+public class SearchSourceBuilderTest extends ElasticsearchTestCase {\n+\n+ SearchSourceBuilder builder = new SearchSourceBuilder();\n+\n+ @Test // issue #6632\n+ public void testThatSearchSourceBuilderIncludesExcludesAreAppliedCorrectly() throws Exception {\n+ builder.fetchSource(\"foo\", null);\n+ assertIncludes(builder, \"foo\");\n+ assertExcludes(builder);\n+\n+ builder.fetchSource(null, \"foo\");\n+ assertIncludes(builder);\n+ assertExcludes(builder, \"foo\");\n+\n+ builder.fetchSource(null, new String[]{\"foo\"});\n+ assertIncludes(builder);\n+ assertExcludes(builder, \"foo\");\n+\n+ builder.fetchSource(new String[]{\"foo\"}, null);\n+ assertIncludes(builder, \"foo\");\n+ assertExcludes(builder);\n+\n+ builder.fetchSource(\"foo\", \"bar\");\n+ assertIncludes(builder, \"foo\");\n+ assertExcludes(builder, \"bar\");\n+\n+ builder.fetchSource(new String[]{\"foo\"}, new String[]{\"bar\", \"baz\"});\n+ assertIncludes(builder, \"foo\");\n+ assertExcludes(builder, \"bar\", \"baz\");\n+ }\n+\n+ private void assertIncludes(SearchSourceBuilder builder, String... elems) throws IOException {\n+ assertFieldValues(builder, \"includes\", elems);\n+ }\n+\n+ private void assertExcludes(SearchSourceBuilder builder, String... elems) throws IOException {\n+ assertFieldValues(builder, \"excludes\", elems);\n+ }\n+\n+ private void assertFieldValues(SearchSourceBuilder builder, String fieldName, String... elems) throws IOException {\n+ Map<String, Object> map = getSourceMap(builder);\n+\n+ assertThat(map, hasKey(fieldName));\n+ assertThat(map.get(fieldName), is(instanceOf(List.class)));\n+ List<String> castedList = (List<String>) map.get(fieldName);\n+ assertThat(castedList, hasSize(elems.length));\n+ assertThat(castedList, hasItems(elems));\n+ }\n+\n+ private Map<String, Object> getSourceMap(SearchSourceBuilder builder) throws IOException {\n+ Map<String, Object> data = JsonXContent.jsonXContent.createParser(builder.toString()).mapAndClose();\n+ assertThat(data, hasKey(\"_source\"));\n+ return (Map<String, Object>) data.get(\"_source\");\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/search/builder/SearchSourceBuilderTest.java",
"status": "added"
}
]
} |
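
The bug fixed above came from duplicating the null check inside a single expression. Below is a hedged sketch of how the same null-to-array conversion can be factored into one helper so `include` and `exclude` cannot be mixed up again; `FetchSourceArgs`, `wrap`, and the two-element return array are illustrative only, not the real `SearchSourceBuilder` API.

```java
public final class FetchSourceArgs {

    private FetchSourceArgs() {}

    /** Wraps a possibly-null pattern into array form, using an empty array for null. */
    static String[] wrap(String pattern) {
        return pattern == null ? new String[0] : new String[] { pattern };
    }

    /** Mirrors the intent of fetchSource(String include, String exclude) without the duplicated ternary. */
    static String[][] fetchSource(String include, String exclude) {
        return new String[][] { wrap(include), wrap(exclude) };
    }

    public static void main(String[] args) {
        String[][] result = fetchSource(null, "secret.*");
        // prints "includes: 0, excludes: 1" -- the case the original one-liner got wrong
        System.out.println("includes: " + result[0].length + ", excludes: " + result[1].length);
    }
}
```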
{
"body": "Opening an issue as suggested by @dadoonet after the following discussion on the mailing list: https://groups.google.com/forum/#!topic/elasticsearch/HAA4Y8Qziqg\n\nWhen a bulk update action fails, an \"index\" entry can be returned in the response.\n\nExample bulk commands list with an update command failing because the second date can't be parsed:\n\n``` json\n{ \"index\" : { \"_index\" : \"test\", \"_type\" : \"type1\", \"_id\" : \"1\" } } \n{ \"title\" : \"Great Title of doc 1\" } \n{ \"index\" : { \"_index\" : \"test\", \"_type\" : \"type1\", \"_id\" : \"2\" } } \n{ \"title\" : \"Great Title of doc 2\" } \n{ \"update\" : { \"_index\" : \"test\", \"_type\" : \"type1\", \"_id\" : \"1\" } } \n{ \"doc\" : { \"date\" : \"2014-04-30T23:59:57\" }} \n{ \"update\" : { \"_index\" : \"test\", \"_type\" : \"type1\", \"_id\" : \"2\" } } \n{ \"doc\" : { \"date\" : \"2014-04-31T00:00:01\" }} \n{ \"delete\" : { \"_index\" : \"test\", \"_type\" : \"type1\", \"_id\" : \"1\" } } \n{ \"delete\" : { \"_index\" : \"test\", \"_type\" : \"type1\", \"_id\" : \"2\" } }\n```\n\nHere is the actual response (elasticsearch v1.1):\n\n``` json\n{\n \"took\" : 4, \n \"errors\" : true, \n \"items\" : [ { \n \"index\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"1\", \n \"_version\" : 8, \n \"status\" : 201 \n } \n }, { \n \"index\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"2\", \n \"_version\" : 5, \n \"status\" : 201 \n } \n }, { \n \"update\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"1\", \n \"_version\" : 9, \n \"status\" : 200 \n } \n }, { \n \"index\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"2\", \n \"status\" : 400, \n \"error\" : \"MapperParsingException[failed to parse [date]]; nested: MapperParsingException[failed to parse date field [2014-04-31T00:00:01], tried both date format [dateOptionalTime], and timestamp number with locale []]; nested: IllegalFieldValueException[Cannot parse \\\"2014-04-31T00:00:01\\\": Value 31 for dayOfMonth must be in the range [1,30]]; \" \n } \n }, { \n \"delete\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"1\", \n \"_version\" : 10, \n \"status\" : 200, \n \"found\" : true \n } \n }, { \n \"delete\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"2\", \n \"_version\" : 6, \n \"status\" : 200, \n \"found\" : true \n } \n } ] \n}\n```\n\nThe problem here concerns the bulk update response for the forth item. 
It's key is `index` while it should be `update`, as in the following hand edited response:\n\n``` json\n{\n \"took\" : 4, \n \"errors\" : true, \n \"items\" : [ { \n \"index\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"1\", \n \"_version\" : 8, \n \"status\" : 201 \n } \n }, { \n \"index\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"2\", \n \"_version\" : 5, \n \"status\" : 201 \n } \n }, { \n \"update\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"1\", \n \"_version\" : 9, \n \"status\" : 200 \n } \n }, { \n \"update\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"2\", \n \"status\" : 400, \n \"error\" : \"MapperParsingException[failed to parse [date]]; nested: MapperParsingException[failed to parse date field [2014-04-31T00:00:01], tried both date format [dateOptionalTime], and timestamp number with locale []]; nested: IllegalFieldValueException[Cannot parse \\\"2014-04-31T00:00:01\\\": Value 31 for dayOfMonth must be in the range [1,30]]; \" \n } \n }, { \n \"delete\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"1\", \n \"_version\" : 10, \n \"status\" : 200, \n \"found\" : true \n } \n }, { \n \"delete\" : { \n \"_index\" : \"test\", \n \"_type\" : \"type1\", \n \"_id\" : \"2\", \n \"_version\" : 6, \n \"status\" : 200, \n \"found\" : true \n } \n } ] \n}\n```\n\nI didn't do a lot of code reading but a possible starting point for investigations is there: https://github.com/elasticsearch/elasticsearch/blob/1.1/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java#L317\n",
"comments": [],
"number": 6630,
"title": "Bulk API: Fix return of wrong request type on failed updates"
} | {
"body": "In case an update request failed (for example when updating with a\nwrongly formatted date), the returned index operation type was index\ninstead of update.\n\nCloses #6630\n",
"number": 6646,
"review_comments": [
{
"body": "a bit of nitpicking here: can we use a constant? I see that we use the same string many times in this class. Same could be done for delete and the other string constants as well while we are at it (could be a separate change though)...\n",
"created_at": "2014-06-30T16:31:58Z"
},
{
"body": "added a second commit with delete and update constants, index is not needed as it is always extracted from the request operation type\n",
"created_at": "2014-07-01T06:23:29Z"
}
],
"title": "Fix return of wrong request type on failed updates"
} | {
"commits": [
{
"message": "Bulk API: Fix return of wrong request type on failed updates\n\nIn case an update request failed (for example when updating with a\nwrongly formatted date), the returned index operation type was index\ninstead of update.\n\nCloses #6630"
},
{
"message": "Refactoring: Replaced string values with static constants\n\nin TransportShardBulkAction after fixing an issue."
}
],
"files": [
{
"diff": "@@ -72,6 +72,9 @@\n */\n public class TransportShardBulkAction extends TransportShardReplicationOperationAction<BulkShardRequest, BulkShardRequest, BulkShardResponse> {\n \n+ private final static String OP_TYPE_UPDATE = \"update\";\n+ private final static String OP_TYPE_DELETE = \"delete\";\n+\n private final MappingUpdatedAction mappingUpdatedAction;\n private final UpdateHelper updateHelper;\n private final boolean allowIdGeneration;\n@@ -208,7 +211,7 @@ protected PrimaryResponse<BulkShardResponse, BulkShardRequest> shardOperationOnP\n try {\n // add the response\n DeleteResponse deleteResponse = shardDeleteOperation(deleteRequest, indexShard).response();\n- responses[requestIndex] = new BulkItemResponse(item.id(), \"delete\", deleteResponse);\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_DELETE, deleteResponse);\n } catch (Throwable e) {\n // rethrow the failure if we are going to retry on primary and let parent failure to handle it\n if (retryPrimaryException(e)) {\n@@ -223,7 +226,7 @@ protected PrimaryResponse<BulkShardResponse, BulkShardRequest> shardOperationOnP\n } else {\n logger.debug(\"[{}][{}] failed to execute bulk item (delete) {}\", e, shardRequest.request.index(), shardRequest.shardId, deleteRequest);\n }\n- responses[requestIndex] = new BulkItemResponse(item.id(), \"delete\",\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_DELETE,\n new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), e));\n // nullify the request so it won't execute on the replicas\n request.items()[requestIndex] = null;\n@@ -255,7 +258,7 @@ protected PrimaryResponse<BulkShardResponse, BulkShardRequest> shardOperationOnP\n Tuple<XContentType, Map<String, Object>> sourceAndContent = XContentHelper.convertToMap(indexSourceAsBytes, true);\n updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, indexResponse.getVersion(), sourceAndContent.v2(), sourceAndContent.v1(), indexSourceAsBytes));\n }\n- responses[requestIndex] = new BulkItemResponse(item.id(), \"update\", updateResponse);\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_UPDATE, updateResponse);\n if (result.mappingToUpdate != null) {\n mappingsToUpdate.add(result.mappingToUpdate);\n }\n@@ -273,12 +276,12 @@ protected PrimaryResponse<BulkShardResponse, BulkShardRequest> shardOperationOnP\n DeleteRequest deleteRequest = updateResult.request();\n updateResponse = new UpdateResponse(response.getIndex(), response.getType(), response.getId(), response.getVersion(), false);\n updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, response.getVersion(), updateResult.result.updatedSourceAsMap(), updateResult.result.updateSourceContentType(), null));\n- responses[requestIndex] = new BulkItemResponse(item.id(), \"update\", updateResponse);\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_UPDATE, updateResponse);\n // Replace the update request to the translated delete request to execute on the replica.\n request.items()[requestIndex] = new BulkItemRequest(request.items()[requestIndex].id(), deleteRequest);\n break;\n case NONE:\n- responses[requestIndex] = new BulkItemResponse(item.id(), \"update\", updateResult.noopResult);\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_UPDATE, updateResult.noopResult);\n request.items()[requestIndex] = null; // No need to go to the replica\n break;\n }\n@@ -290,7 +293,7 @@ protected PrimaryResponse<BulkShardResponse, BulkShardRequest> 
shardOperationOnP\n // updateAttemptCount is 0 based and marks current attempt, if it's equal to retryOnConflict we are going out of the iteration\n if (updateAttemptsCount >= updateRequest.retryOnConflict()) {\n // we can't try any more\n- responses[requestIndex] = new BulkItemResponse(item.id(), \"update\",\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_UPDATE,\n new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), t));\n request.items()[requestIndex] = null; // do not send to replicas\n }\n@@ -304,7 +307,7 @@ protected PrimaryResponse<BulkShardResponse, BulkShardRequest> shardOperationOnP\n throw (ElasticsearchException) t;\n }\n if (updateResult.result == null) {\n- responses[requestIndex] = new BulkItemResponse(item.id(), \"update\", new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), t));\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_UPDATE, new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(), updateRequest.id(), t));\n } else {\n switch (updateResult.result.operation()) {\n case UPSERT:\n@@ -315,7 +318,7 @@ protected PrimaryResponse<BulkShardResponse, BulkShardRequest> shardOperationOnP\n } else {\n logger.debug(\"[{}][{}] failed to execute bulk item (index) {}\", t, shardRequest.request.index(), shardRequest.shardId, indexRequest);\n }\n- responses[requestIndex] = new BulkItemResponse(item.id(), indexRequest.opType().lowercase(),\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_UPDATE,\n new BulkItemResponse.Failure(indexRequest.index(), indexRequest.type(), indexRequest.id(), t));\n break;\n case DELETE:\n@@ -325,7 +328,7 @@ protected PrimaryResponse<BulkShardResponse, BulkShardRequest> shardOperationOnP\n } else {\n logger.debug(\"[{}][{}] failed to execute bulk item (delete) {}\", t, shardRequest.request.index(), shardRequest.shardId, deleteRequest);\n }\n- responses[requestIndex] = new BulkItemResponse(item.id(), \"delete\",\n+ responses[requestIndex] = new BulkItemResponse(item.id(), OP_TYPE_DELETE,\n new BulkItemResponse.Failure(deleteRequest.index(), deleteRequest.type(), deleteRequest.id(), t));\n break;\n }",
"filename": "src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java",
"status": "modified"
},
{
"diff": "@@ -23,9 +23,12 @@\n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.count.CountResponse;\n+import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.get.GetResponse;\n+import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.update.UpdateRequest;\n import org.elasticsearch.action.update.UpdateRequestBuilder;\n import org.elasticsearch.action.update.UpdateResponse;\n import org.elasticsearch.common.bytes.BytesArray;\n@@ -596,5 +599,35 @@ public void testThatInvalidIndexNamesShouldNotBreakCompleteBulkRequest() {\n assertThat(bulkResponse.getItems()[i].isFailed(), is(expectedFailures[i]));\n }\n }\n+\n+ @Test // issue 6630\n+ public void testThatFailedUpdateRequestReturnsCorrectType() throws Exception {\n+ BulkResponse indexBulkItemResponse = client().prepareBulk()\n+ .add(new IndexRequest(\"test\", \"type\", \"3\").source(\"{ \\\"title\\\" : \\\"Great Title of doc 3\\\" }\"))\n+ .add(new IndexRequest(\"test\", \"type\", \"4\").source(\"{ \\\"title\\\" : \\\"Great Title of doc 4\\\" }\"))\n+ .add(new IndexRequest(\"test\", \"type\", \"5\").source(\"{ \\\"title\\\" : \\\"Great Title of doc 5\\\" }\"))\n+ .add(new IndexRequest(\"test\", \"type\", \"6\").source(\"{ \\\"title\\\" : \\\"Great Title of doc 6\\\" }\"))\n+ .setRefresh(true)\n+ .get();\n+ assertNoFailures(indexBulkItemResponse);\n+\n+ BulkResponse bulkItemResponse = client().prepareBulk()\n+ .add(new IndexRequest(\"test\", \"type\", \"1\").source(\"{ \\\"title\\\" : \\\"Great Title of doc 1\\\" }\"))\n+ .add(new IndexRequest(\"test\", \"type\", \"2\").source(\"{ \\\"title\\\" : \\\"Great Title of doc 2\\\" }\"))\n+ .add(new UpdateRequest(\"test\", \"type\", \"3\").doc(\"{ \\\"date\\\" : \\\"2014-01-30T23:59:57\\\"}\"))\n+ .add(new UpdateRequest(\"test\", \"type\", \"4\").doc(\"{ \\\"date\\\" : \\\"2014-13-30T23:59:57\\\"}\"))\n+ .add(new DeleteRequest(\"test\", \"type\", \"5\"))\n+ .add(new DeleteRequest(\"test\", \"type\", \"6\"))\n+ .get();\n+\n+ assertNoFailures(indexBulkItemResponse);\n+ assertThat(bulkItemResponse.getItems().length, is(6));\n+ assertThat(bulkItemResponse.getItems()[0].getOpType(), is(\"index\"));\n+ assertThat(bulkItemResponse.getItems()[1].getOpType(), is(\"index\"));\n+ assertThat(bulkItemResponse.getItems()[2].getOpType(), is(\"update\"));\n+ assertThat(bulkItemResponse.getItems()[3].getOpType(), is(\"update\"));\n+ assertThat(bulkItemResponse.getItems()[4].getOpType(), is(\"delete\"));\n+ assertThat(bulkItemResponse.getItems()[5].getOpType(), is(\"delete\"));\n+ }\n }\n ",
"filename": "src/test/java/org/elasticsearch/document/BulkTests.java",
"status": "modified"
}
]
} |
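
For context on why the reported op type matters downstream, here is a hedged consumer-side sketch built from the same 1.x Java API calls the test above exercises (`prepareBulk`, `getItems`, `getOpType`, `isFailed`); client construction is elided, the index and document values are made up, and this is not the project's own example code.

```java
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.client.Client;

public class BulkOpTypeCheck {

    // Sketch only: 'client' is assumed to be an already-constructed Client.
    static void indexAndVerify(Client client) {
        BulkResponse response = client.prepareBulk()
                .add(new IndexRequest("test", "type", "1").source("{ \"title\" : \"doc 1\" }"))
                .add(new UpdateRequest("test", "type", "1").doc("{ \"date\" : \"2014-13-30T23:59:57\" }"))
                .get();

        for (BulkItemResponse item : response.getItems()) {
            // Before the fix, a failed update could come back tagged as "index";
            // callers that route retries or error handling by op type would then act on the wrong kind.
            System.out.println(item.getOpType() + " failed=" + item.isFailed());
        }
    }
}
```

In the issue's example above, this is the fourth item: an update that fails to parse a date was reported under an `index` key even though an update was submitted.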
{
"body": "I'm playing around with groovy and I think exceptions aren't serializing properly:\n\n```\n[2014-06-23 18:19:54,091][INFO ][org.elasticsearch.index.mapper] Action Failed\norg.elasticsearch.transport.RemoteTransportException: [node_0][inet[/192.168.0.101:9300]][index]\nCaused by: org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream\nCaused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream\n at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)\n at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)\n at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)\n at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)\n at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)\n at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.io.InvalidClassException: failed to read class descriptor\n at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1603)\n at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)\n at java.io.ObjectInputStream.readClass(ObjectInputStream.java:1483)\n at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1333)\n at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)\n at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)\n at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)\n at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)\n at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)\n at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:500)\n at 
java.lang.Throwable.readObject(Throwable.java:914)\n at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:606)\n at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)\n at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)\n at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)\n at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)\n at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)\n at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:500)\n at java.lang.Throwable.readObject(Throwable.java:914)\n at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:606)\n at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)\n at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)\n at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)\n at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)\n at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)\n at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:500)\n at java.lang.Throwable.readObject(Throwable.java:914)\n at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:606)\n at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)\n at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)\n at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)\n at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)\n at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)\n at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:167)\n ... 23 more\nCaused by: java.lang.ClassNotFoundException: Script4\n at java.net.URLClassLoader$1.run(URLClassLoader.java:366)\n at java.net.URLClassLoader$1.run(URLClassLoader.java:355)\n at java.security.AccessController.doPrivileged(Native Method)\n at java.net.URLClassLoader.findClass(URLClassLoader.java:354)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:425)\n at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)\n at java.lang.ClassLoader.loadClass(ClassLoader.java:358)\n at org.elasticsearch.common.io.ThrowableObjectInputStream.loadClass(ThrowableObjectInputStream.java:93)\n at org.elasticsearch.common.io.ThrowableObjectInputStream.readClassDescriptor(ThrowableObjectInputStream.java:67)\n at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1601)\n ... 62 more\n```\n\nI'm in the middle of work on #6566 so I don't have easy reproduction steps, but I'll see if I can make some soon.\n",
"comments": [
{
"body": "Ping @dakrone. I'll work on it in a bit but I think it has to do with when a variable isn't found. \n",
"created_at": "2014-06-23T22:28:14Z"
},
{
"body": "@nik9000 sounds good, I definitely want to figure out what's causing this.\n",
"created_at": "2014-06-23T22:29:04Z"
},
{
"body": "@dakrone, got it: start a bunch of servers. Enough that your request goes to more then one. Then do this:\n\n``` js\ncurl -XPOST \"http://localhost:9200/test/test/1?pretty\" -d '{\"content\": \"findme\"}'\ncurl -XPOST \"http://localhost:9200/test/test/2?pretty\" -d '{\"title\": \"cat\", \"content\": \"findme\"}'\ncurl -XPOST \"http://localhost:9200/test/test/3?pretty\" -d '{\"title\": \"table\", \"content\": \"findme\"}'\ncurl -XPOST \"http://localhost:9200/test/_refresh?pretty\"\ncurl -XPOST \"http://localhost:9200/test/test/_search?pretty\" -d '{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"script\": {\n \"script\": \"1 == not_found\",\n \"lang\": \"groovy\"\n }\n }\n }\n }\n}'\n```\n\nI don't imagine the contents of the documents matter - just that they end up on a bunch of nodes.\nHere is what that spits out for me:\n\n``` js\n{\n \"took\" : 35,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 2,\n \"failed\" : 3,\n \"failures\" : [ {\n \"index\" : \"test\",\n \"shard\" : 4,\n \"status\" : 500,\n \"reason\" : \"QueryPhaseExecutionException[[test][4]: query[filtered(ConstantScore(ScriptFilter(1 == not_found)))->cache(_type:test)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: MissingPropertyException[No such property: not_found for class: Script5]; \"\n }, {\n \"index\" : \"test\",\n \"shard\" : 3,\n \"status\" : 500,\n \"reason\" : \"RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: InvalidClassException[failed to read class descriptor]; nested: ClassNotFoundException[Script3]; \"\n }, {\n \"index\" : \"test\",\n \"shard\" : 2,\n \"status\" : 500,\n \"reason\" : \"RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: InvalidClassException[failed to read class descriptor]; nested: ClassNotFoundException[Script2]; \"\n } ]\n },\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : null,\n \"hits\" : [ ]\n }\n}\n```\n",
"created_at": "2014-06-23T22:41:13Z"
},
{
"body": "3 servers did it for me.\n",
"created_at": "2014-06-23T22:42:03Z"
},
{
"body": "I think it comes from trying to serialize the groovy exceptions objects, I would suggest we catch a script execution exception, but not have the throwable in our wrapping script exception. Users might run client side node/transport clients that don't have groovy in the class path for example.\n",
"created_at": "2014-06-24T10:55:18Z"
},
{
"body": "I think @kimchy's right. What I'm seeing is that groovy decided to name the compiled classes something different but even without that the chance that the user doesn't even have groovy in their classpath means we should transform the exception without adding the cause. I imagine MVEL didn't have this problem because MVEL was required and was interpreted instead of compiled to byte code.\n",
"created_at": "2014-06-24T13:13:15Z"
},
{
"body": "Thanks for bringing this up @nik9000!\n",
"created_at": "2014-06-30T15:14:53Z"
}
],
"number": 6598,
"title": "Scripting: Wrap groovy script exceptions in a serializable Exception object"
} | {
"body": "Fixes #6598\n\nIt prevents ES from trying to serialize the default Groovy exceptions, which want to carry over a lot of state that doesn't serialize properly.\n",
"number": 6628,
"review_comments": [
{
"body": "It'd be nice to be sure it contained that `not_found` wasn't found.\n",
"created_at": "2014-06-26T12:50:08Z"
},
{
"body": "Good idea! I've added this.\n",
"created_at": "2014-06-26T18:29:43Z"
},
{
"body": "I suggest we trace log the failure as well, the stack trace might prove important if we are debugging more complex script failures\n",
"created_at": "2014-06-27T07:09:08Z"
},
{
"body": "few more things:\n- we should also wrap compilation failures\n- are we sure all groovy code path wraps GroovyRuntimeException? might make sense to catch a more generic exception?\n",
"created_at": "2014-06-27T07:21:45Z"
},
{
"body": "I agree on logging, and I think we should enumerate the failures that this can have (like sandbox exceptions), I'll work on adding this.\n",
"created_at": "2014-06-27T08:32:34Z"
},
{
"body": "can we log the groovy script name here? and start with lower case e :) . Also, catch Throwable in case of assertions enabled?\n",
"created_at": "2014-06-30T13:12:17Z"
},
{
"body": "same as the previous logging comment, script name, ...\n",
"created_at": "2014-06-30T13:12:46Z"
},
{
"body": "can we use ExceptionHelper#detailedMessage?\n",
"created_at": "2014-06-30T13:13:08Z"
},
{
"body": "can we use ExceptionHelper#detailedMessage?\n",
"created_at": "2014-06-30T13:13:49Z"
},
{
"body": "you have a logger in the AbstractComponent, can you use that and not create a new one? It automatically makes sure to log the node name in top level component, and if its an index / shard service, log the index/shard as well\n",
"created_at": "2014-06-30T13:15:00Z"
},
{
"body": "can we add a status code here that is not 500? probably its a user error? 400?\n",
"created_at": "2014-06-30T13:15:54Z"
},
{
"body": "can we add a status code here that is not 500? probably its a user error? 400?\n",
"created_at": "2014-06-30T13:16:00Z"
},
{
"body": "the script name is going to be \"Script1.groovy\", \"Script2.groovy\", \"Script3.groovy\", is that actually helpful?\n",
"created_at": "2014-06-30T13:16:48Z"
},
{
"body": "What do you think about catching AssertionError directly? I am worried that catching Throwable means catching `OutOfMemoryError` and swallowing the exception (which we have to do for the groovy exceptions).\n",
"created_at": "2014-06-30T13:24:28Z"
},
{
"body": "I think you forgot to add the name of the script here to the failure log\n",
"created_at": "2014-06-30T13:46:14Z"
},
{
"body": "I think you forgot to add the name of the script here to the failure log\n",
"created_at": "2014-06-30T13:46:22Z"
}
],
"title": "Scripting: Wrap groovy script exceptions in a serializable Exception object"
} | {
"commits": [
{
"message": "Wrap groovy script exceptions in a serializable Exception object\n\nFixes #6598"
}
],
"files": [
{
"diff": "@@ -0,0 +1,38 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.script.groovy;\n+\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.rest.RestStatus;\n+\n+/**\n+ * Exception used to wrap groovy script compilation exceptions so they are\n+ * correctly serialized between nodes.\n+ */\n+public class GroovyScriptCompilationException extends ElasticsearchException {\n+ public GroovyScriptCompilationException(String message) {\n+ super(message);\n+ }\n+\n+ @Override\n+ public RestStatus status() {\n+ return RestStatus.BAD_REQUEST;\n+ }\n+}",
"filename": "src/main/java/org/elasticsearch/script/groovy/GroovyScriptCompilationException.java",
"status": "added"
},
{
"diff": "@@ -35,9 +35,11 @@\n import org.codehaus.groovy.control.SourceUnit;\n import org.codehaus.groovy.control.customizers.CompilationCustomizer;\n import org.codehaus.groovy.control.customizers.ImportCustomizer;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.script.ExecutableScript;\n import org.elasticsearch.script.ScriptEngineService;\n@@ -106,7 +108,14 @@ public boolean sandboxed() {\n \n @Override\n public Object compile(String script) {\n- return loader.parseClass(script, generateScriptName());\n+ try {\n+ return loader.parseClass(script, generateScriptName());\n+ } catch (Throwable e) {\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"exception compiling Groovy script:\", e);\n+ }\n+ throw new GroovyScriptCompilationException(ExceptionsHelper.detailedMessage(e));\n+ }\n }\n \n /**\n@@ -129,7 +138,7 @@ public ExecutableScript executable(Object compiledScript, Map<String, Object> va\n if (vars != null) {\n allVars.putAll(vars);\n }\n- return new GroovyScript(createScript(compiledScript, allVars));\n+ return new GroovyScript(createScript(compiledScript, allVars), this.logger);\n } catch (Exception e) {\n throw new ScriptException(\"failed to build executable script\", e);\n }\n@@ -145,7 +154,7 @@ public SearchScript search(Object compiledScript, SearchLookup lookup, @Nullable\n allVars.putAll(vars);\n }\n Script scriptObject = createScript(compiledScript, allVars);\n- return new GroovyScript(scriptObject, lookup);\n+ return new GroovyScript(scriptObject, lookup, this.logger);\n } catch (Exception e) {\n throw new ScriptException(\"failed to build search script\", e);\n }\n@@ -180,14 +189,16 @@ public static final class GroovyScript implements ExecutableScript, SearchScript\n private final SearchLookup lookup;\n private final Map<String, Object> variables;\n private final UpdateableFloat score;\n+ private final ESLogger logger;\n \n- public GroovyScript(Script script) {\n- this(script, null);\n+ public GroovyScript(Script script, ESLogger logger) {\n+ this(script, null, logger);\n }\n \n- public GroovyScript(Script script, SearchLookup lookup) {\n+ public GroovyScript(Script script, SearchLookup lookup, ESLogger logger) {\n this.script = script;\n this.lookup = lookup;\n+ this.logger = logger;\n this.variables = script.getBinding().getVariables();\n this.score = new UpdateableFloat(0);\n // Add the _score variable, which will be updated per-document by\n@@ -237,7 +248,14 @@ public void setNextSource(Map<String, Object> source) {\n \n @Override\n public Object run() {\n- return script.run();\n+ try {\n+ return script.run();\n+ } catch (Throwable e) {\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"exception running Groovy script\", e);\n+ }\n+ throw new GroovyScriptExecutionException(ExceptionsHelper.detailedMessage(e));\n+ }\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/script/groovy/GroovyScriptEngineService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,38 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.script.groovy;\n+\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.rest.RestStatus;\n+\n+/**\n+ * Exception used to wrap groovy script execution exceptions so they are\n+ * correctly serialized between nodes.\n+ */\n+public class GroovyScriptExecutionException extends ElasticsearchException {\n+ public GroovyScriptExecutionException(String message) {\n+ super(message);\n+ }\n+\n+ @Override\n+ public RestStatus status() {\n+ return RestStatus.BAD_REQUEST;\n+ }\n+}",
"filename": "src/main/java/org/elasticsearch/script/groovy/GroovyScriptExecutionException.java",
"status": "added"
},
{
"diff": "@@ -19,11 +19,20 @@\n \n package org.elasticsearch.script;\n \n+import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n+import java.util.List;\n+\n+import static com.google.common.collect.Lists.newArrayList;\n+import static org.elasticsearch.index.query.FilterBuilders.scriptFilter;\n+import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n+import static org.hamcrest.Matchers.equalTo;\n \n /**\n * Various tests for Groovy scripting\n@@ -47,4 +56,50 @@ public void assertScript(String script) {\n \"; 1\\\", \\\"type\\\": \\\"number\\\", \\\"lang\\\": \\\"groovy\\\"}}}\").get();\n assertNoFailures(resp);\n }\n+\n+ @Test\n+ public void testGroovyExceptionSerialization() throws Exception {\n+ List<IndexRequestBuilder> reqs = newArrayList();\n+ for (int i = 0; i < randomIntBetween(50, 500); i++) {\n+ reqs.add(client().prepareIndex(\"test\", \"doc\", \"\" + i).setSource(\"foo\", \"bar\"));\n+ }\n+ indexRandom(true, false, reqs);\n+ try {\n+ client().prepareSearch(\"test\").setQuery(constantScoreQuery(scriptFilter(\"1 == not_found\").lang(\"groovy\"))).get();\n+ fail(\"should have thrown an exception\");\n+ } catch (SearchPhaseExecutionException e) {\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should not contained NotSerializableTransportException\",\n+ ExceptionsHelper.detailedMessage(e).contains(\"NotSerializableTransportException\"), equalTo(false));\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should have contained GroovyScriptExecutionException\",\n+ ExceptionsHelper.detailedMessage(e).contains(\"GroovyScriptExecutionException\"), equalTo(true));\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should have contained not_found\",\n+ ExceptionsHelper.detailedMessage(e).contains(\"No such property: not_found\"), equalTo(true));\n+ }\n+\n+ try {\n+ client().prepareSearch(\"test\").setQuery(constantScoreQuery(\n+ scriptFilter(\"pr = Runtime.getRuntime().exec(\\\"touch /tmp/gotcha\\\"); pr.waitFor()\").lang(\"groovy\"))).get();\n+ fail(\"should have thrown an exception\");\n+ } catch (SearchPhaseExecutionException e) {\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should not contained NotSerializableTransportException\",\n+ ExceptionsHelper.detailedMessage(e).contains(\"NotSerializableTransportException\"), equalTo(false));\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should have contained GroovyScriptCompilationException\",\n+ ExceptionsHelper.detailedMessage(e).contains(\"GroovyScriptCompilationException\"), equalTo(true));\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should have contained Method calls not allowed on [java.lang.Runtime]\",\n+ ExceptionsHelper.detailedMessage(e).contains(\"Method calls not allowed on [java.lang.Runtime]\"), equalTo(true));\n+ }\n+\n+ try {\n+ client().prepareSearch(\"test\").setQuery(constantScoreQuery(\n+ scriptFilter(\"assert false\").lang(\"groovy\"))).get();\n+ fail(\"should have thrown an exception\");\n+ } catch (SearchPhaseExecutionException e) {\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should not contained NotSerializableTransportException\",\n+ 
ExceptionsHelper.detailedMessage(e).contains(\"NotSerializableTransportException\"), equalTo(false));\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should have contained GroovyScriptExecutionException\",\n+ ExceptionsHelper.detailedMessage(e).contains(\"GroovyScriptExecutionException\"), equalTo(true));\n+ assertThat(ExceptionsHelper.detailedMessage(e) + \"should have contained an assert error\",\n+ ExceptionsHelper.detailedMessage(e).contains(\"PowerAssertionError[assert false\"), equalTo(true));\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/script/GroovyScriptTests.java",
"status": "modified"
}
]
} |
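
The wrapper classes above keep only the rendered message and drop the Groovy cause, so the response can be deserialized on nodes that do not have the generated `ScriptN` class (or Groovy at all) on the classpath. Here is a plain-JDK sketch of that "message, not cause" pattern; `ScriptExecutionException` and `flatten` are illustrative stand-ins for the real `GroovyScriptExecutionException` and `ExceptionsHelper.detailedMessage`, and the thrown `IllegalStateException` merely simulates a foreign exception type.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class MessageOnlyWrapping {

    /** Serializable wrapper that intentionally does NOT keep the foreign cause. */
    static class ScriptExecutionException extends RuntimeException {
        ScriptExecutionException(String detailedMessage) {
            super(detailedMessage); // message only; no reference to classes the peer may lack
        }
    }

    /** Illustrative stand-in for ExceptionsHelper.detailedMessage: class names plus messages. */
    static String flatten(Throwable t) {
        StringBuilder sb = new StringBuilder();
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (sb.length() > 0) {
                sb.append("; nested: ");
            }
            sb.append(cur.getClass().getSimpleName()).append('[').append(cur.getMessage()).append(']');
        }
        return sb.toString();
    }

    static Object runScript() {
        // Stand-in for script.run() throwing an exception type the other node cannot load.
        throw new IllegalStateException("No such property: not_found");
    }

    public static void main(String[] args) throws IOException {
        try {
            runScript();
        } catch (Throwable t) {
            ScriptExecutionException wrapped = new ScriptExecutionException(flatten(t));
            // Serializes cleanly because only JDK types are referenced.
            try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
                out.writeObject(wrapped);
            }
            System.out.println(wrapped.getMessage());
        }
    }
}
```

The trade-off discussed in the review comments applies here too: the stack trace of the original failure is lost on the wire, which is why the actual change logs it at trace level before wrapping.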
{
"body": "I have observed on a number of occasions that when I have large batch sizes / concurrent requests that I don't always get the same number of documents out of ES that I put in. Looking at:\n\nhttps://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java#L204\n\n``` java\n /**\n * Closes the processor. If flushing by time is enabled, then its shutdown. Any remaining bulk actions are flushed.\n */\n public synchronized void close() {\n if (closed) {\n return;\n }\n closed = true;\n if (this.scheduledFuture != null) {\n this.scheduledFuture.cancel(false);\n this.scheduler.shutdown();\n }\n if (bulkRequest.numberOfActions() > 0) {\n execute();\n }\n }\n```\n\nWhat happens if there are concurrent requests in-flight before the `close` call is made? Shouldn't this method block on those finishing? Or is the requirement / expectation that the client call `close`\n on `TransportClient` which will block on the in-flight requests for up to 10 seconds? \n\nAs far as I can tell, there is no reliable way to know it is safe to shutdown the JVM.\n",
"comments": [
{
"body": "This may be related but seems to concern error checking: https://github.com/elasticsearch/elasticsearch/issues/4301\n",
"created_at": "2014-05-27T10:50:14Z"
},
{
"body": "This appears to be a symptom of the missing functionality described in #4158. Is there a client side workaround given #5038? \n",
"created_at": "2014-05-27T12:55:25Z"
},
{
"body": "Hi @btiernay, #5038 is going to be fixed soon (see #6495 ), also I think it would be good to add a blocking variant of the `close` method, that tries and wait for all the in-flight bulk requests to be completed #4180 is just a merge away and achieves exactly this ;)\n\nDoes this address your concerns?\n",
"created_at": "2014-06-13T12:27:31Z"
}
],
"number": 6314,
"title": "BulkProcessor's close ignores in-flight bulkRequests"
} | {
"body": "Blocks until all bulk requests have completed. \n\nUpdated based on feedback\nUpdated formatting\n\nCloses #4158 \nCloses #6314 \n",
"number": 6586,
"review_comments": [],
"title": "Add a blocking variant of close() method to BulkProcessor"
} | {
"commits": [
{
"message": "Implementation of blocking close method\n\nBlocks until all bulk requests have completed. This fixes #4158"
}
],
"files": [
{
"diff": "@@ -191,13 +191,33 @@ public static Builder builder(Client client, Listener listener) {\n }\n }\n \n+ /**\n+ * Closes the processor. If flushing by time is enabled, then it's shutdown. Any remaining bulk actions are flushed.\n+ */\n @Override\n+ public void close() {\n+ try {\n+ awaitClose(0, TimeUnit.NANOSECONDS);\n+ } catch(InterruptedException exc) {\n+ Thread.currentThread().interrupt();\n+ }\n+ }\n+\n /**\n- * Closes the processor. If flushing by time is enabled, then its shutdown. Any remaining bulk actions are flushed.\n+ * Closes the processor. If flushing by time is enabled, then it's shutdown. Any remaining bulk actions are flushed.\n+ *\n+ * If concurrent requests are not enabled, returns {@code true} immediately.\n+ * If concurrent requests are enabled, waits for up to the specified timeout for all bulk requests to complete then returns {@code true},\n+ * If the specified waiting time elapses before all bulk requests complete, {@code false} is returned.\n+ *\n+ * @param timeout The maximum time to wait for the bulk requests to complete\n+ * @param unit The time unit of the {@code timeout} argument\n+ * @return {@code true} if all bulk requests completed and {@code false} if the waiting time elapsed before all the bulk requests completed\n+ * @throws InterruptedException If the current thread is interrupted\n */\n- public synchronized void close() {\n+ public synchronized boolean awaitClose(long timeout, TimeUnit unit) throws InterruptedException {\n if (closed) {\n- return;\n+ return true;\n }\n closed = true;\n if (this.scheduledFuture != null) {\n@@ -207,6 +227,10 @@ public synchronized void close() {\n if (bulkRequest.numberOfActions() > 0) {\n execute();\n }\n+ if (this.concurrentRequests < 1) {\n+ return true;\n+ }\n+ return semaphore.tryAcquire(this.concurrentRequests, timeout, unit);\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java",
"status": "modified"
}
]
} |
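
The `awaitClose` added above waits for in-flight bulks by acquiring every concurrency permit with a timeout. Below is a self-contained JDK sketch of that drain-on-close idiom; the class and field names are illustrative, and the real `BulkProcessor` wiring (scheduler shutdown, flushing the pending request) is omitted.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class DrainOnClose {

    private final int concurrentRequests = 2;
    private final Semaphore semaphore = new Semaphore(concurrentRequests);
    private final ExecutorService pool = Executors.newFixedThreadPool(concurrentRequests);
    private volatile boolean closed = false;

    /** Submits work only after taking a permit; releases it when the request completes. */
    void execute(Runnable request) throws InterruptedException {
        semaphore.acquire();
        pool.execute(() -> {
            try {
                request.run();
            } finally {
                semaphore.release();
            }
        });
    }

    /**
     * Returns true only once every in-flight request has released its permit,
     * or false if the timeout elapses first -- the same contract awaitClose documents.
     */
    synchronized boolean awaitClose(long timeout, TimeUnit unit) throws InterruptedException {
        if (closed) {
            return true;
        }
        closed = true;
        boolean drained = semaphore.tryAcquire(concurrentRequests, timeout, unit);
        pool.shutdown();
        return drained;
    }

    public static void main(String[] args) throws InterruptedException {
        DrainOnClose processor = new DrainOnClose();
        processor.execute(() -> sleepQuietly(200));
        processor.execute(() -> sleepQuietly(200));
        System.out.println("all in-flight requests done: " + processor.awaitClose(5, TimeUnit.SECONDS));
    }

    private static void sleepQuietly(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

This also answers the question raised in issue 6314: a caller that needs to know it is safe to shut down the JVM blocks on the timed variant instead of the fire-and-forget `close()`.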
{
"body": "Create a new schema with a field and the _timestamp field enabled:\n\ncurl -XPOST http://localhost:9200/foo -d '\n{\n \"mappings\": {\n \"product\": {\n \"_timestamp\" : { \"enabled\" : true },\n \"properties\": {\n \"field1\": { \"type\": \"integer\" }\n }\n }\n }\n}'\n\nRetrieve the mapping, all looks good:\ncurl http://localhost:9200/foo/_mapping\n{\"foo\":{\"product\":{\"_timestamp\":{\"enabled\":true},\"properties\":{\"field1\":{\"type\":\"integer\"}}}}}\n\nNow add another field:\ncurl -XPUT http://localhost:9200/foo/product/_mapping -d '\n{\n \"product\" : {\n \"properties\": {\n \"field2\" : {\"type\" : \"integer\" }\n }\n }\n}\n'\nGet the mapping again:\ncurl http://localhost:9200/foo/_mapping\n{\"foo\":{\"product\":{\"properties\":{\"field1\":{\"type\":\"integer\"},\"field2\":{\"type\":\"integer\"}}}}}\n\nThe _timestamp field is gone!\n",
"comments": [
{
"body": "version 0.90.3\n",
"created_at": "2014-02-07T18:09:50Z"
},
{
"body": "Hey\n\nI cannot reproduce this with any of the 0.90 release (tried .0, .3 and .11).\n\nThe URLs you provided are not valid (must be `_mapping` and the PUT call actual create a document called `mapping` in the `foo` index. Can you fix this and provide a working example, so we can either find out if this is a bug on elasticsearch side or maybe your HTTP calls are not correct.\n\nThanks!\n",
"created_at": "2014-02-10T12:15:42Z"
},
{
"body": "Sorry, formatting fail :). Have updated the CURLs. Thanks.\n",
"created_at": "2014-02-10T19:59:55Z"
},
{
"body": "any update? I'll fix this if not\n",
"created_at": "2014-03-06T17:44:32Z"
},
{
"body": "Whoah, I just ran into this today. Heres my test case:\nhttps://gist.github.com/hanneskaeufler/2b6d69a2fc1a77509500\n\nI´m running version 1.1.2\n\nIs that expected behaviour or a bug?\n",
"created_at": "2014-06-17T16:29:06Z"
},
{
"body": "hey there,\n\nthis looks like a bug indeed from a birds eye view, I will take a look as soon as I can!\n",
"created_at": "2014-06-17T16:59:06Z"
}
],
"number": 5053,
"title": "Mapping: Fix possibility of losing meta configuration on field mapping update"
} | {
"body": "The TTL, size, timestamp and index meta properties could be lost on an\nupdate of a single field mapping due to a wrong comparison in the\nmerge method (which was caused by a wrong initialization, which marked\nan update as explicitely disabled instead of unset.\n\nCloses #5053\n",
"number": 6550,
"review_comments": [],
"title": "Fix possibility of losing meta configuration on field mapping update"
} | {
"commits": [
{
"message": "Mapping: Fix possibility of losing meta configuration on field mapping update\n\nThe TTL, size, timestamp and index meta properties could be lost on an\nupdate of a single field mapping due to a wrong comparison in the\nmerge method (which was caused by a wrong initialization, which marked\nan update as explicitely disabled instead of unset.\n\nCloses #5053"
}
],
"files": [
{
"diff": "@@ -67,7 +67,7 @@ public static class Defaults extends AbstractFieldMapper.Defaults {\n FIELD_TYPE.freeze();\n }\n \n- public static final EnabledAttributeMapper ENABLED_STATE = EnabledAttributeMapper.DISABLED;\n+ public static final EnabledAttributeMapper ENABLED_STATE = EnabledAttributeMapper.UNSET_DISABLED;\n }\n \n public static class Builder extends AbstractFieldMapper.Builder<Builder, IndexFieldMapper> {\n@@ -202,7 +202,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (includeDefaults || fieldType().stored() != Defaults.FIELD_TYPE.stored() && enabledState.enabled) {\n builder.field(\"store\", fieldType().stored());\n }\n- if (includeDefaults || enabledState != Defaults.ENABLED_STATE) {\n+ if (includeDefaults || enabledState.enabled != Defaults.ENABLED_STATE.enabled) {\n builder.field(\"enabled\", enabledState.enabled);\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/IndexFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -47,7 +47,7 @@ public class SizeFieldMapper extends IntegerFieldMapper implements RootMapper {\n \n public static class Defaults extends IntegerFieldMapper.Defaults {\n public static final String NAME = CONTENT_TYPE;\n- public static final EnabledAttributeMapper ENABLED_STATE = EnabledAttributeMapper.DISABLED;\n+ public static final EnabledAttributeMapper ENABLED_STATE = EnabledAttributeMapper.UNSET_DISABLED;\n \n public static final FieldType SIZE_FIELD_TYPE = new FieldType(IntegerFieldMapper.Defaults.FIELD_TYPE);\n \n@@ -161,7 +161,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n return builder;\n }\n builder.startObject(contentType());\n- if (includeDefaults || enabledState != Defaults.ENABLED_STATE) {\n+ if (includeDefaults || enabledState.enabled != Defaults.ENABLED_STATE.enabled) {\n builder.field(\"enabled\", enabledState.enabled);\n }\n if (includeDefaults || fieldType().stored() != Defaults.SIZE_FIELD_TYPE.stored() && enabledState.enabled) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/SizeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -64,7 +64,7 @@ public static class Defaults extends LongFieldMapper.Defaults {\n TTL_FIELD_TYPE.freeze();\n }\n \n- public static final EnabledAttributeMapper ENABLED_STATE = EnabledAttributeMapper.DISABLED;\n+ public static final EnabledAttributeMapper ENABLED_STATE = EnabledAttributeMapper.UNSET_DISABLED;\n public static final long DEFAULT = -1;\n }\n \n@@ -225,7 +225,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n return builder;\n }\n builder.startObject(CONTENT_TYPE);\n- if (includeDefaults || enabledState != Defaults.ENABLED_STATE) {\n+ if (includeDefaults || enabledState.enabled != Defaults.ENABLED_STATE.enabled) {\n builder.field(\"enabled\", enabledState.enabled);\n }\n if (includeDefaults || defaultTTL != Defaults.DEFAULT && enabledState.enabled) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/TTLFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -67,7 +67,7 @@ public static class Defaults extends DateFieldMapper.Defaults {\n FIELD_TYPE.freeze();\n }\n \n- public static final EnabledAttributeMapper ENABLED = EnabledAttributeMapper.DISABLED;\n+ public static final EnabledAttributeMapper ENABLED = EnabledAttributeMapper.UNSET_DISABLED;\n public static final String PATH = null;\n public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(DEFAULT_DATE_TIME_FORMAT);\n }\n@@ -230,7 +230,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n return builder;\n }\n builder.startObject(CONTENT_TYPE);\n- if (includeDefaults || enabledState != Defaults.ENABLED) {\n+ if (includeDefaults || enabledState.enabled != Defaults.ENABLED.enabled) {\n builder.field(\"enabled\", enabledState.enabled);\n }\n if (enabledState.enabled) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,71 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.mapper.index;\n+\n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.Locale;\n+import java.util.Map;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.hamcrest.Matchers.*;\n+\n+/**\n+ *\n+ */\n+public class IndexTypeMapperIntegrationTests extends ElasticsearchIntegrationTest {\n+\n+ @Test // issue 5053\n+ public void testThatUpdatingMappingShouldNotRemoveSizeMappingConfiguration() throws Exception {\n+ String index = \"foo\";\n+ String type = \"mytype\";\n+\n+ XContentBuilder builder = jsonBuilder().startObject().startObject(\"_index\").field(\"enabled\", true).endObject().endObject();\n+ assertAcked(client().admin().indices().prepareCreate(index).addMapping(type, builder));\n+\n+ // check mapping again\n+ assertIndexMappingEnabled(index, type);\n+\n+ // update some field in the mapping\n+ XContentBuilder updateMappingBuilder = jsonBuilder().startObject().startObject(\"properties\").startObject(\"otherField\").field(\"type\", \"string\").endObject().endObject();\n+ PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(index).setType(type).setSource(updateMappingBuilder).get();\n+ assertAcked(putMappingResponse);\n+\n+ // make sure timestamp field is still in mapping\n+ assertIndexMappingEnabled(index, type);\n+ }\n+\n+ private void assertIndexMappingEnabled(String index, String type) throws IOException {\n+ String errMsg = String.format(Locale.ROOT, \"Expected index field mapping to be enabled for %s/%s\", index, type);\n+\n+ GetMappingsResponse getMappingsResponse = client().admin().indices().prepareGetMappings(index).addTypes(type).get();\n+ Map<String, Object> mappingSource = getMappingsResponse.getMappings().get(index).get(type).getSourceAsMap();\n+ assertThat(errMsg, mappingSource, hasKey(\"_index\"));\n+ String ttlAsString = mappingSource.get(\"_index\").toString();\n+ assertThat(ttlAsString, is(notNullValue()));\n+ assertThat(errMsg, ttlAsString, is(\"{enabled=true}\"));\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/index/mapper/index/IndexTypeMapperIntegrationTests.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,67 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.mapper.size;\n+\n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.Locale;\n+import java.util.Map;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.hamcrest.Matchers.*;\n+\n+public class SizeMappingIntegrationTests extends ElasticsearchIntegrationTest {\n+\n+ @Test // issue 5053\n+ public void testThatUpdatingMappingShouldNotRemoveSizeMappingConfiguration() throws Exception {\n+ String index = \"foo\";\n+ String type = \"mytype\";\n+\n+ XContentBuilder builder = jsonBuilder().startObject().startObject(\"_size\").field(\"enabled\", true).endObject().endObject();\n+ assertAcked(client().admin().indices().prepareCreate(index).addMapping(type, builder));\n+\n+ // check mapping again\n+ assertSizeMappingEnabled(index, type);\n+\n+ // update some field in the mapping\n+ XContentBuilder updateMappingBuilder = jsonBuilder().startObject().startObject(\"properties\").startObject(\"otherField\").field(\"type\", \"string\").endObject().endObject();\n+ PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(index).setType(type).setSource(updateMappingBuilder).get();\n+ assertAcked(putMappingResponse);\n+\n+ // make sure timestamp field is still in mapping\n+ assertSizeMappingEnabled(index, type);\n+ }\n+\n+ private void assertSizeMappingEnabled(String index, String type) throws IOException {\n+ String errMsg = String.format(Locale.ROOT, \"Expected size field mapping to be enabled for %s/%s\", index, type);\n+\n+ GetMappingsResponse getMappingsResponse = client().admin().indices().prepareGetMappings(index).addTypes(type).get();\n+ Map<String, Object> mappingSource = getMappingsResponse.getMappings().get(index).get(type).getSourceAsMap();\n+ assertThat(errMsg, mappingSource, hasKey(\"_size\"));\n+ String ttlAsString = mappingSource.get(\"_size\").toString();\n+ assertThat(ttlAsString, is(notNullValue()));\n+ assertThat(errMsg, ttlAsString, is(\"{enabled=true}\"));\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/index/mapper/size/SizeMappingIntegrationTests.java",
"status": "added"
},
{
"diff": "@@ -19,12 +19,19 @@\n \n package org.elasticsearch.timestamp;\n \n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;\n import org.elasticsearch.action.get.GetResponse;\n+import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.Priority;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n \n+import java.util.Locale;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.*;\n \n /**\n@@ -35,7 +42,7 @@ public class SimpleTimestampTests extends ElasticsearchIntegrationTest {\n public void testSimpleTimestamp() throws Exception {\n \n client().admin().indices().prepareCreate(\"test\")\n- .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\").startObject(\"_timestamp\").field(\"enabled\", true).field(\"store\", \"yes\").endObject().endObject().endObject())\n+ .addMapping(\"type1\", jsonBuilder().startObject().startObject(\"type1\").startObject(\"_timestamp\").field(\"enabled\", true).field(\"store\", \"yes\").endObject().endObject().endObject())\n .execute().actionGet();\n client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForGreenStatus().execute().actionGet();\n \n@@ -82,4 +89,32 @@ public void testSimpleTimestamp() throws Exception {\n getResponse = client().prepareGet(\"test\", \"type1\", \"1\").setFields(\"_timestamp\").setRealtime(false).execute().actionGet();\n assertThat(((Number) getResponse.getField(\"_timestamp\").getValue()).longValue(), equalTo(timestamp));\n }\n+\n+ @Test // issue 5053\n+ public void testThatUpdatingMappingShouldNotRemoveTimestampConfiguration() throws Exception {\n+ String index = \"foo\";\n+ String type = \"mytype\";\n+\n+ XContentBuilder builder = jsonBuilder().startObject().startObject(\"_timestamp\").field(\"enabled\", true).endObject().endObject();\n+ assertAcked(client().admin().indices().prepareCreate(index).addMapping(type, builder));\n+\n+ // check mapping again\n+ assertTimestampMappingEnabled(index, type);\n+\n+ // update some field in the mapping\n+ XContentBuilder updateMappingBuilder = jsonBuilder().startObject().startObject(\"properties\").startObject(\"otherField\").field(\"type\", \"string\").endObject().endObject();\n+ PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(index).setType(type).setSource(updateMappingBuilder).get();\n+ assertAcked(putMappingResponse);\n+\n+ // make sure timestamp field is still in mapping\n+ assertTimestampMappingEnabled(index, type);\n+ }\n+\n+ private void assertTimestampMappingEnabled(String index, String type) {\n+ GetMappingsResponse getMappingsResponse = client().admin().indices().prepareGetMappings(index).addTypes(type).get();\n+ MappingMetaData.Timestamp timestamp = getMappingsResponse.getMappings().get(index).get(type).timestamp();\n+ assertThat(timestamp, is(notNullValue()));\n+ String errMsg = String.format(Locale.ROOT, \"Expected timestamp field mapping to be enabled for %s/%s\", index, type);\n+ assertThat(errMsg, timestamp.enabled(), is(true));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/timestamp/SimpleTimestampTests.java",
"status": "modified"
},
{
"diff": "@@ -20,18 +20,26 @@\n package org.elasticsearch.ttl;\n \n import com.google.common.base.Predicate;\n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.index.IndexResponse;\n+import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n import org.junit.Test;\n \n+import java.io.IOException;\n+import java.util.Locale;\n+import java.util.Map;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.ElasticsearchIntegrationTest.*;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.*;\n@@ -192,4 +200,35 @@ public boolean apply(Object input) {\n getResponse = client().prepareGet(\"test\", \"type1\", \"with_routing\").setRouting(\"routing\").setFields(\"_ttl\").setRealtime(false).execute().actionGet();\n assertThat(getResponse.isExists(), equalTo(false));\n }\n+\n+ @Test // issue 5053\n+ public void testThatUpdatingMappingShouldNotRemoveTTLConfiguration() throws Exception {\n+ String index = \"foo\";\n+ String type = \"mytype\";\n+\n+ XContentBuilder builder = jsonBuilder().startObject().startObject(\"_ttl\").field(\"enabled\", true).endObject().endObject();\n+ assertAcked(client().admin().indices().prepareCreate(index).addMapping(type, builder));\n+\n+ // check mapping again\n+ assertTTLMappingEnabled(index, type);\n+\n+ // update some field in the mapping\n+ XContentBuilder updateMappingBuilder = jsonBuilder().startObject().startObject(\"properties\").startObject(\"otherField\").field(\"type\", \"string\").endObject().endObject();\n+ PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(index).setType(type).setSource(updateMappingBuilder).get();\n+ assertAcked(putMappingResponse);\n+\n+ // make sure timestamp field is still in mapping\n+ assertTTLMappingEnabled(index, type);\n+ }\n+\n+ private void assertTTLMappingEnabled(String index, String type) throws IOException {\n+ String errMsg = String.format(Locale.ROOT, \"Expected ttl field mapping to be enabled for %s/%s\", index, type);\n+\n+ GetMappingsResponse getMappingsResponse = client().admin().indices().prepareGetMappings(index).addTypes(type).get();\n+ Map<String, Object> mappingSource = getMappingsResponse.getMappings().get(index).get(type).getSourceAsMap();\n+ assertThat(errMsg, mappingSource, hasKey(\"_ttl\"));\n+ String ttlAsString = mappingSource.get(\"_ttl\").toString();\n+ assertThat(ttlAsString, is(notNullValue()));\n+ assertThat(errMsg, ttlAsString, is(\"{enabled=true}\"));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/ttl/SimpleTTLTests.java",
"status": "modified"
}
]
} |
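The integration tests added in this row all follow the same create / put-mapping / get-mapping pattern to prove that a root field mapper (`_timestamp`, `_ttl`, `_size`, `_index`) keeps its configuration across a mapping merge. Below is a minimal standalone sketch of that pattern; it assumes an `org.elasticsearch.client.Client` connected to a 1.x test cluster, and the class and method names are illustrative, not part of the PR.

```java
// Sketch only: mirrors the assertTimestampMappingEnabled flow from the tests above,
// assuming `client` talks to a running 1.x cluster. Class and method names are made up.
import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.MappingMetaData;
import org.elasticsearch.common.xcontent.XContentBuilder;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

public class TimestampMappingCheck {

    /** Returns true if _timestamp stays enabled after an unrelated mapping update. */
    public static boolean timestampSurvivesMappingUpdate(Client client, String index, String type) throws Exception {
        // 1. create the index with _timestamp enabled
        XContentBuilder mapping = jsonBuilder().startObject()
                .startObject("_timestamp").field("enabled", true).endObject()
                .endObject();
        client.admin().indices().prepareCreate(index).addMapping(type, mapping).execute().actionGet();

        // 2. merge in an unrelated field; before the fix this could reset the unset defaults
        XContentBuilder update = jsonBuilder().startObject()
                .startObject("properties")
                .startObject("otherField").field("type", "string").endObject()
                .endObject().endObject();
        client.admin().indices().preparePutMapping(index).setType(type).setSource(update).get();

        // 3. read the mapping back and check that _timestamp is still enabled
        GetMappingsResponse response = client.admin().indices().prepareGetMappings(index).addTypes(type).get();
        MappingMetaData.Timestamp timestamp = response.getMappings().get(index).get(type).timestamp();
        return timestamp != null && timestamp.enabled();
    }
}
```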
{
"body": "The percolator appears to have some strange matching behavior with nested objects defined in the document mapping. Steps to reproduce:\n\ncurl -XDELETE localhost:9200/test\n\ncurl -XPUT localhost:9200/test -d '{\n \"settings\": {\n \"index.number_of_shards\": 1,\n \"index.number_of_replicas\": 0\n },\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"persons\": {\n \"type\": \"nested\"\n }\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test/.percolator/1 -d '{\n \"query\": {\n \"bool\": {\n \"must\": {\n \"match\": {\n \"name\": \"obama\"\n }\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test/.percolator/2 -d '{\n \"query\": {\n \"bool\": {\n \"must_not\": {\n \"match\": {\n \"name\": \"obama\"\n }\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test/.percolator/3 -d '{\n \"query\": {\n \"bool\": {\n \"must\": {\n \"match\": {\n \"persons.foo\": \"bar\"\n }\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test/.percolator/4 -d '{\n \"query\": {\n \"bool\": {\n \"must_not\": {\n \"match\": {\n \"persons.foo\": \"bar\"\n }\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test/.percolator/5 -d '{\n \"query\": {\n \"bool\": {\n \"must\": {\n \"nested\": {\n \"path\": \"persons\",\n \"query\": {\n \"match\": {\n \"persons.foo\": \"bar\"\n }\n }\n }\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test/.percolator/6 -d '{\n \"query\": {\n \"bool\": {\n \"must_not\": {\n \"nested\": {\n \"path\": \"persons\",\n \"query\": {\n \"match\": {\n \"persons.foo\": \"bar\"\n }\n }\n }\n }\n }\n }\n}'\n\ncurl -XPOST localhost:9200/_refresh\n\nThi should only match 1 and 5, but instead it matches 1,2,3,4,5,6\n\ncurl -XPOST localhost:9200/test/doc/_percolate?pretty -d '{\n \"doc\": {\n \"name\": \"obama\",\n \"persons\": [\n {\n \"foo\": \"bar\"\n }\n ]\n }\n}'\n",
"comments": [],
"number": 6540,
"title": "Percolator: Fix handling of nested documents"
} | {
"body": "Nested documents were indexed as separate documents, but it was never checked\nif the hits represent nested documents or not. Therefore, nested objects could\nmatch not nested queries and nested queries could also match not nested documents.\n\nExamples are in issue #6540 .\n\ncloses #6540\n",
"number": 6544,
"review_comments": [
{
"body": "Maybe also verify the results via the percolate count api?\n",
"created_at": "2014-06-18T08:40:22Z"
}
],
"title": "Fix handling of nested documents"
} | {
"commits": [
{
"message": "percolator: fix handling of nested documents\n\nNested documents were indexed as separate documents, but it was never checked\nif the hits represent nested documents or not. Therefore, nested objects could\nmatch not nested queries and nested queries could also match not nested documents.\n\nExamples are in issue #6540 .\n\ncloses #6540"
},
{
"message": "add test with match_all"
},
{
"message": "fix for percolate query"
}
],
"files": [
{
"diff": "@@ -68,6 +68,7 @@\n import org.elasticsearch.index.mapper.internal.IdFieldMapper;\n import org.elasticsearch.index.percolator.stats.ShardPercolateService;\n import org.elasticsearch.index.query.ParsedQuery;\n+import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.index.service.IndexService;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.indices.IndicesService;\n@@ -210,8 +211,9 @@ public PercolateShardResponse percolate(PercolateShardRequest request) {\n \n // parse the source either into one MemoryIndex, if it is a single document or index multiple docs if nested\n PercolatorIndex percolatorIndex;\n+ boolean isNested = indexShard.mapperService().documentMapper(request.documentType()).hasNestedObjects();\n if (parsedDocument.docs().size() > 1) {\n- assert indexShard.mapperService().documentMapper(request.documentType()).hasNestedObjects();\n+ assert isNested;\n percolatorIndex = multi;\n } else {\n percolatorIndex = single;\n@@ -232,7 +234,7 @@ public PercolateShardResponse percolate(PercolateShardRequest request) {\n context.percolatorTypeId = action.id();\n \n percolatorIndex.prepare(context, parsedDocument);\n- return action.doPercolate(request, context);\n+ return action.doPercolate(request, context, isNested);\n } finally {\n context.close();\n shardPercolateService.postPercolate(System.nanoTime() - startTime);\n@@ -418,7 +420,7 @@ interface PercolatorType {\n \n ReduceResult reduce(List<PercolateShardResponse> shardResults);\n \n- PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context);\n+ PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context, boolean isNested);\n \n }\n \n@@ -443,13 +445,17 @@ public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n }\n \n @Override\n- public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context) {\n+ public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context, boolean isNested) {\n long count = 0;\n Lucene.ExistsCollector collector = new Lucene.ExistsCollector();\n for (Map.Entry<BytesRef, Query> entry : context.percolateQueries().entrySet()) {\n collector.reset();\n try {\n- context.docSearcher().search(entry.getValue(), collector);\n+ if (isNested) {\n+ context.docSearcher().search(entry.getValue(), NonNestedDocsFilter.INSTANCE, collector);\n+ } else {\n+ context.docSearcher().search(entry.getValue(), collector);\n+ }\n } catch (Throwable e) {\n logger.debug(\"[\" + entry.getKey() + \"] failed to execute query\", e);\n throw new PercolateException(context.indexShard().shardId(), \"failed to execute\", e);\n@@ -477,11 +483,11 @@ public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n }\n \n @Override\n- public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context) {\n+ public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context, boolean isNested) {\n long count = 0;\n Engine.Searcher percolatorSearcher = context.indexShard().acquireSearcher(\"percolate\");\n try {\n- Count countCollector = count(logger, context);\n+ Count countCollector = count(logger, context, isNested);\n queryBasedPercolating(percolatorSearcher, context, countCollector);\n count = countCollector.counter();\n } catch (Throwable e) {\n@@ -534,7 +540,7 @@ public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n }\n \n 
@Override\n- public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context) {\n+ public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context, boolean isNested) {\n long count = 0;\n List<BytesRef> matches = new ArrayList<>();\n List<Map<String, HighlightField>> hls = new ArrayList<>();\n@@ -547,7 +553,11 @@ public PercolateShardResponse doPercolate(PercolateShardRequest request, Percola\n context.hitContext().cache().clear();\n }\n try {\n- context.docSearcher().search(entry.getValue(), collector);\n+ if (isNested) {\n+ context.docSearcher().search(entry.getValue(), NonNestedDocsFilter.INSTANCE, collector);\n+ } else {\n+ context.docSearcher().search(entry.getValue(), collector);\n+ }\n } catch (Throwable e) {\n logger.debug(\"[\" + entry.getKey() + \"] failed to execute query\", e);\n throw new PercolateException(context.indexShard().shardId(), \"failed to execute\", e);\n@@ -583,10 +593,10 @@ public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n }\n \n @Override\n- public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context) {\n+ public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context, boolean isNested) {\n Engine.Searcher percolatorSearcher = context.indexShard().acquireSearcher(\"percolate\");\n try {\n- Match match = match(logger, context, highlightPhase);\n+ Match match = match(logger, context, highlightPhase, isNested);\n queryBasedPercolating(percolatorSearcher, context, match);\n List<BytesRef> matches = match.matches();\n List<Map<String, HighlightField>> hls = match.hls();\n@@ -616,10 +626,10 @@ public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n }\n \n @Override\n- public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context) {\n+ public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context, boolean isNested) {\n Engine.Searcher percolatorSearcher = context.indexShard().acquireSearcher(\"percolate\");\n try {\n- MatchAndScore matchAndScore = matchAndScore(logger, context, highlightPhase);\n+ MatchAndScore matchAndScore = matchAndScore(logger, context, highlightPhase, isNested);\n queryBasedPercolating(percolatorSearcher, context, matchAndScore);\n List<BytesRef> matches = matchAndScore.matches();\n List<Map<String, HighlightField>> hls = matchAndScore.hls();\n@@ -730,10 +740,10 @@ public ReduceResult reduce(List<PercolateShardResponse> shardResults) {\n }\n \n @Override\n- public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context) {\n+ public PercolateShardResponse doPercolate(PercolateShardRequest request, PercolateContext context, boolean isNested) {\n Engine.Searcher percolatorSearcher = context.indexShard().acquireSearcher(\"percolate\");\n try {\n- MatchAndSort matchAndSort = QueryCollector.matchAndSort(logger, context);\n+ MatchAndSort matchAndSort = QueryCollector.matchAndSort(logger, context, isNested);\n queryBasedPercolating(percolatorSearcher, context, matchAndSort);\n TopDocs topDocs = matchAndSort.topDocs();\n long count = topDocs.totalHits;\n@@ -785,7 +795,6 @@ private void queryBasedPercolating(Engine.Searcher percolatorSearcher, Percolate\n percolatorTypeFilter = context.indexService().cache().filter().cache(percolatorTypeFilter);\n XFilteredQuery query = new XFilteredQuery(context.percolateQuery(), percolatorTypeFilter);\n 
percolatorSearcher.searcher().search(query, percolateCollector);\n-\n for (Collector queryCollector : percolateCollector.facetAndAggregatorCollector) {\n if (queryCollector instanceof XCollector) {\n ((XCollector) queryCollector).postCollection();",
"filename": "src/main/java/org/elasticsearch/percolator/PercolatorService.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.internal.IdFieldMapper;\n import org.elasticsearch.index.query.ParsedQuery;\n+import org.elasticsearch.index.search.nested.NonNestedDocsFilter;\n import org.elasticsearch.search.aggregations.AggregationPhase;\n import org.elasticsearch.search.aggregations.Aggregator;\n import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregator;\n@@ -55,6 +56,7 @@ abstract class QueryCollector extends Collector {\n final IndexSearcher searcher;\n final ConcurrentMap<BytesRef, Query> queries;\n final ESLogger logger;\n+ boolean isNestedDoc = false;\n \n final Lucene.ExistsCollector collector = new Lucene.ExistsCollector();\n BytesRef current;\n@@ -63,12 +65,13 @@ abstract class QueryCollector extends Collector {\n \n final List<Collector> facetAndAggregatorCollector;\n \n- QueryCollector(ESLogger logger, PercolateContext context) {\n+ QueryCollector(ESLogger logger, PercolateContext context, boolean isNestedDoc) {\n this.logger = logger;\n this.queries = context.percolateQueries();\n this.searcher = context.docSearcher();\n final FieldMapper<?> idMapper = context.mapperService().smartNameFieldMapper(IdFieldMapper.NAME);\n this.idFieldData = context.fieldData().getForField(idMapper);\n+ this.isNestedDoc = isNestedDoc;\n \n ImmutableList.Builder<Collector> facetAggCollectorBuilder = ImmutableList.builder();\n if (context.facets() != null) {\n@@ -139,20 +142,20 @@ public boolean acceptsDocsOutOfOrder() {\n }\n \n \n- static Match match(ESLogger logger, PercolateContext context, HighlightPhase highlightPhase) {\n- return new Match(logger, context, highlightPhase);\n+ static Match match(ESLogger logger, PercolateContext context, HighlightPhase highlightPhase, boolean isNestedDoc) {\n+ return new Match(logger, context, highlightPhase, isNestedDoc);\n }\n \n- static Count count(ESLogger logger, PercolateContext context) {\n- return new Count(logger, context);\n+ static Count count(ESLogger logger, PercolateContext context, boolean isNestedDoc) {\n+ return new Count(logger, context, isNestedDoc);\n }\n \n- static MatchAndScore matchAndScore(ESLogger logger, PercolateContext context, HighlightPhase highlightPhase) {\n- return new MatchAndScore(logger, context, highlightPhase);\n+ static MatchAndScore matchAndScore(ESLogger logger, PercolateContext context, HighlightPhase highlightPhase, boolean isNestedDoc) {\n+ return new MatchAndScore(logger, context, highlightPhase, isNestedDoc);\n }\n \n- static MatchAndSort matchAndSort(ESLogger logger, PercolateContext context) {\n- return new MatchAndSort(logger, context);\n+ static MatchAndSort matchAndSort(ESLogger logger, PercolateContext context, boolean isNestedDoc) {\n+ return new MatchAndSort(logger, context, isNestedDoc);\n }\n \n \n@@ -179,8 +182,8 @@ final static class Match extends QueryCollector {\n final int size;\n long counter = 0;\n \n- Match(ESLogger logger, PercolateContext context, HighlightPhase highlightPhase) {\n- super(logger, context);\n+ Match(ESLogger logger, PercolateContext context, HighlightPhase highlightPhase, boolean isNestedDoc) {\n+ super(logger, context, isNestedDoc);\n this.limit = context.limit;\n this.size = context.size();\n this.context = context;\n@@ -202,7 +205,11 @@ public void collect(int doc) throws IOException {\n context.hitContext().cache().clear();\n }\n \n- searcher.search(query, collector);\n+ if (isNestedDoc) {\n+ searcher.search(query, NonNestedDocsFilter.INSTANCE, collector);\n+ } else 
{\n+ searcher.search(query, collector);\n+ }\n if (collector.exists()) {\n if (!limit || counter < size) {\n matches.add(values.copyShared());\n@@ -236,8 +243,8 @@ final static class MatchAndSort extends QueryCollector {\n \n private final TopScoreDocCollector topDocsCollector;\n \n- MatchAndSort(ESLogger logger, PercolateContext context) {\n- super(logger, context);\n+ MatchAndSort(ESLogger logger, PercolateContext context, boolean isNestedDoc) {\n+ super(logger, context, isNestedDoc);\n // TODO: Use TopFieldCollector.create(...) for ascending and decending scoring?\n topDocsCollector = TopScoreDocCollector.create(context.size(), false);\n }\n@@ -252,7 +259,11 @@ public void collect(int doc) throws IOException {\n // run the query\n try {\n collector.reset();\n- searcher.search(query, collector);\n+ if (isNestedDoc) {\n+ searcher.search(query, NonNestedDocsFilter.INSTANCE, collector);\n+ } else {\n+ searcher.search(query, collector);\n+ }\n if (collector.exists()) {\n topDocsCollector.collect(doc);\n postMatch(doc);\n@@ -294,8 +305,8 @@ final static class MatchAndScore extends QueryCollector {\n \n private Scorer scorer;\n \n- MatchAndScore(ESLogger logger, PercolateContext context, HighlightPhase highlightPhase) {\n- super(logger, context);\n+ MatchAndScore(ESLogger logger, PercolateContext context, HighlightPhase highlightPhase, boolean isNestedDoc) {\n+ super(logger, context, isNestedDoc);\n this.limit = context.limit;\n this.size = context.size();\n this.context = context;\n@@ -316,7 +327,11 @@ public void collect(int doc) throws IOException {\n context.parsedQuery(new ParsedQuery(query, ImmutableMap.<String, Filter>of()));\n context.hitContext().cache().clear();\n }\n- searcher.search(query, collector);\n+ if (isNestedDoc) {\n+ searcher.search(query, NonNestedDocsFilter.INSTANCE, collector);\n+ } else {\n+ searcher.search(query, collector);\n+ }\n if (collector.exists()) {\n if (!limit || counter < size) {\n matches.add(values.copyShared());\n@@ -360,8 +375,8 @@ final static class Count extends QueryCollector {\n \n private long counter = 0;\n \n- Count(ESLogger logger, PercolateContext context) {\n- super(logger, context);\n+ Count(ESLogger logger, PercolateContext context, boolean isNestedDoc) {\n+ super(logger, context, isNestedDoc);\n }\n \n @Override\n@@ -374,7 +389,11 @@ public void collect(int doc) throws IOException {\n // run the query\n try {\n collector.reset();\n- searcher.search(query, collector);\n+ if (isNestedDoc) {\n+ searcher.search(query, NonNestedDocsFilter.INSTANCE, collector);\n+ } else {\n+ searcher.search(query, collector);\n+ }\n if (collector.exists()) {\n counter++;\n postMatch(doc);",
"filename": "src/main/java/org/elasticsearch/percolator/QueryCollector.java",
"status": "modified"
},
{
"diff": "@@ -61,6 +61,7 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.*;\n import static org.elasticsearch.index.query.FilterBuilders.termFilter;\n import static org.elasticsearch.index.query.QueryBuilders.*;\n+import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.exponentialDecayFunction;\n import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.scriptFunction;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.hamcrest.Matchers.*;\n@@ -1843,4 +1844,141 @@ XContentBuilder getNotMatchingNestedDoc() throws IOException {\n .endArray().endObject();\n return doc;\n }\n+\n+ // issue\n+ @Test\n+ public void testNestedDocFilter() throws IOException {\n+ String mapping = \"{\\n\" +\n+ \" \\\"doc\\\": {\\n\" +\n+ \" \\\"properties\\\": {\\n\" +\n+ \" \\\"persons\\\": {\\n\" +\n+ \" \\\"type\\\": \\\"nested\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\";\n+ String doc = \"{\\n\" +\n+ \" \\\"name\\\": \\\"obama\\\",\\n\" +\n+ \" \\\"persons\\\": [\\n\" +\n+ \" {\\n\" +\n+ \" \\\"foo\\\": \\\"bar\\\"\\n\" +\n+ \" }\\n\" +\n+ \" ]\\n\" +\n+ \" }\";\n+ String q1 = \"{\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"bool\\\": {\\n\" +\n+ \" \\\"must\\\": {\\n\" +\n+ \" \\\"match\\\": {\\n\" +\n+ \" \\\"name\\\": \\\"obama\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \"\\\"text\\\":\\\"foo\\\"\"+\n+ \"}\";\n+ String q2 = \"{\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"bool\\\": {\\n\" +\n+ \" \\\"must_not\\\": {\\n\" +\n+ \" \\\"match\\\": {\\n\" +\n+ \" \\\"name\\\": \\\"obama\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \"\\\"text\\\":\\\"foo\\\"\"+\n+ \"}\";\n+ String q3 = \"{\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"bool\\\": {\\n\" +\n+ \" \\\"must\\\": {\\n\" +\n+ \" \\\"match\\\": {\\n\" +\n+ \" \\\"persons.foo\\\": \\\"bar\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \"\\\"text\\\":\\\"foo\\\"\"+\n+ \"}\";\n+ String q4 = \"{\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"bool\\\": {\\n\" +\n+ \" \\\"must_not\\\": {\\n\" +\n+ \" \\\"match\\\": {\\n\" +\n+ \" \\\"persons.foo\\\": \\\"bar\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \"\\\"text\\\":\\\"foo\\\"\"+\n+ \"}\";\n+ String q5 = \"{\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"bool\\\": {\\n\" +\n+ \" \\\"must\\\": {\\n\" +\n+ \" \\\"nested\\\": {\\n\" +\n+ \" \\\"path\\\": \\\"persons\\\",\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"match\\\": {\\n\" +\n+ \" \\\"persons.foo\\\": \\\"bar\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \"\\\"text\\\":\\\"foo\\\"\"+\n+ \"}\";\n+ String q6 = \"{\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"bool\\\": {\\n\" +\n+ \" \\\"must_not\\\": {\\n\" +\n+ \" \\\"nested\\\": {\\n\" +\n+ \" \\\"path\\\": \\\"persons\\\",\\n\" +\n+ \" \\\"query\\\": {\\n\" +\n+ \" \\\"match\\\": {\\n\" +\n+ \" \\\"persons.foo\\\": \\\"bar\\\"\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" }\\n\" +\n+ \" },\\n\" +\n+ \"\\\"text\\\":\\\"foo\\\"\"+\n+ \"}\";\n+ assertAcked(client().admin().indices().prepareCreate(\"test\").addMapping(\"doc\", mapping));\n+ ensureGreen(\"test\");\n+ client().prepareIndex(\"test\", PercolatorService.TYPE_NAME).setSource(q1).setId(\"q1\").get();\n+ client().prepareIndex(\"test\", PercolatorService.TYPE_NAME).setSource(q2).setId(\"q2\").get();\n+ 
client().prepareIndex(\"test\", PercolatorService.TYPE_NAME).setSource(q3).setId(\"q3\").get();\n+ client().prepareIndex(\"test\", PercolatorService.TYPE_NAME).setSource(q4).setId(\"q4\").get();\n+ client().prepareIndex(\"test\", PercolatorService.TYPE_NAME).setSource(q5).setId(\"q5\").get();\n+ client().prepareIndex(\"test\", PercolatorService.TYPE_NAME).setSource(q6).setId(\"q6\").get();\n+ refresh();\n+ PercolateResponse response = client().preparePercolate()\n+ .setIndices(\"test\").setDocumentType(\"doc\")\n+ .setPercolateDoc(docBuilder().setDoc(doc))\n+ .get();\n+ assertMatchCount(response, 3l);\n+ Set<String> expectedIds = new HashSet<>();\n+ expectedIds.add(\"q1\");\n+ expectedIds.add(\"q4\");\n+ expectedIds.add(\"q5\");\n+ for (PercolateResponse.Match match : response.getMatches()) {\n+ assertTrue(expectedIds.remove(match.getId().string()));\n+ }\n+ assertTrue(expectedIds.isEmpty());\n+ response = client().preparePercolate().setOnlyCount(true)\n+ .setIndices(\"test\").setDocumentType(\"doc\")\n+ .setPercolateDoc(docBuilder().setDoc(doc))\n+ .get();\n+ assertMatchCount(response, 3l);\n+ response = client().preparePercolate().setScore(randomBoolean()).setSortByScore(randomBoolean()).setOnlyCount(randomBoolean()).setSize(10).setPercolateQuery(QueryBuilders.termQuery(\"text\", \"foo\"))\n+ .setIndices(\"test\").setDocumentType(\"doc\")\n+ .setPercolateDoc(docBuilder().setDoc(doc))\n+ .get();\n+ assertMatchCount(response, 3l);\n+ }\n }\n+",
"filename": "src/test/java/org/elasticsearch/percolator/PercolatorTests.java",
"status": "modified"
}
]
} |
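The core of the percolator fix above is a single decision: if the mapping of the percolated document type contains nested objects, each percolator query must be run with `NonNestedDocsFilter` so that only the root Lucene document can produce a match. A minimal sketch of that decision, lifted out of `QueryCollector`/`PercolatorService` for illustration (the wrapper class and method name are not part of the PR; `isNested` mirrors `documentMapper(type).hasNestedObjects()`):

```java
// Sketch of the filtering decision introduced by the fix: restrict percolator queries
// to root ("non-nested") documents when the percolated doc was indexed as multiple
// Lucene documents, so hits on nested sub-documents no longer count as matches.
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.elasticsearch.index.search.nested.NonNestedDocsFilter;

import java.io.IOException;

final class NestedAwarePercolation {

    static void searchPercolatorQuery(IndexSearcher docSearcher, Query percolatorQuery,
                                      Collector collector, boolean isNested) throws IOException {
        if (isNested) {
            // only match against root documents, never the nested sub-documents
            docSearcher.search(percolatorQuery, NonNestedDocsFilter.INSTANCE, collector);
        } else {
            docSearcher.search(percolatorQuery, collector);
        }
    }
}
```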
{
"body": "This test fails occasionally when running the Perl test suite:\n\nhttps://github.com/elasticsearch/elasticsearch/blob/master/rest-api-spec/test/indices.delete_mapping/10_basic.yaml\n\nEssentially, the test creates three indices, doesn't wait for status yellow, then runs a delete mapping request. Normally this succeeds but sometimes it just hangs. After 30s my Perl client times out, and shortly thereafter I see a null pointer exception in Elasticsearch. \n\nLogs of requests from the Perl client, and trace logs from Elasticsearch are available here: https://gist.github.com/clintongormley/1a4b4eae386f271e4dc9\n\nI think the problem is that not all shards are live when the delete mapping request starts, so the delete-by-query request never runs on the unstarted shards. See https://gist.github.com/clintongormley/1a4b4eae386f271e4dc9#file-eslogs-L1353 Elasticsearch waits for successful responses from all delete-by-query requests, and so just hangs until the requests time out, and we get the NPE: https://gist.github.com/clintongormley/1a4b4eae386f271e4dc9#file-eslogs-L1642\n",
"comments": [
{
"body": "Temporarily added a wait-for-yellow to the delete_mapping YAML tests. Once this issue has been fixed, can revert:\n- branch 1.0 3a4fe61e8e911b1b63a53489297c1832dc99a922\n- branch 1.1 8b4ad897c12a0ace8f45dcd05c3e61da98a9b00b\n- branch 1.x 70662004e003932902e99c143f2fc3da744d98f1\n- master a972aaa7aec7419197d08dce6eaea1d87422846e\n",
"created_at": "2014-05-09T10:08:01Z"
},
{
"body": "I think this was fixed by #7744. If we see it again reopen.\n",
"created_at": "2014-09-25T11:12:18Z"
}
],
"number": 5997,
"title": "Delete mapping sometimes hangs"
} | {
"body": "If multiple clients attempted to delete a mapping, or a single client attempted to delete a mapping as an index is being\ncreated a NPE could be observed in the MetaDataMappingService. This fix makes sure we don't try to access a null indexMetaData object.\nAlso add a test to spawn multiple create and delete threads to provoke this race condition.\n\nCloses #5997\n",
"number": 6531,
"review_comments": [
{
"body": "you can use `assertAcked(prepareCreate(...))` here\n",
"created_at": "2014-06-17T11:26:52Z"
},
{
"body": "`assertAcked` here too\n",
"created_at": "2014-06-17T11:27:27Z"
},
{
"body": "maybe we could randomize the number of iterations and the number of threads used below?\n",
"created_at": "2014-06-17T11:28:32Z"
}
],
"title": "Mapping: Fix delete mapping race condition."
} | {
"commits": [
{
"message": "Mapping: Fix delete mapping race condition.\n\nIf multiple clients attempted to delete a mapping, or a single client attempted to delete a mapping as an index is being\ncreated a NPE could be observed in the MetaDataMappingService. This fix makes sure we don't try to access a null indexMetaData object.\nAlso add a test to spawn multiple create and delete threads to provoke this race condition.\n\nCloses #5997"
}
],
"files": [
{
"diff": "@@ -428,10 +428,11 @@ public ClusterState execute(ClusterState currentState) {\n String latestIndexWithout = null;\n for (String indexName : request.indices()) {\n IndexMetaData indexMetaData = currentState.metaData().index(indexName);\n- IndexMetaData.Builder indexBuilder = IndexMetaData.builder(indexMetaData);\n- \n+\n if (indexMetaData != null) {\n+ IndexMetaData.Builder indexBuilder = IndexMetaData.builder(indexMetaData);\n boolean isLatestIndexWithout = true;\n+\n for (String type : request.types()) {\n if (indexMetaData.mappings().containsKey(type)) {\n indexBuilder.removeMapping(type);\n@@ -443,12 +444,18 @@ public ClusterState execute(ClusterState currentState) {\n latestIndexWithout = indexMetaData.index();\n }\n \n+ builder.put(indexBuilder);\n }\n- builder.put(indexBuilder);\n }\n \n if (!changed) {\n- throw new TypeMissingException(new Index(latestIndexWithout), request.types());\n+ if (latestIndexWithout == null){\n+ //We never got a latestIndexWith since when we tried to access the index/mapping it\n+ //had been deleted out from underneath us.\n+ throw new TypeMissingException(new Index(request.indices()[0]), request.types());\n+ } else {\n+ throw new TypeMissingException(new Index(latestIndexWithout), request.types());\n+ }\n }\n \n logger.info(\"[{}] remove_mapping [{}]\", request.indices(), request.types());",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaDataMappingService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,114 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.mapper.delete;\n+\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n+import org.elasticsearch.action.admin.indices.mapping.delete.DeleteMappingResponse;\n+import org.elasticsearch.indices.IndexAlreadyExistsException;\n+import org.elasticsearch.indices.IndexMissingException;\n+import org.elasticsearch.indices.TypeMissingException;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+/**\n+ */\n+@ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.SUITE)\n+public class MultithreadedDeleteMappingTest extends ElasticsearchIntegrationTest {\n+\n+ class CreateThread extends Thread{\n+ public void run(){\n+ for (int i=0; i<1000; ++i){\n+ try {\n+ CreateIndexResponse cir = client().admin().indices().prepareCreate(\"test_index\").setSource(\" {\\\"mappings\\\":{\" +\n+ \" \\\"test_type\\\":{\" +\n+ \" \\\"properties\\\":{\" +\n+ \" \\\"text\\\":{\" +\n+ \" \\\"type\\\": \\\"string\\\",\" +\n+ \" \\\"analyzer\\\": \\\"whitespace\\\"}\" +\n+ \" }\" +\n+ \" }\" +\n+ \" }\" +\n+ \"}\").execute().actionGet();\n+ assertTrue(cir.isAcknowledged());\n+ } catch( IndexAlreadyExistsException e ){\n+\n+ }\n+ }\n+ }\n+ }\n+\n+ class DeleteThread extends Thread{\n+ public void run() {\n+ for (int i=0; i<1000; ++i) {\n+ try {\n+ DeleteMappingResponse dmr = client().admin().indices().prepareDeleteMapping(\"test_index\").setType(\"test_type\").execute().actionGet();\n+ assertTrue(dmr.isAcknowledged());\n+ client().admin().indices().prepareDelete(\"test_index\").execute().actionGet();\n+ } catch ( IndexMissingException e) {\n+\n+ } catch( TypeMissingException te ){\n+\n+ }\n+ }\n+ }\n+ }\n+\n+ public void startAll(List<Thread> threads){\n+ for(Thread t : threads){\n+ t.start();\n+ }\n+ }\n+\n+ public void joinAll(List<Thread> threads) {\n+ for (Thread t : threads) {\n+ boolean joined = false;\n+ while(!joined) {\n+ try {\n+ t.join();\n+ joined = true;\n+ } catch (InterruptedException ie) {\n+ }\n+ }\n+ }\n+ }\n+\n+ @Test\n+ public void testMultiThreadedMappingTest(){\n+ List<Thread> createList = new ArrayList<>();\n+ List<Thread> deleteList = new ArrayList<>();\n+ for (int i = 0; i<10; ++i) {\n+ createList.add(new CreateThread());\n+ }\n+\n+ for (int i =0; i<10; ++i){\n+ deleteList.add(new DeleteThread());\n+ }\n+\n+ startAll(createList);\n+ startAll(deleteList);\n+\n+ joinAll(createList);\n+ joinAll(deleteList);\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/index/mapper/delete/MultithreadedDeleteMappingTest.java",
"status": "added"
}
]
} |
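The essence of the race-condition fix above is defensive handling of an index that disappears between request validation and the cluster-state update: the `IndexMetaData.Builder` must only be created once we know the index still exists, and the `TypeMissingException` must not assume a live index was ever seen. A small sketch of that logic, extracted from the `MetaDataMappingService` change for illustration (the helper class and method names are hypothetical; the surrounding cluster-state update task is omitted):

```java
// Sketch of the null guard added in MetaDataMappingService: a concurrent index deletion
// can make metaData().index(name) return null, so skip such indices instead of NPE-ing.
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.index.Index;
import org.elasticsearch.indices.TypeMissingException;

final class DeleteMappingGuard {

    /** Removes the given types from one index, returning true if any mapping was removed. */
    static boolean removeTypes(MetaData.Builder builder, IndexMetaData indexMetaData, String... types) {
        if (indexMetaData == null) {
            // the index was deleted out from underneath us; nothing to change here
            return false;
        }
        IndexMetaData.Builder indexBuilder = IndexMetaData.builder(indexMetaData);
        boolean changed = false;
        for (String type : types) {
            if (indexMetaData.mappings().containsKey(type)) {
                indexBuilder.removeMapping(type);
                changed = true;
            }
        }
        builder.put(indexBuilder);
        return changed;
    }

    static void throwTypeMissing(String[] requestedIndices, String latestIndexWithout, String... types) {
        // latestIndexWithout can be null if every requested index vanished concurrently
        String index = latestIndexWithout != null ? latestIndexWithout : requestedIndices[0];
        throw new TypeMissingException(new Index(index), types);
    }
}
```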
{
"body": "When using a BulkProcessor with setting concurrentRequests > 0 and an exception occurs on row 283 (client.bulk(...)) other than InterruptedException then it is not caught and subsequently the afterBulk function is never called.\n\nIt is reproducible by running:\n- a transport client without an elasticsearch to connect to\n- with the BulkProcessor configured with concurrentRequests > 0\n- sending in enough documents so that the bulk is sent\n\nThis problem leads to that you only get exception for 1 of the documents in the bulk and there is no way to know that the other documents also failed.\n",
"comments": [
{
"body": "FYI this bug is related to #4153, #4155 and #4158\n",
"created_at": "2014-02-06T13:56:34Z"
},
{
"body": "Any progress on this yet?\n",
"created_at": "2014-03-31T08:32:29Z"
},
{
"body": "I'm trying to implement a workaround for #6314 and this is required. Any update here for a fix?\n",
"created_at": "2014-05-27T12:50:42Z"
},
{
"body": "Hello, \n\nI'm seeing an issue with the bulk request API where I am sending many bulk requests simultaneously.\n\nWhen my index reaches a certain size and the amount of memory assigned to each shard is increased (i.e. increasing from 56Mb allocated to 128Mb allocated) the pending bulk requests fail to respond (i.e. no call to 'onResponse' or 'onFailure' in the BulkRequest ActionListener) effectively locking my application from sending any additional requests as the semaphore flags acquired before calling 'execute' are never released.\n\nCould this issue relate to a problem that I am seeing with the bulk request API? \n\nThanks,\nAndrew\n",
"created_at": "2014-06-13T14:01:18Z"
},
{
"body": "Hi @awnixon ,\nif you mean the semaphore within `BulkProcessor`, that's now released in a finally block. Which version of elasticsearch are you using though? This reminds me of #4153, it might be your problem if you're using an old version of elasticsearch,\n",
"created_at": "2014-06-13T14:20:06Z"
},
{
"body": "Hi @javanna,\n\nWithin my own client application I manage the number of bulk requests which may be sent using a semaphore flag which is acquired before calling execute and released once a response is received.\nThe issue you referenced does sound similar to my own, however I am using the 2.0.0-SNAPSHOT version (as of the 9th of June). \n\nAlso, I do not see any exceptions coming from the elasticsearch node. Simply the statement that the amount of memory allocated to each shard is to be increased at which point the number of pending requests in my client application steadily increases without any response from the server. I am still able to execute curl -XGet requests (i.e. elastisearch cluster state is reported as green on request).\n\nI hope that helps. \n\nAll the best, \nAndrew \n",
"created_at": "2014-06-13T14:33:38Z"
},
{
"body": "Then the problem can't be in the `BulkProcessor` as you are not using it. I'd suggest to switch to it though. There might be an issue in your client code that handles the semaphore?\n",
"created_at": "2014-06-13T14:59:17Z"
},
{
"body": "Thanks for the suggestion. I've switched to the BulkProcessor and found the\nsituation to be the same where I set the level of concurrency to > 1.\nI believe that I have found the root cause of my issue however. Whilst\nusing concurrent concurrent bulk requests I'm reaching (or exceeding) the\nspecified \"indices.memory.max_index_buffer_size\" (which I've set to 256Mb\nduring testing). When I increase the \"indices.memory.max_index_buffer_size\"\nvalue my issue no longer occurs. Would you expect elasticsearch to handle\nthis situation by rejecting some/all pending requests or should it be\nreturning to the current set of pending requests once the\n\"index_buffer_size\" for each shard has been updated?\n\nOn 13 June 2014 15:59, Luca Cavanna notifications@github.com wrote:\n\n> Then the problem can't be in the BulkProcessor as you are not using it.\n> I'd suggest to switch to it though. There might be an issue in your client\n> code that handles the semaphore?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/5038#issuecomment-46021420\n> .\n",
"created_at": "2014-06-13T18:47:57Z"
}
],
"number": 5038,
"title": "Java API: BulkProcessor does not call afterBulk when bulk throws eg NoNodeAvailableException"
} | {
"body": "Java API: Make sure afterBulk is always called in BulkProcessor.\n\nAlso strenghtened BulkProcessorTests by adding randomizations to existing tests and new tests for concurrent request.\n\nCloses #5038\n",
"number": 6495,
"review_comments": [
{
"body": "I think you should also check if we called it though?\n",
"created_at": "2014-06-13T09:59:53Z"
},
{
"body": "I might be wrong...but `InterruptedException` can only be thrown in `semaphore.acquire`, which is before `beforeBulk` gets called. IMO we should just remove this line, that's why I added the comment...\n",
"created_at": "2014-06-13T10:01:54Z"
},
{
"body": "yea makes sense or we move the beforeBulk before the semaphore which might make more sense since we want to keep the critical section as small as possible?\n",
"created_at": "2014-06-13T10:13:43Z"
},
{
"body": "I like the idea of moving the `beforeBulk` up one line. This removes the need for the `beforeCalled` variable as well. Pushed a new commit.\n",
"created_at": "2014-06-13T10:21:49Z"
},
{
"body": "maybe rename those methods to assert\\* as this is what they do... same with the next one\n",
"created_at": "2014-06-13T15:06:15Z"
},
{
"body": "sure, will change that!\n",
"created_at": "2014-06-13T15:09:07Z"
}
],
"title": "Make sure afterBulk is always called in BulkProcessor"
} | {
"commits": [
{
"message": "[TEST] Moved BulkProcessor tests to newly added BulkProcessorTests class"
},
{
"message": "Java API: Make sure afterBulk is always called in BulkProcessor\n\nAlso strenghtened BulkProcessorTests by adding randomizations to existing tests and new tests for concurrent requests and expcetions\n\nCloses #5038"
},
{
"message": "Updated according to review\n\nMoved listener.beforeBulk up one line, before sempahore.acquire. This also removes the need for the beforeCalled boolean variable."
},
{
"message": "Made sure that afterBulk is called only once per request if concurrentRequests==0"
},
{
"message": "[TEST] Covered the no concurrent requests in BulkProcessorTests randomizations"
},
{
"message": "[TEST] renamed check* test methods to assert*\n\nAlso remove needless test method parameters"
}
],
"files": [
{
"diff": "@@ -276,17 +276,22 @@ private void execute() {\n \n if (concurrentRequests == 0) {\n // execute in a blocking fashion...\n+ boolean afterCalled = false;\n try {\n listener.beforeBulk(executionId, bulkRequest);\n- listener.afterBulk(executionId, bulkRequest, client.bulk(bulkRequest).actionGet());\n+ BulkResponse bulkItemResponses = client.bulk(bulkRequest).actionGet();\n+ afterCalled = true;\n+ listener.afterBulk(executionId, bulkRequest, bulkItemResponses);\n } catch (Exception e) {\n- listener.afterBulk(executionId, bulkRequest, e);\n+ if (!afterCalled) {\n+ listener.afterBulk(executionId, bulkRequest, e);\n+ }\n }\n } else {\n boolean success = false;\n try {\n- semaphore.acquire();\n listener.beforeBulk(executionId, bulkRequest);\n+ semaphore.acquire();\n client.bulk(bulkRequest, new ActionListener<BulkResponse>() {\n @Override\n public void onResponse(BulkResponse response) {\n@@ -310,12 +315,13 @@ public void onFailure(Throwable e) {\n } catch (InterruptedException e) {\n Thread.interrupted();\n listener.afterBulk(executionId, bulkRequest, e);\n+ } catch (Throwable t) {\n+ listener.afterBulk(executionId, bulkRequest, t);\n } finally {\n if (!success) { // if we fail on client.bulk() release the semaphore\n semaphore.release();\n }\n }\n-\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,326 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.bulk;\n+\n+import org.elasticsearch.action.get.MultiGetItemResponse;\n+import org.elasticsearch.action.get.MultiGetRequestBuilder;\n+import org.elasticsearch.action.get.MultiGetResponse;\n+import org.elasticsearch.action.index.IndexRequest;\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.client.transport.TransportClient;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import java.util.Arrays;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Set;\n+import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.hamcrest.Matchers.*;\n+\n+public class BulkProcessorTests extends ElasticsearchIntegrationTest {\n+\n+ @Test\n+ public void testThatBulkProcessorCountIsCorrect() throws InterruptedException {\n+\n+ final CountDownLatch latch = new CountDownLatch(1);\n+ BulkProcessorTestListener listener = new BulkProcessorTestListener(latch);\n+\n+ int numDocs = randomIntBetween(10, 100);\n+ try (BulkProcessor processor = BulkProcessor.builder(client(), listener).setName(\"foo\")\n+ //let's make sure that the bulk action limit trips, one single execution will index all the documents\n+ .setConcurrentRequests(randomIntBetween(0, 1)).setBulkActions(numDocs)\n+ .setFlushInterval(TimeValue.timeValueHours(24)).setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB))\n+ .build()) {\n+\n+ MultiGetRequestBuilder multiGetRequestBuilder = indexDocs(client(), processor, numDocs);\n+\n+ latch.await();\n+\n+ assertThat(listener.beforeCounts.get(), equalTo(1));\n+ assertThat(listener.afterCounts.get(), equalTo(1));\n+ assertThat(listener.bulkFailures.size(), equalTo(0));\n+ assertResponseItems(listener.bulkItems, numDocs);\n+ assertMultiGetResponse(multiGetRequestBuilder.get(), numDocs);\n+ }\n+ }\n+\n+ @Test\n+ public void testBulkProcessorFlush() throws InterruptedException {\n+ final CountDownLatch latch = new CountDownLatch(1);\n+ BulkProcessorTestListener listener = new BulkProcessorTestListener(latch);\n+\n+ int numDocs = randomIntBetween(10, 100);\n+\n+ try (BulkProcessor processor = BulkProcessor.builder(client(), listener).setName(\"foo\")\n+ //let's make sure that this bulk won't be automatically flushed\n+ 
.setConcurrentRequests(randomIntBetween(0, 10)).setBulkActions(numDocs + randomIntBetween(1, 100))\n+ .setFlushInterval(TimeValue.timeValueHours(24)).setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB)).build()) {\n+\n+ MultiGetRequestBuilder multiGetRequestBuilder = indexDocs(client(), processor, numDocs);\n+\n+ assertThat(latch.await(randomInt(500), TimeUnit.MILLISECONDS), equalTo(false));\n+ //we really need an explicit flush as none of the bulk thresholds was reached\n+ processor.flush();\n+ latch.await();\n+\n+ assertThat(listener.beforeCounts.get(), equalTo(1));\n+ assertThat(listener.afterCounts.get(), equalTo(1));\n+ assertThat(listener.bulkFailures.size(), equalTo(0));\n+ assertResponseItems(listener.bulkItems, numDocs);\n+ assertMultiGetResponse(multiGetRequestBuilder.get(), numDocs);\n+ }\n+ }\n+\n+ @Test\n+ public void testBulkProcessorConcurrentRequests() throws Exception {\n+ int bulkActions = randomIntBetween(10, 100);\n+ int numDocs = randomIntBetween(bulkActions, bulkActions + 100);\n+ int concurrentRequests = randomIntBetween(0, 10);\n+\n+ int expectedBulkActions = numDocs / bulkActions;\n+\n+ final CountDownLatch latch = new CountDownLatch(expectedBulkActions);\n+ int totalExpectedBulkActions = numDocs % bulkActions == 0 ? expectedBulkActions : expectedBulkActions + 1;\n+ final CountDownLatch closeLatch = new CountDownLatch(totalExpectedBulkActions);\n+\n+ BulkProcessorTestListener listener = new BulkProcessorTestListener(latch, closeLatch);\n+\n+ MultiGetRequestBuilder multiGetRequestBuilder;\n+\n+ try (BulkProcessor processor = BulkProcessor.builder(client(), listener)\n+ .setConcurrentRequests(concurrentRequests).setBulkActions(bulkActions)\n+ //set interval and size to high values\n+ .setFlushInterval(TimeValue.timeValueHours(24)).setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB)).build()) {\n+\n+ multiGetRequestBuilder = indexDocs(client(), processor, numDocs);\n+\n+ latch.await();\n+\n+ assertThat(listener.beforeCounts.get(), equalTo(expectedBulkActions));\n+ assertThat(listener.afterCounts.get(), equalTo(expectedBulkActions));\n+ assertThat(listener.bulkFailures.size(), equalTo(0));\n+ assertThat(listener.bulkItems.size(), equalTo(numDocs - numDocs % bulkActions));\n+ }\n+\n+ closeLatch.await();\n+\n+ assertThat(listener.beforeCounts.get(), equalTo(totalExpectedBulkActions));\n+ assertThat(listener.afterCounts.get(), equalTo(totalExpectedBulkActions));\n+ assertThat(listener.bulkFailures.size(), equalTo(0));\n+ assertThat(listener.bulkItems.size(), equalTo(numDocs));\n+\n+ Set<String> ids = new HashSet<>();\n+ for (BulkItemResponse bulkItemResponse : listener.bulkItems) {\n+ assertThat(bulkItemResponse.isFailed(), equalTo(false));\n+ assertThat(bulkItemResponse.getIndex(), equalTo(\"test\"));\n+ assertThat(bulkItemResponse.getType(), equalTo(\"test\"));\n+ //with concurrent requests > 1 we can't rely on the order of the bulk requests\n+ assertThat(Integer.valueOf(bulkItemResponse.getId()), both(greaterThan(0)).and(lessThanOrEqualTo(numDocs)));\n+ //we do want to check that we don't get duplicate ids back\n+ assertThat(ids.add(bulkItemResponse.getId()), equalTo(true));\n+ }\n+\n+ assertMultiGetResponse(multiGetRequestBuilder.get(), numDocs);\n+ }\n+\n+ @Test\n+ //https://github.com/elasticsearch/elasticsearch/issues/5038\n+ public void testBulkProcessorConcurrentRequestsNoNodeAvailableException() throws Exception {\n+ //we create a transport client with no nodes to make sure it throws NoNodeAvailableException\n+ Client transportClient = new TransportClient();\n+\n+ 
int bulkActions = randomIntBetween(10, 100);\n+ int numDocs = randomIntBetween(bulkActions, bulkActions + 100);\n+ int concurrentRequests = randomIntBetween(0, 10);\n+\n+ int expectedBulkActions = numDocs / bulkActions;\n+\n+ final CountDownLatch latch = new CountDownLatch(expectedBulkActions);\n+ int totalExpectedBulkActions = numDocs % bulkActions == 0 ? expectedBulkActions : expectedBulkActions + 1;\n+ final CountDownLatch closeLatch = new CountDownLatch(totalExpectedBulkActions);\n+\n+ BulkProcessorTestListener listener = new BulkProcessorTestListener(latch, closeLatch);\n+\n+ try (BulkProcessor processor = BulkProcessor.builder(transportClient, listener)\n+ .setConcurrentRequests(concurrentRequests).setBulkActions(bulkActions)\n+ //set interval and size to high values\n+ .setFlushInterval(TimeValue.timeValueHours(24)).setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB)).build()) {\n+\n+ indexDocs(transportClient, processor, numDocs);\n+\n+ latch.await();\n+\n+ assertThat(listener.beforeCounts.get(), equalTo(expectedBulkActions));\n+ assertThat(listener.afterCounts.get(), equalTo(expectedBulkActions));\n+ assertThat(listener.bulkFailures.size(), equalTo(expectedBulkActions));\n+ assertThat(listener.bulkItems.size(), equalTo(0));\n+ }\n+\n+ closeLatch.await();\n+\n+ assertThat(listener.bulkFailures.size(), equalTo(totalExpectedBulkActions));\n+ assertThat(listener.bulkItems.size(), equalTo(0));\n+ }\n+\n+ @Test\n+ public void testBulkProcessorConcurrentRequestsReadOnlyIndex() throws Exception {\n+ createIndex(\"test-ro\");\n+ assertAcked(client().admin().indices().prepareUpdateSettings(\"test-ro\")\n+ .setSettings(ImmutableSettings.builder().put(\"index.blocks.read_only\", true)));\n+ ensureGreen();\n+\n+ int bulkActions = randomIntBetween(10, 100);\n+ int numDocs = randomIntBetween(bulkActions, bulkActions + 100);\n+ int concurrentRequests = randomIntBetween(0, 10);\n+\n+ int expectedBulkActions = numDocs / bulkActions;\n+\n+ final CountDownLatch latch = new CountDownLatch(expectedBulkActions);\n+ int totalExpectedBulkActions = numDocs % bulkActions == 0 ? 
expectedBulkActions : expectedBulkActions + 1;\n+ final CountDownLatch closeLatch = new CountDownLatch(totalExpectedBulkActions);\n+\n+ int testDocs = 0;\n+ int testReadOnlyDocs = 0;\n+ MultiGetRequestBuilder multiGetRequestBuilder = client().prepareMultiGet();\n+ BulkProcessorTestListener listener = new BulkProcessorTestListener(latch, closeLatch);\n+\n+ try (BulkProcessor processor = BulkProcessor.builder(client(), listener)\n+ .setConcurrentRequests(concurrentRequests).setBulkActions(bulkActions)\n+ //set interval and size to high values\n+ .setFlushInterval(TimeValue.timeValueHours(24)).setBulkSize(new ByteSizeValue(1, ByteSizeUnit.GB)).build()) {\n+\n+ for (int i = 1; i <= numDocs; i++) {\n+ if (randomBoolean()) {\n+ testDocs++;\n+ processor.add(new IndexRequest(\"test\", \"test\", Integer.toString(testDocs)).source(\"field\", \"value\"));\n+ multiGetRequestBuilder.add(\"test\", \"test\", Integer.toString(testDocs));\n+ } else {\n+ testReadOnlyDocs++;\n+ processor.add(new IndexRequest(\"test-ro\", \"test\", Integer.toString(testReadOnlyDocs)).source(\"field\", \"value\"));\n+ }\n+ }\n+ }\n+\n+ closeLatch.await();\n+\n+ assertThat(listener.beforeCounts.get(), equalTo(totalExpectedBulkActions));\n+ assertThat(listener.afterCounts.get(), equalTo(totalExpectedBulkActions));\n+ assertThat(listener.bulkFailures.size(), equalTo(0));\n+ assertThat(listener.bulkItems.size(), equalTo(testDocs + testReadOnlyDocs));\n+\n+ Set<String> ids = new HashSet<>();\n+ Set<String> readOnlyIds = new HashSet<>();\n+ for (BulkItemResponse bulkItemResponse : listener.bulkItems) {\n+ assertThat(bulkItemResponse.getIndex(), either(equalTo(\"test\")).or(equalTo(\"test-ro\")));\n+ assertThat(bulkItemResponse.getType(), equalTo(\"test\"));\n+ if (bulkItemResponse.getIndex().equals(\"test\")) {\n+ assertThat(bulkItemResponse.isFailed(), equalTo(false));\n+ //with concurrent requests > 1 we can't rely on the order of the bulk requests\n+ assertThat(Integer.valueOf(bulkItemResponse.getId()), both(greaterThan(0)).and(lessThanOrEqualTo(testDocs)));\n+ //we do want to check that we don't get duplicate ids back\n+ assertThat(ids.add(bulkItemResponse.getId()), equalTo(true));\n+ } else {\n+ assertThat(bulkItemResponse.isFailed(), equalTo(true));\n+ //with concurrent requests > 1 we can't rely on the order of the bulk requests\n+ assertThat(Integer.valueOf(bulkItemResponse.getId()), both(greaterThan(0)).and(lessThanOrEqualTo(testReadOnlyDocs)));\n+ //we do want to check that we don't get duplicate ids back\n+ assertThat(readOnlyIds.add(bulkItemResponse.getId()), equalTo(true));\n+ }\n+ }\n+\n+ assertMultiGetResponse(multiGetRequestBuilder.get(), testDocs);\n+ }\n+\n+ private static MultiGetRequestBuilder indexDocs(Client client, BulkProcessor processor, int numDocs) {\n+ MultiGetRequestBuilder multiGetRequestBuilder = client.prepareMultiGet();\n+ for (int i = 1; i <= numDocs; i++) {\n+ processor.add(new IndexRequest(\"test\", \"test\", Integer.toString(i)).source(\"field\", randomRealisticUnicodeOfLengthBetween(1, 30)));\n+ multiGetRequestBuilder.add(\"test\", \"test\", Integer.toString(i));\n+ }\n+ return multiGetRequestBuilder;\n+ }\n+\n+ private static void assertResponseItems(List<BulkItemResponse> bulkItemResponses, int numDocs) {\n+ assertThat(bulkItemResponses.size(), is(numDocs));\n+ int i = 1;\n+ for (BulkItemResponse bulkItemResponse : bulkItemResponses) {\n+ assertThat(bulkItemResponse.getIndex(), equalTo(\"test\"));\n+ assertThat(bulkItemResponse.getType(), equalTo(\"test\"));\n+ 
assertThat(bulkItemResponse.getId(), equalTo(Integer.toString(i++)));\n+ assertThat(bulkItemResponse.isFailed(), equalTo(false));\n+ }\n+ }\n+\n+ private static void assertMultiGetResponse(MultiGetResponse multiGetResponse, int numDocs) {\n+ assertThat(multiGetResponse.getResponses().length, equalTo(numDocs));\n+ int i = 1;\n+ for (MultiGetItemResponse multiGetItemResponse : multiGetResponse) {\n+ assertThat(multiGetItemResponse.getIndex(), equalTo(\"test\"));\n+ assertThat(multiGetItemResponse.getType(), equalTo(\"test\"));\n+ assertThat(multiGetItemResponse.getId(), equalTo(Integer.toString(i++)));\n+ }\n+ }\n+\n+ private static class BulkProcessorTestListener implements BulkProcessor.Listener {\n+\n+ private final CountDownLatch[] latches;\n+ private final AtomicInteger beforeCounts = new AtomicInteger();\n+ private final AtomicInteger afterCounts = new AtomicInteger();\n+ private final List<BulkItemResponse> bulkItems = new CopyOnWriteArrayList<>();\n+ private final List<Throwable> bulkFailures = new CopyOnWriteArrayList<>();\n+\n+ private BulkProcessorTestListener(CountDownLatch... latches) {\n+ this.latches = latches;\n+ }\n+\n+ @Override\n+ public void beforeBulk(long executionId, BulkRequest request) {\n+ beforeCounts.incrementAndGet();\n+ }\n+\n+ @Override\n+ public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {\n+ bulkItems.addAll(Arrays.asList(response.getItems()));\n+ afterCounts.incrementAndGet();\n+ for (CountDownLatch latch : latches) {\n+ latch.countDown();\n+ }\n+ }\n+\n+ @Override\n+ public void afterBulk(long executionId, BulkRequest request, Throwable failure) {\n+ bulkFailures.add(failure);\n+ afterCounts.incrementAndGet();\n+ for (CountDownLatch latch : latches) {\n+ latch.countDown();\n+ }\n+ }\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/action/bulk/BulkProcessorTests.java",
"status": "added"
},
{
"diff": "@@ -20,14 +20,10 @@\n package org.elasticsearch.document;\n \n import com.google.common.base.Charsets;\n-import com.google.common.collect.Maps;\n-import org.elasticsearch.action.bulk.BulkProcessor;\n-import org.elasticsearch.action.bulk.BulkRequest;\n import org.elasticsearch.action.bulk.BulkRequestBuilder;\n import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.count.CountResponse;\n import org.elasticsearch.action.get.GetResponse;\n-import org.elasticsearch.action.index.IndexRequest;\n import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.update.UpdateRequestBuilder;\n@@ -41,10 +37,7 @@\n import org.junit.Test;\n \n import java.util.ArrayList;\n-import java.util.Map;\n-import java.util.concurrent.CountDownLatch;\n import java.util.concurrent.CyclicBarrier;\n-import java.util.concurrent.atomic.AtomicReference;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n@@ -576,50 +569,6 @@ public void preParsingSourceDueToIdShouldNotBreakCompleteBulkRequest() throws Ex\n assertExists(get(\"test\", \"type\", \"48\"));\n }\n \n- @Test\n- public void testThatBulkProcessorCountIsCorrect() throws InterruptedException {\n- final AtomicReference<BulkResponse> responseRef = new AtomicReference<>();\n- final AtomicReference<Throwable> failureRef = new AtomicReference<>();\n- final CountDownLatch latch = new CountDownLatch(1);\n- BulkProcessor.Listener listener = new BulkProcessor.Listener() {\n- @Override\n- public void beforeBulk(long executionId, BulkRequest request) {\n- }\n-\n- @Override\n- public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {\n- responseRef.set(response);\n- latch.countDown();\n- }\n-\n- @Override\n- public void afterBulk(long executionId, BulkRequest request, Throwable failure) {\n- failureRef.set(failure);\n- latch.countDown();\n- }\n- };\n-\n-\n- try (BulkProcessor processor = BulkProcessor.builder(client(), listener).setBulkActions(5)\n- .setConcurrentRequests(1).setName(\"foo\").build()) {\n- Map<String, Object> data = Maps.newHashMap();\n- data.put(\"foo\", \"bar\");\n-\n- processor.add(new IndexRequest(\"test\", \"test\", \"1\").source(data));\n- processor.add(new IndexRequest(\"test\", \"test\", \"2\").source(data));\n- processor.add(new IndexRequest(\"test\", \"test\", \"3\").source(data));\n- processor.add(new IndexRequest(\"test\", \"test\", \"4\").source(data));\n- processor.add(new IndexRequest(\"test\", \"test\", \"5\").source(data));\n-\n- latch.await();\n- BulkResponse response = responseRef.get();\n- Throwable error = failureRef.get();\n- assertThat(error, nullValue());\n- assertThat(\"Could not get a bulk response even after an explicit flush.\", response, notNullValue());\n- assertThat(response.getItems().length, is(5));\n- }\n- }\n-\n @Test // issue 4987\n public void testThatInvalidIndexNamesShouldNotBreakCompleteBulkRequest() {\n int bulkEntryCount = randomIntBetween(10, 50);\n@@ -647,49 +596,5 @@ public void testThatInvalidIndexNamesShouldNotBreakCompleteBulkRequest() {\n assertThat(bulkResponse.getItems()[i].isFailed(), is(expectedFailures[i]));\n }\n }\n-\n- @Test\n- public void testBulkProcessorFlush() throws InterruptedException {\n- final AtomicReference<BulkResponse> responseRef = new AtomicReference<>();\n- final AtomicReference<Throwable> failureRef = new AtomicReference<>();\n- final 
CountDownLatch latch = new CountDownLatch(1);\n- BulkProcessor.Listener listener = new BulkProcessor.Listener() {\n- @Override\n- public void beforeBulk(long executionId, BulkRequest request) {\n- }\n-\n- @Override\n- public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {\n- responseRef.set(response);\n- latch.countDown();\n- }\n-\n- @Override\n- public void afterBulk(long executionId, BulkRequest request, Throwable failure) {\n- failureRef.set(failure);\n- latch.countDown();\n- }\n- };\n-\n- try (BulkProcessor processor = BulkProcessor.builder(client(), listener).setBulkActions(6)\n- .setConcurrentRequests(1).setName(\"foo\").build()) {\n- Map<String, Object> data = Maps.newHashMap();\n- data.put(\"foo\", \"bar\");\n-\n- processor.add(new IndexRequest(\"test\", \"test\", \"1\").source(data));\n- processor.add(new IndexRequest(\"test\", \"test\", \"2\").source(data));\n- processor.add(new IndexRequest(\"test\", \"test\", \"3\").source(data));\n- processor.add(new IndexRequest(\"test\", \"test\", \"4\").source(data));\n- processor.add(new IndexRequest(\"test\", \"test\", \"5\").source(data));\n-\n- processor.flush();\n- latch.await();\n- BulkResponse response = responseRef.get();\n- Throwable error = failureRef.get();\n- assertThat(error, nullValue());\n- assertThat(\"Could not get a bulk response even after an explicit flush.\", response, notNullValue());\n- assertThat(response.getItems().length, is(5));\n- }\n- }\n }\n ",
"filename": "src/test/java/org/elasticsearch/document/BulkTests.java",
"status": "modified"
}
]
} |
{
"body": "If a snapshot is interrupted by running of disk space or connection loss to the repository folder, it might be not possible to delete such snapshot.\n",
"comments": [],
"number": 6383,
"title": "Snapshot/Restore: Allow deleting of interrupted snapshot"
} | {
"body": "Makes it possible to delete snapshots that are missing some of the metadata files. This can happen if snapshot creation failed because repository drive ran out of disk space.\n\nCloses #6383\n",
"number": 6453,
"review_comments": [
{
"body": "you can use `try/with` here since XContentParser implements Closeable?\n",
"created_at": "2014-06-18T19:05:51Z"
},
{
"body": "do we nee the `TRACE` logging here?\n",
"created_at": "2014-06-18T19:23:41Z"
},
{
"body": "can you please use `indexRandom` and maybe use a random number of docs?\n",
"created_at": "2014-06-18T19:24:04Z"
}
],
"title": "Improve deletion of corrupted snapshots"
} | {
"commits": [
{
"message": "Improve deletion of corrupted snapshots\n\nMakes it possible to delete snapshots that are missing some of the metadata files. This can happen if snapshot creation failed because repository drive ran out of disk space.\n\nCloses #6383"
}
],
"files": [
{
"diff": "@@ -259,7 +259,7 @@ public void initializeSnapshot(SnapshotId snapshotId, ImmutableList<String> indi\n @Override\n public void deleteSnapshot(SnapshotId snapshotId) {\n Snapshot snapshot = readSnapshot(snapshotId);\n- MetaData metaData = readSnapshotMetaData(snapshotId, snapshot.indices());\n+ MetaData metaData = readSnapshotMetaData(snapshotId, snapshot.indices(), true);\n try {\n String blobName = snapshotBlobName(snapshotId);\n // Delete snapshot file first so we wouldn't end up with partially deleted snapshot that looks OK\n@@ -284,11 +284,13 @@ public void deleteSnapshot(SnapshotId snapshotId) {\n try {\n indexMetaDataBlobContainer.deleteBlob(blobName);\n } catch (IOException ex) {\n- throw new SnapshotException(snapshotId, \"failed to delete metadata\", ex);\n+ logger.warn(\"[{}] failed to delete metadata for index [{}]\", ex, snapshotId, index);\n }\n IndexMetaData indexMetaData = metaData.index(index);\n- for (int i = 0; i < indexMetaData.getNumberOfShards(); i++) {\n- indexShardRepository.delete(snapshotId, new ShardId(index, i));\n+ if (indexMetaData != null) {\n+ for (int i = 0; i < indexMetaData.getNumberOfShards(); i++) {\n+ indexShardRepository.delete(snapshotId, new ShardId(index, i));\n+ }\n }\n }\n } catch (IOException ex) {\n@@ -367,41 +369,7 @@ public ImmutableList<SnapshotId> snapshots() {\n */\n @Override\n public MetaData readSnapshotMetaData(SnapshotId snapshotId, ImmutableList<String> indices) {\n- MetaData metaData;\n- try {\n- byte[] data = snapshotsBlobContainer.readBlobFully(metaDataBlobName(snapshotId));\n- metaData = readMetaData(data);\n- } catch (FileNotFoundException | NoSuchFileException ex) {\n- throw new SnapshotMissingException(snapshotId, ex);\n- } catch (IOException ex) {\n- throw new SnapshotException(snapshotId, \"failed to get snapshots\", ex);\n- }\n- MetaData.Builder metaDataBuilder = MetaData.builder(metaData);\n- for (String index : indices) {\n- BlobPath indexPath = basePath().add(\"indices\").add(index);\n- ImmutableBlobContainer indexMetaDataBlobContainer = blobStore().immutableBlobContainer(indexPath);\n- XContentParser parser = null;\n- try {\n- byte[] data = indexMetaDataBlobContainer.readBlobFully(snapshotBlobName(snapshotId));\n- parser = XContentHelper.createParser(data, 0, data.length);\n- XContentParser.Token token;\n- if ((token = parser.nextToken()) == XContentParser.Token.START_OBJECT) {\n- IndexMetaData indexMetaData = IndexMetaData.Builder.fromXContent(parser);\n- if ((token = parser.nextToken()) == XContentParser.Token.END_OBJECT) {\n- metaDataBuilder.put(indexMetaData, false);\n- continue;\n- }\n- }\n- throw new ElasticsearchParseException(\"unexpected token [\" + token + \"]\");\n- } catch (IOException ex) {\n- throw new SnapshotException(snapshotId, \"failed to read metadata\", ex);\n- } finally {\n- if (parser != null) {\n- parser.close();\n- }\n- }\n- }\n- return metaDataBuilder.build();\n+ return readSnapshotMetaData(snapshotId, indices, false);\n }\n \n /**\n@@ -439,6 +407,48 @@ public Snapshot readSnapshot(SnapshotId snapshotId) {\n }\n }\n \n+ private MetaData readSnapshotMetaData(SnapshotId snapshotId, ImmutableList<String> indices, boolean ignoreIndexErrors) {\n+ MetaData metaData;\n+ try {\n+ byte[] data = snapshotsBlobContainer.readBlobFully(metaDataBlobName(snapshotId));\n+ metaData = readMetaData(data);\n+ } catch (FileNotFoundException | NoSuchFileException ex) {\n+ throw new SnapshotMissingException(snapshotId, ex);\n+ } catch (IOException ex) {\n+ throw new SnapshotException(snapshotId, \"failed to 
get snapshots\", ex);\n+ }\n+ MetaData.Builder metaDataBuilder = MetaData.builder(metaData);\n+ for (String index : indices) {\n+ BlobPath indexPath = basePath().add(\"indices\").add(index);\n+ ImmutableBlobContainer indexMetaDataBlobContainer = blobStore().immutableBlobContainer(indexPath);\n+ try {\n+ byte[] data = indexMetaDataBlobContainer.readBlobFully(snapshotBlobName(snapshotId));\n+ try (XContentParser parser = XContentHelper.createParser(data, 0, data.length)) {\n+ XContentParser.Token token;\n+ if ((token = parser.nextToken()) == XContentParser.Token.START_OBJECT) {\n+ IndexMetaData indexMetaData = IndexMetaData.Builder.fromXContent(parser);\n+ if ((token = parser.nextToken()) == XContentParser.Token.END_OBJECT) {\n+ metaDataBuilder.put(indexMetaData, false);\n+ continue;\n+ }\n+ }\n+ if (!ignoreIndexErrors) {\n+ throw new ElasticsearchParseException(\"unexpected token [\" + token + \"]\");\n+ } else {\n+ logger.warn(\"[{}] [{}] unexpected token while reading snapshot metadata [{}]\", snapshotId, index, token);\n+ }\n+ }\n+ } catch (IOException ex) {\n+ if (!ignoreIndexErrors) {\n+ throw new SnapshotException(snapshotId, \"failed to read metadata\", ex);\n+ } else {\n+ logger.warn(\"[{}] [{}] failed to read metadata for index\", snapshotId, index, ex);\n+ }\n+ }\n+ }\n+ return metaDataBuilder.build();\n+ }\n+\n /**\n * Configures RateLimiter based on repository and global settings\n *\n@@ -465,9 +475,7 @@ private RateLimiter getRateLimiter(RepositorySettings repositorySettings, String\n * @throws IOException parse exceptions\n */\n private BlobStoreSnapshot readSnapshot(byte[] data) throws IOException {\n- XContentParser parser = null;\n- try {\n- parser = XContentHelper.createParser(data, 0, data.length);\n+ try (XContentParser parser = XContentHelper.createParser(data, 0, data.length)) {\n XContentParser.Token token;\n if ((token = parser.nextToken()) == XContentParser.Token.START_OBJECT) {\n if ((token = parser.nextToken()) == XContentParser.Token.FIELD_NAME) {\n@@ -479,10 +487,6 @@ private BlobStoreSnapshot readSnapshot(byte[] data) throws IOException {\n }\n }\n throw new ElasticsearchParseException(\"unexpected token [\" + token + \"]\");\n- } finally {\n- if (parser != null) {\n- parser.close();\n- }\n }\n }\n \n@@ -494,9 +498,7 @@ private BlobStoreSnapshot readSnapshot(byte[] data) throws IOException {\n * @throws IOException parse exceptions\n */\n private MetaData readMetaData(byte[] data) throws IOException {\n- XContentParser parser = null;\n- try {\n- parser = XContentHelper.createParser(data, 0, data.length);\n+ try (XContentParser parser = XContentHelper.createParser(data, 0, data.length)) {\n XContentParser.Token token;\n if ((token = parser.nextToken()) == XContentParser.Token.START_OBJECT) {\n if ((token = parser.nextToken()) == XContentParser.Token.FIELD_NAME) {\n@@ -508,10 +510,6 @@ private MetaData readMetaData(byte[] data) throws IOException {\n }\n }\n throw new ElasticsearchParseException(\"unexpected token [\" + token + \"]\");\n- } finally {\n- if (parser != null) {\n- parser.close();\n- }\n }\n }\n \n@@ -615,9 +613,7 @@ protected void writeSnapshotList(ImmutableList<SnapshotId> snapshots) throws IOE\n protected ImmutableList<SnapshotId> readSnapshotList() throws IOException {\n byte[] data = snapshotsBlobContainer.readBlobFully(SNAPSHOTS_FILE);\n ArrayList<SnapshotId> snapshots = new ArrayList<>();\n- XContentParser parser = null;\n- try {\n- parser = XContentHelper.createParser(data, 0, data.length);\n+ try (XContentParser parser = 
XContentHelper.createParser(data, 0, data.length)) {\n if (parser.nextToken() == XContentParser.Token.START_OBJECT) {\n if (parser.nextToken() == XContentParser.Token.FIELD_NAME) {\n String currentFieldName = parser.currentName();\n@@ -630,10 +626,6 @@ protected ImmutableList<SnapshotId> readSnapshotList() throws IOException {\n }\n }\n }\n- } finally {\n- if (parser != null) {\n- parser.close();\n- }\n }\n return ImmutableList.copyOf(snapshots);\n }",
"filename": "src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java",
"status": "modified"
},
{
"diff": "@@ -629,6 +629,45 @@ public void deleteSnapshotTest() throws Exception {\n assertThat(numberOfFiles(repo), equalTo(numberOfFiles[0]));\n }\n \n+ @Test\n+ public void deleteSnapshotWithMissingIndexAndShardMetadataTest() throws Exception {\n+ Client client = client();\n+\n+ File repo = newTempDir(LifecycleScope.SUITE);\n+ logger.info(\"--> creating repository at \" + repo.getAbsolutePath());\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", repo)\n+ .put(\"compress\", false)\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ createIndex(\"test-idx-1\", \"test-idx-2\");\n+ ensureYellow();\n+ logger.info(\"--> indexing some data\");\n+ indexRandom(true,\n+ client().prepareIndex(\"test-idx-1\", \"doc\").setSource(\"foo\", \"bar\"),\n+ client().prepareIndex(\"test-idx-2\", \"doc\").setSource(\"foo\", \"bar\"));\n+\n+ logger.info(\"--> creating snapshot\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setWaitForCompletion(true).setIndices(\"test-idx-*\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ logger.info(\"--> delete index metadata and shard metadata\");\n+ File indices = new File(repo, \"indices\");\n+ File testIndex1 = new File(indices, \"test-idx-1\");\n+ File testIndex2 = new File(indices, \"test-idx-2\");\n+ File testIndex2Shard0 = new File(testIndex2, \"0\");\n+ new File(testIndex1, \"snapshot-test-snap-1\").delete();\n+ new File(testIndex2Shard0, \"snapshot-test-snap-1\").delete();\n+\n+ logger.info(\"--> delete snapshot\");\n+ client.admin().cluster().prepareDeleteSnapshot(\"test-repo\", \"test-snap-1\").get();\n+\n+ logger.info(\"--> make sure snapshot doesn't exist\");\n+ assertThrows(client.admin().cluster().prepareGetSnapshots(\"test-repo\").addSnapshots(\"test-snap-1\"), SnapshotMissingException.class);\n+ }\n+\n @Test\n @TestLogging(\"snapshots:TRACE\")\n public void snapshotClosedIndexTest() throws Exception {",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
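The snapshot-deletion fix in the record above hinges on a lenient-read mode: when per-index metadata is only needed to drive a delete, a missing or corrupt blob is logged and skipped rather than aborting the whole operation. A minimal sketch of that pattern, using placeholder names (readIndexMetaData is not the actual Elasticsearch API, just an illustrative stand-in):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

class LenientMetaDataReader {
    // Strict mode propagates the first failure; lenient mode (used for deletes)
    // logs and keeps going so a half-written snapshot can still be removed.
    List<String> readAll(List<String> indices, boolean ignoreIndexErrors) throws IOException {
        List<String> result = new ArrayList<>();
        for (String index : indices) {
            try {
                result.add(readIndexMetaData(index)); // may fail if the blob is missing or corrupt
            } catch (IOException ex) {
                if (!ignoreIndexErrors) {
                    throw ex; // strict mode: fail the whole read
                }
                System.err.println("skipping unreadable metadata for index " + index);
            }
        }
        return result;
    }

    // Placeholder for reading and parsing the per-index metadata blob.
    String readIndexMetaData(String index) throws IOException {
        return index;
    }
}
```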
{
"body": "There is a pretty nasty [bug](https://issues.apache.org/jira/browse/LUCENE-5738) in the lock factory we use that can cause nodes to use the same data dir wiping each others data. Luckily this is unlikely to happen if the nodes are running in different JVM which they do unless they are embedded. We should place the fix for this as an X-Class until the fix is released.\n\nyet this might have other side-effects that we haven't uncovered yet so I mark it critical\n",
"comments": [
{
"body": "+1, looks great\n",
"created_at": "2014-06-06T09:44:55Z"
}
],
"number": 6424,
"title": "Use XNativeFSLockFactory instead of the buggy Lucene 4.8.1 version"
} | {
"body": "There is a pretty nasty bug in the lock factory we use that can cause\nnodes to use the same data dir wiping each others data. Luckily this is\nunlikely to happen if the nodes are running in different JVM which they\ndo unless they are embedded.\n\nSee LUCENE-5738\n\nCloses #6424\n",
"number": 6425,
"review_comments": [],
"title": "FileSystem: Use XNativeFSLockFactory instead of the buggy Lucene 4.8.1 version"
} | {
"commits": [
{
"message": "FileSystem: Use XNativeFSLockFactory instead of the buggy Lucene 4.8.1 version\n\nThere is a pretty nasty bug in the lock factory we use that can cause\nnodes to use the same data dir wiping each others data. Luckily this is\nunlikely to happen if the nodes are running in different JVM which they\ndo unless they are embedded.\n\nSee LUCENE-5738\n\nCloses #6424"
}
],
"files": [
{
"diff": "@@ -58,3 +58,6 @@ com.google.common.primitives.Longs#compare(long,long)\n \n @defaultMessage we have an optimized XStringField to reduce analysis creation overhead\n org.apache.lucene.document.Field#<init>(java.lang.String,java.lang.String,org.apache.lucene.document.FieldType)\n+\n+@defaultMessage Use XNativeFSLockFactory instead of the buggy NativeFSLockFactory see LUCENE-5738 - remove once Lucene 4.9 is released\n+org.apache.lucene.store.NativeFSLockFactory",
"filename": "core-signatures.txt",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,246 @@\n+package org.apache.lucene.store;\n+\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache License, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+import java.nio.channels.FileChannel;\n+import java.nio.channels.FileLock;\n+import java.nio.channels.OverlappingFileLockException;\n+import java.nio.file.StandardOpenOption;\n+import java.io.File;\n+import java.io.IOException;\n+import java.util.Collections;\n+import java.util.HashSet;\n+import java.util.Set;\n+\n+import org.apache.lucene.util.IOUtils;\n+import org.elasticsearch.Version;\n+\n+/**\n+ * <p>Implements {@link LockFactory} using native OS file\n+ * locks. Note that because this LockFactory relies on\n+ * java.nio.* APIs for locking, any problems with those APIs\n+ * will cause locking to fail. Specifically, on certain NFS\n+ * environments the java.nio.* locks will fail (the lock can\n+ * incorrectly be double acquired) whereas {@link\n+ * SimpleFSLockFactory} worked perfectly in those same\n+ * environments. For NFS based access to an index, it's\n+ * recommended that you try {@link SimpleFSLockFactory}\n+ * first and work around the one limitation that a lock file\n+ * could be left when the JVM exits abnormally.</p>\n+ *\n+ * <p>The primary benefit of {@link XNativeFSLockFactory} is\n+ * that locks (not the lock file itsself) will be properly\n+ * removed (by the OS) if the JVM has an abnormal exit.</p>\n+ * \n+ * <p>Note that, unlike {@link SimpleFSLockFactory}, the existence of\n+ * leftover lock files in the filesystem is fine because the OS\n+ * will free the locks held against these files even though the\n+ * files still remain. Lucene will never actively remove the lock\n+ * files, so although you see them, the index may not be locked.</p>\n+ *\n+ * <p>Special care needs to be taken if you change the locking\n+ * implementation: First be certain that no writer is in fact\n+ * writing to the index otherwise you can easily corrupt\n+ * your index. Be sure to do the LockFactory change on all Lucene\n+ * instances and clean up all leftover lock files before starting\n+ * the new configuration for the first time. Different implementations\n+ * can not work together!</p>\n+ *\n+ * <p>If you suspect that this or any other LockFactory is\n+ * not working properly in your environment, you can easily\n+ * test it by using {@link VerifyingLockFactory}, {@link\n+ * LockVerifyServer} and {@link LockStressTest}.</p>\n+ *\n+ * @see LockFactory\n+ */\n+\n+public class XNativeFSLockFactory extends FSLockFactory {\n+\n+ static {\n+ assert Version.CURRENT.luceneVersion == org.apache.lucene.util.Version.LUCENE_48 : \"Remove this class in Lucene 4.9\";\n+ }\n+\n+ /**\n+ * Create a XNativeFSLockFactory instance, with null (unset)\n+ * lock directory. 
When you pass this factory to a {@link FSDirectory}\n+ * subclass, the lock directory is automatically set to the\n+ * directory itself. Be sure to create one instance for each directory\n+ * your create!\n+ */\n+ public XNativeFSLockFactory() {\n+ this((File) null);\n+ }\n+\n+ /**\n+ * Create a XNativeFSLockFactory instance, storing lock\n+ * files into the specified lockDirName:\n+ *\n+ * @param lockDirName where lock files are created.\n+ */\n+ public XNativeFSLockFactory(String lockDirName) {\n+ this(new File(lockDirName));\n+ }\n+\n+ /**\n+ * Create a XNativeFSLockFactory instance, storing lock\n+ * files into the specified lockDir:\n+ * \n+ * @param lockDir where lock files are created.\n+ */\n+ public XNativeFSLockFactory(File lockDir) {\n+ setLockDir(lockDir);\n+ }\n+\n+ @Override\n+ public synchronized Lock makeLock(String lockName) {\n+ if (lockPrefix != null)\n+ lockName = lockPrefix + \"-\" + lockName;\n+ return new NativeFSLock(lockDir, lockName);\n+ }\n+\n+ @Override\n+ public void clearLock(String lockName) throws IOException {\n+ makeLock(lockName).close();\n+ }\n+}\n+\n+class NativeFSLock extends Lock {\n+\n+ private FileChannel channel;\n+ private FileLock lock;\n+ private File path;\n+ private File lockDir;\n+ private static final Set<String> LOCK_HELD = Collections.synchronizedSet(new HashSet<String>());\n+\n+\n+ public NativeFSLock(File lockDir, String lockFileName) {\n+ this.lockDir = lockDir;\n+ path = new File(lockDir, lockFileName);\n+ }\n+\n+\n+ @Override\n+ public synchronized boolean obtain() throws IOException {\n+\n+ if (lock != null) {\n+ // Our instance is already locked:\n+ return false;\n+ }\n+\n+ // Ensure that lockDir exists and is a directory.\n+ if (!lockDir.exists()) {\n+ if (!lockDir.mkdirs())\n+ throw new IOException(\"Cannot create directory: \" +\n+ lockDir.getAbsolutePath());\n+ } else if (!lockDir.isDirectory()) {\n+ // TODO: NoSuchDirectoryException instead?\n+ throw new IOException(\"Found regular file where directory expected: \" + \n+ lockDir.getAbsolutePath());\n+ }\n+ final String canonicalPath = path.getCanonicalPath();\n+ // Make sure nobody else in-process has this lock held\n+ // already, and, mark it held if not:\n+ // This is a pretty crazy workaround for some documented\n+ // but yet awkward JVM behavior:\n+ //\n+ // On some systems, closing a channel releases all locks held by the Java virtual machine on the underlying file\n+ // regardless of whether the locks were acquired via that channel or via another channel open on the same file.\n+ // It is strongly recommended that, within a program, a unique channel be used to acquire all locks on any given\n+ // file.\n+ //\n+ // This essentially means if we close \"A\" channel for a given file all locks might be released... the odd part\n+ // is that we can't re-obtain the lock in the same JVM but from a different process if that happens. Nevertheless\n+ // this is super trappy. See LUCENE-5738\n+ boolean obtained = false;\n+ if (LOCK_HELD.add(canonicalPath)) {\n+ try {\n+ channel = FileChannel.open(path.toPath(), StandardOpenOption.CREATE, StandardOpenOption.WRITE);\n+ try {\n+ lock = channel.tryLock();\n+ obtained = lock != null;\n+ } catch (IOException | OverlappingFileLockException e) {\n+ // At least on OS X, we will sometimes get an\n+ // intermittent \"Permission Denied\" IOException,\n+ // which seems to simply mean \"you failed to get\n+ // the lock\". But other IOExceptions could be\n+ // \"permanent\" (eg, locking is not supported via\n+ // the filesystem). 
So, we record the failure\n+ // reason here; the timeout obtain (usually the\n+ // one calling us) will use this as \"root cause\"\n+ // if it fails to get the lock.\n+ failureReason = e;\n+ }\n+ } finally {\n+ if (obtained == false) { // not successful - clear up and move out\n+ clearLockHeld(path);\n+ final FileChannel toClose = channel;\n+ channel = null;\n+ IOUtils.closeWhileHandlingException(toClose);\n+ }\n+ }\n+ }\n+ return obtained;\n+ }\n+\n+ @Override\n+ public synchronized void close() throws IOException {\n+ try {\n+ if (lock != null) {\n+ try {\n+ lock.release();\n+ lock = null;\n+ } finally {\n+ clearLockHeld(path);\n+ }\n+ }\n+ } finally {\n+ IOUtils.close(channel);\n+ channel = null;\n+ }\n+ }\n+\n+ private static final void clearLockHeld(File path) throws IOException {\n+ boolean remove = LOCK_HELD.remove(path.getCanonicalPath());\n+ assert remove : \"Lock was cleared but never marked as held\";\n+ }\n+\n+ @Override\n+ public synchronized boolean isLocked() {\n+ // The test for is isLocked is not directly possible with native file locks:\n+ \n+ // First a shortcut, if a lock reference in this instance is available\n+ if (lock != null) return true;\n+ \n+ // Look if lock file is present; if not, there can definitely be no lock!\n+ if (!path.exists()) return false;\n+ \n+ // Try to obtain and release (if was locked) the lock\n+ try {\n+ boolean obtained = obtain();\n+ if (obtained) close();\n+ return !obtained;\n+ } catch (IOException ioe) {\n+ return false;\n+ } \n+ }\n+\n+ @Override\n+ public String toString() {\n+ return \"NativeFSLock@\" + path;\n+ }\n+}",
"filename": "src/main/java/org/apache/lucene/store/XNativeFSLockFactory.java",
"status": "added"
},
{
"diff": "@@ -22,7 +22,7 @@\n import com.google.common.collect.Sets;\n import com.google.common.primitives.Ints;\n import org.apache.lucene.store.Lock;\n-import org.apache.lucene.store.NativeFSLockFactory;\n+import org.apache.lucene.store.XNativeFSLockFactory;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.cluster.node.DiscoveryNode;\n@@ -78,7 +78,7 @@ public NodeEnvironment(Settings settings, Environment environment) {\n }\n logger.trace(\"obtaining node lock on {} ...\", dir.getAbsolutePath());\n try {\n- NativeFSLockFactory lockFactory = new NativeFSLockFactory(dir);\n+ XNativeFSLockFactory lockFactory = new XNativeFSLockFactory(dir);\n Lock tmpLock = lockFactory.makeLock(\"node.lock\");\n boolean obtained = tmpLock.obtain();\n if (obtained) {",
"filename": "src/main/java/org/elasticsearch/env/NodeEnvironment.java",
"status": "modified"
},
{
"diff": "@@ -64,7 +64,7 @@ protected final LockFactory buildLockFactory() throws IOException {\n LockFactory lockFactory = NoLockFactory.getNoLockFactory();\n if (fsLock.equals(\"native\")) {\n // TODO LUCENE MONITOR: this is not needed in next Lucene version\n- lockFactory = new NativeFSLockFactory();\n+ lockFactory = new XNativeFSLockFactory();\n } else if (fsLock.equals(\"simple\")) {\n lockFactory = new SimpleFSLockFactory();\n } else if (fsLock.equals(\"none\")) {",
"filename": "src/main/java/org/elasticsearch/index/store/fs/FsDirectoryService.java",
"status": "modified"
}
]
} |
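The X-class in the record above works around LUCENE-5738 by pairing the OS-level file lock with an in-JVM set of held paths, because closing any channel on a file can release every lock the JVM holds on that file. A minimal sketch of that two-level guard using only standard java.nio; NodeLockSketch is an illustrative assumption, not the Lucene or Elasticsearch implementation:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class NodeLockSketch {
    // In-JVM guard: closing any channel on a file may drop every lock this JVM
    // holds on it (the LUCENE-5738 trap), so the path is tracked here as well.
    private static final Set<String> HELD = Collections.synchronizedSet(new HashSet<String>());

    static boolean tryLock(Path path) throws IOException {
        String key = path.toAbsolutePath().toString();
        if (!HELD.add(key)) {
            return false; // already locked through some channel in this JVM
        }
        FileChannel channel = FileChannel.open(path, StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        try {
            FileLock lock = channel.tryLock(); // OS-level lock, released automatically if the JVM dies
            if (lock != null) {
                return true; // sketch only: the caller must keep the channel and lock open
            }
        } catch (IOException | OverlappingFileLockException e) {
            // treated the same as "failed to obtain"
        }
        HELD.remove(key); // lock not obtained: undo the in-JVM reservation
        channel.close();
        return false;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(tryLock(Paths.get("node.lock"))); // true on the first call
        System.out.println(tryLock(Paths.get("node.lock"))); // false: held in-process already
    }
}
```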
{
"body": "One noticeable issue found with ES 1.2.0 during our deployment is that it threw exception when created default mappings with ‘include_in_all’ nested under it (doesn’t matter it’s set to true/false). For example, the following index creation command returns error when against ES 1.2.0 but it works well against ES 1.1.1 \n\nPUT test \n{ \n \"mappings\": { \n \"_default_\": { \n \"include_in_all\": true \n } \n } \n} \n\nResult \n{ \n \"error\": \"MapperParsingException[mapping [_default_]]; nested: MapperParsingException[Root type mapping not empty after parsing! Remaining fields: [include_in_all : true]]; \", \n \"status\": 400 \n} \n\nIs this a regression with ES 1.2.0? \n",
"comments": [
{
"body": "HI @JeffreyZZ \n\nThat mapping is incorrect. `include_in_all` needs to be specified under a field, not at the top level. In previous versions it was just ignored, but in 1.2 it now tells you that it is incorrect.\n\nYou're probably looking for:\n\n```\nPUT /test \n{\n \"mappings\": {\n \"_default_\": {\n \"_all\": {\n \"enabled\": false\n }\n }\n }\n}\n```\n",
"created_at": "2014-05-24T10:52:40Z"
},
{
"body": "Make sense. Thanks @clintongormley !\n",
"created_at": "2014-05-27T17:53:28Z"
},
{
"body": "Are you sure? According to http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-root-object-type.html, it says:\n\n> The root object mapping is an object type mapping that maps the root object (the type itself). On top \n> of all the different mappings that can be set using the object type mapping, it allows for additional, \n> type level mapping definitions.\n\nand in http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-object-type.html, it says:\n\n> include_in_all can be set on the object type level. When set, it propagates down to all the inner \n> mappings defined within the object that do no explicitly set it.\n\nGreets, Sebastian\n",
"created_at": "2014-05-28T11:58:57Z"
},
{
"body": "@skurfuerst \n\nYou're absolutely right - this is a regression. /cc @brwe \n",
"created_at": "2014-05-30T11:15:14Z"
},
{
"body": "Yes, opened pull request #6353\n",
"created_at": "2014-05-30T13:26:58Z"
},
{
"body": "@brwe @brwe For ES 1.2.0, is there any workaround to this issue or any impact of this issue?\n",
"created_at": "2014-05-31T03:19:24Z"
},
{
"body": "You could create a dynamic template for the types (example below). \"include_in_all\" will then not be set in the root object but still be applied to all fields and objects. \n\nAs for the impact, if \"include_in_all\" is already set in the root type and you upgrade to 1.2, a MapperParsingException will be thrown on startup. The next time that the type mapping is updated, the \"include_at_all\" setting will be removed. \n\nUnfortunately, I found that a side effect seems to be that it will not only be removed from the root object, but also from fields. I'll have to look into this further.\n\n\"include_in_all\" via \"dynamic_template\":\n\n```\nPUT test\n{\n \"mappings\": {\n \"_default_\": {\n \"dynamic_templates\": [\n {\n \"include_all\": {\n \"match\": \"*\",\n \"mapping\": {\n \"include_in_all\": \"true\"\n }\n }\n }\n ]\n }\n }\n}\nPOST test/cat/1\n{\n \"text\": \"text\"\n}\nGET test/_mapping\n\n```\n\nResult should be\n\n```\n{\n \"test\": {\n \"mappings\": {\n \"_default_\": {\n \"dynamic_templates\": [\n {\n \"include_all\": {\n \"mapping\": {\n \"include_in_all\": \"true\"\n },\n \"match\": \"*\"\n }\n }\n ],\n \"properties\": {}\n },\n \"cat\": {\n \"dynamic_templates\": [\n {\n \"include_all\": {\n \"mapping\": {\n \"include_in_all\": \"true\"\n },\n \"match\": \"*\"\n }\n }\n ],\n \"properties\": {\n \"text\": {\n \"type\": \"string\",\n \"include_in_all\": true\n }\n }\n }\n }\n }\n}\n```\n",
"created_at": "2014-06-01T17:28:26Z"
},
{
"body": "@brwe @clintongormley \n\nSeems this regression also has impact on the index template if the template defines 'include_in_all' under the type. It allows you to create the template but you're NOT able to create the index of the template in ES 1.2.0. There is the repro steps : \n\n[Step 1] create the following index template against ES 1.2.0 cluster\nPUT /_template/template_logs\n{\n \"order\": 0,\n \"template\": \"logs*\",\n \"settings\": {\n \"index.analysis.analyzer.keyword_analyzer.tokenizer\": \"keyword\"\n },\n \"mappings\": {\n \"LogMessage\": {\n \"include_in_all\": false,\n \"properties\": {\n \"activityId\": {\n \"analyzer\": \"keyword_analyzer\",\n \"type\": \"string\"\n }\n }\n }\n }\n}\n\n[Step 2] Try to index this the following document \nPUT /logs-monday/LogMessage/1\n{\n \"activityId\": \"fcfdab46-730a-4918-b5bd-1da9448608b1\"\n}\n\n[Result]\n{\n \"error\": \"RemoteTransportException[[ES-TEST-2-master][inet[/10.1.0.38:9300]][indices/create]]; nested: MapperParsingException[mapping [LogMessage]]; nested: MapperParsingException[Root type mapping not empty after parsing! Remaining fields: [include_in_all : false]]; \",\n \"status\": 400\n}\n\n[Expected] \nIndex 'logs-monday' is created and 1 document is indexed. \n\nTry the above steps against ES 1.1.1, it works. Any workaround?\n",
"created_at": "2014-06-02T07:00:48Z"
},
{
"body": "There is unfortunately no workaround. \n",
"created_at": "2014-06-02T15:45:13Z"
},
{
"body": "@brwe Thanks for reply! BTW, any ETA for ES 1.2.1 release?\n",
"created_at": "2014-06-03T18:59:23Z"
},
{
"body": "We released yesterday: http://www.elasticsearch.org/blog/elasticsearch-1-2-1-released/\nSorry for the late reply.\n",
"created_at": "2014-06-04T06:07:17Z"
},
{
"body": "@brwe Excellent, thank you!\n",
"created_at": "2014-06-04T16:25:05Z"
},
{
"body": "Your mapping is invalid. I'm assuming the intent of the above is to create the type `pseudo_doc` in any new index, which has the field `content`?\n\nIn this case, the `config/mappings/_default/pseudo_doc.json` file should look like this:\n\n```\n{\n \"properties\": {\n \"content\": {\n \"dynamic\": false,\n \"properties\": {\n \"author_id\": {\n \"type\": \"string\"\n },\n \"author_name\": {\n \"type\": \"string\"\n },\n \"content\": {\n \"type\": \"string\"\n },\n \"title\": {\n \"type\": \"string\"\n },\n \"urls\": {\n \"type\": \"string\"\n },\n \"destination_url\": {\n \"type\": \"string\"\n },\n \"publisher\": {\n \"type\": \"string\"\n }\n }\n }\n }\n}\n```\n\nAlso, I really recommend doing something like this with index templates, rather than with config files. It is much easier to manage such things via the API instead of via static config files.\n",
"created_at": "2014-06-06T10:12:56Z"
}
],
"number": 6304,
"title": "Mapping: MapperParsingException when create default mapping with 'include_in_all' nested "
} | {
"body": "include_in_all can also be set on type level (root object).\nThis fixes a regression introduced in #6093\n\ncloses #6304\n",
"number": 6353,
"review_comments": [],
"title": "Object and Type parsing: Fix include_in_all in type"
} | {
"commits": [
{
"message": "Object and Type parsing: Fix include_in_all in type\n\ninclude_in_all can also be set on type level (root object).\nThis fixes a regression introduced in #6093\n\ncloses #6304"
}
],
"files": [
{
"diff": "@@ -211,15 +211,16 @@ protected static boolean parseObjectOrDocumentTypeProperties(String fieldName, O\n parseProperties(builder, (Map<String, Object>) fieldNode, parserContext);\n }\n return true;\n+ } else if (fieldName.equals(\"include_in_all\")) {\n+ builder.includeInAll(nodeBooleanValue(fieldNode));\n+ return true;\n }\n return false;\n }\n \n protected static void parseObjectProperties(String name, String fieldName, Object fieldNode, ObjectMapper.Builder builder) {\n if (fieldName.equals(\"path\")) {\n builder.pathType(parsePathType(name, fieldNode.toString()));\n- } else if (fieldName.equals(\"include_in_all\")) {\n- builder.includeInAll(nodeBooleanValue(fieldNode));\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/mapper/object/ObjectMapper.java",
"status": "modified"
},
{
"diff": "@@ -413,6 +413,13 @@ public void testRootMappersStillWorking() {\n rootTypes.put(IndexFieldMapper.NAME, \"{\\\"enabled\\\" : true}\");\n rootTypes.put(SourceFieldMapper.NAME, \"{\\\"enabled\\\" : true}\");\n rootTypes.put(TypeFieldMapper.NAME, \"{\\\"store\\\" : true}\");\n+ rootTypes.put(\"include_in_all\", \"true\");\n+ rootTypes.put(\"index_analyzer\", \"\\\"standard\\\"\");\n+ rootTypes.put(\"search_analyzer\", \"\\\"standard\\\"\");\n+ rootTypes.put(\"analyzer\", \"\\\"standard\\\"\");\n+ rootTypes.put(\"dynamic_date_formats\", \"[\\\"yyyy-MM-dd\\\", \\\"dd-MM-yyyy\\\"]\");\n+ rootTypes.put(\"numeric_detection\", \"true\");\n+ rootTypes.put(\"dynamic_templates\", \"[]\");\n for (String key : rootTypes.keySet()) {\n mapping += \"\\\"\" + key+ \"\\\"\" + \":\" + rootTypes.get(key) + \",\\n\";\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/SimpleAllMapperTests.java",
"status": "modified"
}
]
} |
{
"body": "Steps to reproduce:\n\n1) Restart ES\n\n2) Run this (which returns an error as expected):\n\n```\nGET _search/template\n{\n \"template\": {\n \"query\": { \"match_all\": {}},\n \"size\": \"{{my_size}}\"\n }\n}\n```\n\n3) Then run this (which still returns an error - not expected):\n\n```\nGET _search/template\n{\n \"template\": {\n \"query\": { \"match_all\": {}},\n \"size\": \"{{my_size}}\"\n },\n \"params\": {\n \"my_size\": 1\n }\n}\n```\n",
"comments": [
{
"body": "I can reproduce this... I will look \n",
"created_at": "2014-05-27T16:01:35Z"
},
{
"body": "If you disable the cache this behavior doesn't occur.\nBrian\n\nOn Tue, May 27, 2014 at 5:02 PM, Simon Willnauer\nnotifications@github.comwrote:\n\n> I can reproduce this... I will look\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/6318#issuecomment-44296417\n> .\n",
"created_at": "2014-05-27T16:46:22Z"
}
],
"number": 6318,
"title": "Search: Search template not replacing parameter after initial failure in parameter substitution"
} | {
"body": "Mustache extracts the key/value pairs for parameter substitution from\nobjects and maps but it's decided on the first execution. We need to\nmake sure if the params are null we pass an empty map to ensure we\nbind the map based extractor\n\nCloses #6318\n",
"number": 6326,
"review_comments": [
{
"body": "why this opening bracket here and the closing one in line 186? Apart from that LGTM\n",
"created_at": "2014-05-28T10:34:47Z"
},
{
"body": "because I like to use this to structure things in tests.\n",
"created_at": "2014-05-28T11:05:19Z"
}
],
"title": "Ensure internal scope extrators are always operating on a Map"
} | {
"commits": [
{
"message": "Mustache: Ensure internal scope extrators are always operating on a Map\n\nMustache extracts the key/value pairs for parameter substitution from\nobjects and maps but it's decided on the first execution. We need to\nmake sure if the params are null we pass an empty map to ensure we\nbind the map based extractor\n\nCloses #6318"
}
],
"files": [
{
"diff": "@@ -33,6 +33,7 @@\n \n import java.io.IOException;\n import java.lang.ref.SoftReference;\n+import java.util.Collections;\n import java.util.Map;\n \n /**\n@@ -163,7 +164,7 @@ private class MustacheExecutableScript implements ExecutableScript {\n public MustacheExecutableScript(Mustache mustache,\n Map<String, Object> vars) {\n this.mustache = mustache;\n- this.vars = vars;\n+ this.vars = vars == null ? Collections.EMPTY_MAP : vars;\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/script/mustache/MustacheScriptEngineService.java",
"status": "modified"
},
{
"diff": "@@ -168,6 +168,28 @@ public void testSearchRequestTemplateSource() throws Exception {\n assertHitCount(searchResponse, 2);\n }\n \n+ @Test\n+ // Releates to #6318\n+ public void testSearchRequestFail() throws Exception {\n+ SearchRequest searchRequest = new SearchRequest();\n+ searchRequest.indices(\"_all\");\n+ try {\n+ String query = \"{ \\\"template\\\" : { \\\"query\\\": {\\\"match_all\\\": {}}, \\\"size\\\" : \\\"{{my_size}}\\\" } }\";\n+ BytesReference bytesRef = new BytesArray(query);\n+ searchRequest.templateSource(bytesRef, false);\n+ client().search(searchRequest).get();\n+ fail(\"expected exception\");\n+ } catch (Throwable ex) {\n+ // expected - no params\n+ }\n+ String query = \"{ \\\"template\\\" : { \\\"query\\\": {\\\"match_all\\\": {}}, \\\"size\\\" : \\\"{{my_size}}\\\" }, \\\"params\\\" : { \\\"my_size\\\": 1 } }\";\n+ BytesReference bytesRef = new BytesArray(query);\n+ searchRequest.templateSource(bytesRef, false);\n+\n+ SearchResponse searchResponse = client().search(searchRequest).get();\n+ assertThat(searchResponse.getHits().hits().length, equalTo(1));\n+ }\n+\n @Test\n public void testThatParametersCanBeSet() throws Exception {\n index(\"test\", \"type\", \"1\", jsonBuilder().startObject().field(\"theField\", \"foo\").endObject());",
"filename": "src/test/java/org/elasticsearch/index/query/TemplateQueryTest.java",
"status": "modified"
}
]
} |
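The one-line fix in the record above works because Mustache decides its value-extraction strategy from the first scope object it sees; handing it null (a request without "params") can leave the map-based extractor unbound for later executions. A minimal sketch of the defensive normalization, with TemplateEngine as a stand-in interface rather than the real Mustache API:

```java
import java.util.Collections;
import java.util.Map;

class TemplateRunner {
    // Stand-in for the template engine; only the shape of the call matters here.
    interface TemplateEngine {
        String execute(String template, Map<String, Object> scope);
    }

    private final TemplateEngine engine;

    TemplateRunner(TemplateEngine engine) {
        this.engine = engine;
    }

    String render(String template, Map<String, Object> params) {
        // Never let null params reach the engine: always bind a Map-based scope,
        // even when the request carried no "params" section.
        Map<String, Object> scope = params == null ? Collections.<String, Object>emptyMap() : params;
        return engine.execute(template, scope);
    }
}
```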
{
"body": "Hi,\n\nWe are running an elasticsearch 1.1.1 6 node cluster with 256GB of ram, and using 96GB JVM heap sizes. I've noticed that when I set the filter cache size to 32GB or over with this command:\n\n```\ncurl -XPUT \"http://localhost:9200/_cluster/settings\" -d'\n{\n \"transient\" : {\n \"indices.cache.filter.size\" : \"50%\"\n }\n}'\n```\n\nThe field cache size keeps growing above and beyond the indicated limit. The relevant node stats show that the filter cache size is about 69GB in size, which is over the configured limit of 48GB\n\n```\n\"filter_cache\" : {\n \"memory_size_in_bytes\" : 74550217274,\n \"evictions\" : 8665179\n},\n```\n\nI've enable debug logging on the node itself and it looks like the cache itself is getting created with the correct values:\n\n```\n[2014-05-21 00:31:57,215][DEBUG][indices.cache.filter ] [ess02-006] using [node] weighted filter cache with size [50%], actual_size [47.9gb], expire [null], clean_interval [1m]\n```\n\nWhats strange is that when I set the limit to 31.9GB, the limit is enforced, which leads me to believe there is some sort of overflow going on.\n\nThanks,\nDaniel\n",
"comments": [
{
"body": "Hi,\nI dug a little deeper into the caching logic, and I think I have found the root cause. The class `IndicesFilterCache` sets `concurrencyLevel` to a hardcoded 16:\n\n``` java\nprivate void buildCache() {\n CacheBuilder<WeightedFilterCache.FilterCacheKey, DocIdSet> cacheBuilder = CacheBuilder.newBuilder()\n .removalListener(this)\n .maximumWeight(sizeInBytes).weigher(new WeightedFilterCache.FilterCacheValueWeigher());\n\n // defaults to 4, but this is a busy map for all indices, increase it a bit\n cacheBuilder.concurrencyLevel(16);\n\n if (expire != null) {\n cacheBuilder.expireAfterAccess(expire.millis(), TimeUnit.MILLISECONDS);\n }\n\n cache = cacheBuilder.build();\n}\n```\n\nhttps://github.com/elasticsearch/elasticsearch/blob/9ed34b5a9e9769b1264bf04d9b9a674794515bc6/src/main/java/org/elasticsearch/indices/cache/filter/IndicesFilterCache.java#L116\n\nIn the Guava libraries, the eviction code is as follows:\n\n``` java\nvoid evictEntries() {\n if (!map.evictsBySize()) {\n return;\n }\n\n drainRecencyQueue();\n while (totalWeight > maxSegmentWeight) {\n ReferenceEntry<K, V> e = getNextEvictable();\n if (!removeEntry(e, e.getHash(), RemovalCause.SIZE)) {\n throw new AssertionError();\n }\n }\n}\n```\n\nhttps://code.google.com/p/guava-libraries/source/browse/guava/src/com/google/common/cache/LocalCache.java#2659\n\nSince `totalWeight` is an `int` and `maxSegmentWeight` is a `long` set to `maxWeight / concurrencyLevel`, when `maxWeight` is 32GB or above, then the value of `maxSegmentWeight` will be set to above the maximum value of `int` and the check \n\n``` java\nwhile (totalWeight > maxSegmentWeight) {\n```\n\nwill always fail.\n",
"created_at": "2014-05-21T18:54:46Z"
},
{
"body": "Wow, good catch! I think it would make sense to file a bug to Guava?\n",
"created_at": "2014-05-22T00:22:51Z"
},
{
"body": "> Wow, good catch! I think it would make sense to file a bug to Guava?\n\nIndeed!\n\nI'd file that with Guava but also clamp the size of the cache in Elasticsearch to 32GB - 1 for the time being.\n\nAs an aside I imagine 96GB heaps cause super long pause times on hot spot.\n",
"created_at": "2014-05-22T12:21:02Z"
},
{
"body": "+1\n",
"created_at": "2014-05-22T12:21:57Z"
},
{
"body": "I've got the code open and have a few free moments so I can work on it if no one else wants it.\n",
"created_at": "2014-05-22T12:22:53Z"
},
{
"body": "That works for me, feel free to ping me when it's ready and you want a review.\n",
"created_at": "2014-05-22T12:23:58Z"
},
{
"body": "> Wow, good catch! I think it would make sense to file a bug to Guava?\n\nHuge ++!. @danp60 when you file the bug in guava, can you link back to it here?\n",
"created_at": "2014-05-22T12:26:44Z"
},
{
"body": "I imagine you've already realized it but the work around is to force the cache size under 32GB.\n",
"created_at": "2014-05-22T12:30:33Z"
},
{
"body": "Indeed. I think that's not too bad a workaround though since I would expect such a large filter cache to be quite wasteful compared to leaving the memory to the operating system so that it can do a better job with the filesystem cache.\n",
"created_at": "2014-05-22T12:32:29Z"
},
{
"body": "@kimchy I've filed the guava bug here: https://code.google.com/p/guava-libraries/issues/detail?id=1761&colspec=ID%20Type%20Status%20Package%20Summary\n",
"created_at": "2014-05-22T17:52:03Z"
},
{
"body": "@danp60 Thanks!\n",
"created_at": "2014-05-22T18:30:13Z"
},
{
"body": "The bug has been [fixed upstream](https://code.google.com/p/guava-libraries/issues/detail?id=1761).\n",
"created_at": "2014-05-28T11:01:13Z"
}
],
"number": 6268,
"title": "Internal: Filter cache size limit not honored for 32GB or over"
} | {
"body": "Guava's caches have overflow issues around 32GB with our default segment\ncount of 16 and weight of 1 unit per byte. We give them 100MB of headroom\nso 31.9GB.\n\nThis limits the sizes of both the field data and filter caches, the two\nlarge guava caches.\n\nCloses #6268\n",
"number": 6286,
"review_comments": [
{
"body": "I think you should store `sizeInBytes` in a local variable first so that no other thread can ever see a value that is greater than the maximum size?\n",
"created_at": "2014-05-22T16:47:51Z"
},
{
"body": "Good idea.\n",
"created_at": "2014-05-22T16:49:54Z"
},
{
"body": "Could the contant name be a bit more specific? eg. `MAX_GUAVA_CACHE_SIZE`?\n",
"created_at": "2014-05-22T16:53:26Z"
},
{
"body": "I had called it MAX_CACHE_SIZE then thought I'd make it MAX_GUAVA_CACHE_SIZE but my fingers didn't agree..... Fixing.\n",
"created_at": "2014-05-22T16:54:52Z"
},
{
"body": "Done.\n",
"created_at": "2014-05-22T16:56:03Z"
},
{
"body": "Done.\n",
"created_at": "2014-05-22T16:56:10Z"
},
{
"body": "I assume you wanted to put this assignment under the `if` statement. But I don't think the size as a String should be updated: it is used in the ApplySettings class in order to know whether the value has been updated, and when it has been updated, the cache needs to be rebuilt. If we set the size here, I'm afraid we would rebuild the cache everytime new settings would be applied, even if the value didn't change?\n",
"created_at": "2014-05-22T22:04:22Z"
},
{
"body": "Yes to all that and amending.\n",
"created_at": "2014-05-22T22:08:54Z"
}
],
"title": "Limit guava caches to 31.9GB"
} | {
"commits": [
{
"message": "Limit guava caches to 31.9GB\n\nGuava's caches have overflow issues around 32GB with our default segment\ncount of 16 and weight of 1 unit per byte. We give them 100MB of headroom\nso 31.9GB.\n\nThis limits the sizes of both the field data and filter caches, the two\nlarge guava caches.\n\nCloses #6268"
}
],
"files": [
{
"diff": "@@ -34,6 +34,14 @@\n *\n */\n public class ByteSizeValue implements Serializable, Streamable {\n+ /**\n+ * Largest size possible for Guava caches to prevent overflow. Guava's\n+ * caches use integers to track weight per segment and we always 16 segments\n+ * so caches of 32GB would always overflow that integer and they'd never be\n+ * evicted by size. We set this to 31.9GB leaving 100MB of headroom to\n+ * prevent overflow.\n+ */\n+ public static final ByteSizeValue MAX_GUAVA_CACHE_SIZE = new ByteSizeValue(32 * ByteSizeUnit.C3 - 100 * ByteSizeUnit.C2);\n \n private long size;\n ",
"filename": "src/main/java/org/elasticsearch/common/unit/ByteSizeValue.java",
"status": "modified"
},
{
"diff": "@@ -123,7 +123,16 @@ private void buildCache() {\n }\n \n private void computeSizeInBytes() {\n- this.sizeInBytes = MemorySizeValue.parseBytesSizeValueOrHeapRatio(size).bytes();\n+ long sizeInBytes = MemorySizeValue.parseBytesSizeValueOrHeapRatio(size).bytes();\n+ if (sizeInBytes > ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes()) {\n+ logger.warn(\"reducing requested filter cache size of [{}] to the maximum allowed size of [{}]\", new ByteSizeValue(sizeInBytes),\n+ ByteSizeValue.MAX_GUAVA_CACHE_SIZE);\n+ sizeInBytes = ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes();\n+ // Even though it feels wrong for size and sizeInBytes to get out of\n+ // sync we don't update size here because it might cause the cache\n+ // to be rebuilt every time new settings are applied.\n+ }\n+ this.sizeInBytes = sizeInBytes;\n }\n \n public void addReaderKeyToClean(Object readerKey) {",
"filename": "src/main/java/org/elasticsearch/indices/cache/filter/IndicesFilterCache.java",
"status": "modified"
},
{
"diff": "@@ -55,8 +55,14 @@ public class IndicesFieldDataCache extends AbstractComponent implements RemovalL\n public IndicesFieldDataCache(Settings settings, IndicesFieldDataCacheListener indicesFieldDataCacheListener) {\n super(settings);\n this.indicesFieldDataCacheListener = indicesFieldDataCacheListener;\n- final String size = componentSettings.get(\"size\", \"-1\");\n- final long sizeInBytes = componentSettings.getAsMemory(\"size\", \"-1\").bytes();\n+ String size = componentSettings.get(\"size\", \"-1\");\n+ long sizeInBytes = componentSettings.getAsMemory(\"size\", \"-1\").bytes();\n+ if (sizeInBytes > ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes()) {\n+ logger.warn(\"reducing requested field data cache size of [{}] to the maximum allowed size of [{}]\", new ByteSizeValue(sizeInBytes),\n+ ByteSizeValue.MAX_GUAVA_CACHE_SIZE);\n+ sizeInBytes = ByteSizeValue.MAX_GUAVA_CACHE_SIZE.bytes();\n+ size = ByteSizeValue.MAX_GUAVA_CACHE_SIZE.toString();\n+ }\n final TimeValue expire = componentSettings.getAsTime(\"expire\", null);\n CacheBuilder<Key, RamUsage> cacheBuilder = CacheBuilder.newBuilder()\n .removalListener(this);",
"filename": "src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,95 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import com.google.common.cache.Cache;\n+import com.google.common.cache.CacheBuilder;\n+import com.google.common.cache.Weigher;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import static org.hamcrest.Matchers.*;\n+\n+/**\n+ * Asserts that Guava's caches can get stuck in an overflow state where they\n+ * never clear themselves based on their \"weight\" policy if the weight grows\n+ * beyond MAX_INT. If the noEvictionIf* methods start failing after upgrading\n+ * Guava then the problem with Guava's caches can probably be considered fixed\n+ * and {@code ByteSizeValue#MAX_GUAVA_CACHE_SIZE} can likely be removed.\n+ */\n+public class GuavaCacheOverflowTest extends ElasticsearchTestCase {\n+ private final int tenMeg = ByteSizeValue.parseBytesSizeValue(\"10MB\").bytesAsInt();\n+\n+ private Cache<Integer, Object> cache;\n+\n+ @Test\n+ public void noEvictionIfWeightMaxWeightIs32GB() {\n+ checkNoEviction(\"32GB\");\n+ }\n+\n+ @Test\n+ public void noEvictionIfWeightMaxWeightIsGreaterThan32GB() {\n+ checkNoEviction(between(33, 50) + \"GB\");\n+ }\n+\n+ @Test\n+ public void evictionIfWeightSlowlyGoesOverMaxWeight() {\n+ buildCache(\"30GB\");\n+ // Add about 100GB of weight to the cache\n+ int entries = 10240;\n+ fillCache(entries);\n+\n+ // And as expected, some are purged.\n+ int missing = 0;\n+ for (int i = 0; i < 31; i++) {\n+ if (cache.getIfPresent(i + tenMeg) == null) {\n+ missing++;\n+ }\n+ }\n+ assertThat(missing, both(greaterThan(0)).and(lessThan(entries)));\n+ }\n+\n+ private void buildCache(String size) {\n+ cache = CacheBuilder.newBuilder().concurrencyLevel(16).maximumWeight(ByteSizeValue.parseBytesSizeValue(size).bytes())\n+ .weigher(new Weigher<Integer, Object>() {\n+ @Override\n+ public int weigh(Integer key, Object value) {\n+ return key;\n+ }\n+ }).build();\n+ }\n+\n+ private void fillCache(int entries) {\n+ for (int i = 0; i < entries; i++) {\n+ cache.put(i + tenMeg, i);\n+ }\n+ }\n+\n+ private void checkNoEviction(String size) {\n+ buildCache(size);\n+ // Adds ~100GB worth of weight to the cache\n+ fillCache(10240);\n+ // But nothing has been purged!\n+ for (int i = 0; i < 10000; i++) {\n+ assertNotNull(cache.getIfPresent(i + tenMeg));\n+ }\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/common/util/GuavaCacheOverflowTest.java",
"status": "added"
}
]
} |
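As a brief aside on the row above: the IndicesFieldDataCache change simply clamps the requested size before handing it to Guava, because weigher-based eviction misbehaves once the accumulated weight overflows. Below is a minimal sketch of that clamping pattern, assuming a stand-in `MAX_GUAVA_CACHE_SIZE_BYTES` constant and a byte-array weigher; it is not the Elasticsearch code itself.

``` java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public final class ClampedWeightCacheSketch {

    // Stand-in for ByteSizeValue.MAX_GUAVA_CACHE_SIZE (assumption): 32 GB.
    private static final long MAX_GUAVA_CACHE_SIZE_BYTES = 32L * 1024 * 1024 * 1024;

    /** Builds a byte-weighted cache, clamping the requested size to the Guava-safe maximum. */
    static Cache<String, byte[]> build(long requestedSizeInBytes) {
        long sizeInBytes = requestedSizeInBytes;
        if (sizeInBytes > MAX_GUAVA_CACHE_SIZE_BYTES) {
            // Same idea as the diff above: fall back to the maximum allowed size.
            sizeInBytes = MAX_GUAVA_CACHE_SIZE_BYTES;
        }
        return CacheBuilder.newBuilder()
                .maximumWeight(sizeInBytes)
                .weigher(new Weigher<String, byte[]>() {
                    @Override
                    public int weigh(String key, byte[] value) {
                        return value.length; // weigh each entry by its payload size
                    }
                })
                .build();
    }

    private ClampedWeightCacheSketch() {}
}
```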
{
"body": "Attempt to upload polygon at https://gist.github.com/anonymous/7f1bb6d7e9cd72f5977c\nfails with \norg.elasticsearch.index.mapper.MapperParsingException: failed to parse [geometry]\n...\nCaused by: java.lang.ArrayIndexOutOfBoundsException: -1\n\nRepro\n\ncurl -XDELETE 'http://localhost:9200/test'\n\ncurl -XPOST 'http://localhost:9200/test' -d '{ \n \"mappings\":{ \n \"test\":{ \n \"properties\":{ \n \"geometry\":{ \n \"type\":\"geo_shape\", \n \"tree\":\"quadtree\", \n \"tree_levels\":14, \n \"distance_error_pct\":0.0 \n } \n } \n } \n } \n}'\n\ncurl -XPOST 'http://localhost:9200/test/test/1' -d '{ \n \"geometry\": { \n \"type\": \"Polygon\", \n \"coordinates\": [[[-186,0],[-176,0],[-176,3],[-183,3],[-183,5],[-176,5],[-176,8],[-186,8],[-186,0]],[[-185,1],[-181,1],[-181,2],[-184,2],[-184,6],[-178,6],[-178,7],[-185,7],[-185,1]],[[-179,1],[-177,1],[-177,2],[-179,2],[-179,1]],[[-180,0],[-180,-90],[-180,90],[-180,0]]] \n } \n}'\n\nAdditionally, there is a unit test at:\nhttps://github.com/marcuswr/elasticsearch-dateline/commit/cbf9db12615c55ba8a8801aa3eaa3704ac2943c6\n",
"comments": [],
"number": 6179,
"title": "Geo:Valid Polygon crossing dateline fails to parse"
} | {
"body": "If a polygon is constructed which overlaps the date line but has a hole which lies entirely one to one side of the date line, JTS error saying that the hole is not within the bounds of the polygon because the code which splits the polygon either side of the date line does not add the hole to the correct component of the final set of polygons. The fix ensures this selection happens correctly.\n\nCloses #6179\n",
"number": 6282,
"review_comments": [
{
"body": "Missing `@Test` annotation here?\n",
"created_at": "2014-05-23T07:31:38Z"
},
{
"body": "nitpicking here... sorry :)\n",
"created_at": "2014-05-23T07:32:00Z"
},
{
"body": "I think the framework supports both: method names that start with `test` or `@Test` annotations.\n",
"created_at": "2014-05-23T08:29:19Z"
},
{
"body": "Yeah it did seem to be running the test without the annotation but I have added the annotation anyway for consistency\n",
"created_at": "2014-05-23T08:31:23Z"
},
{
"body": "Tricky! Maybe we should run the binary search on a copy to not mutate the hole?\n",
"created_at": "2014-05-30T10:04:32Z"
}
],
"title": "Issue with polygons near date line"
} | {
"commits": [],
"files": []
} |
{
"body": "The nested queries, filters and aggregations expect that filters produce `FixedBitSet` instances. However, if the filter cache is disabled, you might get an index-based `DocsEnum` directly, so the set of documents needs to be loaded into a `FixedBitSet` before running the query.\n",
"comments": [],
"number": 6279,
"title": "Nested: queries/filters/aggregations expect FixedBitSets, yet it isn't the case with NoneFilterCache"
} | {
"body": "\"Randomize all the things\"\n\nRelates to #6278 and #6279\n",
"number": 6280,
"review_comments": [],
"title": "Test: Randomly disable the filter cache."
} | {
"commits": [
{
"message": "Aggregations: Fix ReverseNestedAggregator to compute the parent document correctly.\n\nClose #6278"
},
{
"message": "Nested: Make sure queries/filters/aggs get a FixedBitSet when they expect one.\n\nClose #6279"
},
{
"message": "[TESTS] Randomly disable the filter cache."
}
],
"files": [
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter;\n import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n import org.elasticsearch.common.Strings;\n@@ -150,6 +151,9 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n //}\n parentFilter = parseContext.cacheFilter(parentFilter, null);\n }\n+ // if the filter cache is disabled, then we still have a filter that is not cached while ToParentBlockJoinQuery\n+ // expects FixedBitSet instances\n+ parentFilter = new FixedBitSetCachingWrapperFilter(parentFilter);\n \n Filter nestedFilter;\n if (join) {",
"filename": "src/main/java/org/elasticsearch/index/query/NestedFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter;\n import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n import org.apache.lucene.util.Bits;\n@@ -154,6 +155,9 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n //}\n parentFilter = parseContext.cacheFilter(parentFilter, null);\n }\n+ // if the filter cache is disabled, then we still have a filter that is not cached while ToParentBlockJoinQuery\n+ // expects FixedBitSet instances\n+ parentFilter = new FixedBitSetCachingWrapperFilter(parentFilter);\n \n ToParentBlockJoinQuery joinQuery = new ToParentBlockJoinQuery(query, parentFilter, scoreMode);\n joinQuery.setBoost(boost);",
"filename": "src/main/java/org/elasticsearch/index/query/NestedQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.FixedBitSet;\n+import org.elasticsearch.common.lucene.docset.DocIdSets;\n \n import java.io.IOException;\n import java.util.Collection;\n@@ -117,7 +118,13 @@ public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOExce\n return null;\n }\n if (!(parents instanceof FixedBitSet)) {\n- throw new IllegalStateException(\"parentFilter must return FixedBitSet; got \" + parents);\n+ if (parents.isCacheable()) {\n+ // the filter is cached, yet not with the right type\n+ throw new IllegalStateException(\"parentFilter must return FixedBitSet; got \" + parents);\n+ } else {\n+ // may happen if the filter cache type is none\n+ parents = DocIdSets.toFixedBitSet(parents.iterator(), context.reader().maxDoc());\n+ }\n }\n \n int firstParentDoc = parentScorer.nextDoc();",
"filename": "src/main/java/org/elasticsearch/index/search/nested/IncludeNestedDocsQuery.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.Filter;\n+import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter;\n import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.FixedBitSet;\n import org.elasticsearch.common.lucene.ReaderContextAware;\n@@ -82,6 +83,8 @@ public void setNextReader(AtomicReaderContext reader) {\n parentFilterNotCached = closestNestedAggregator.childFilter;\n }\n parentFilter = SearchContext.current().filterCache().cache(parentFilterNotCached);\n+ // if the filter cache is disabled, we still need to produce bit sets\n+ parentFilter = new FixedBitSetCachingWrapperFilter(parentFilter);\n }\n \n try {",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -105,7 +105,12 @@ public void collect(int childDoc, long bucketOrd) throws IOException {\n }\n \n // fast forward to retrieve the parentDoc this childDoc belongs to\n- int parentDoc = parentDocs.advance(childDoc);\n+ final int parentDoc;\n+ if (parentDocs.docID() < childDoc) {\n+ parentDoc = parentDocs.advance(childDoc);\n+ } else {\n+ parentDoc = parentDocs.docID();\n+ }\n assert childDoc <= parentDoc && parentDoc != DocIdSetIterator.NO_MORE_DOCS;\n if (bucketOrdToLastCollectedParentDoc.containsKey(bucketOrd)) {\n int lastCollectedParentDoc = bucketOrdToLastCollectedParentDoc.lget();",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -61,9 +61,9 @@ public void testClearCacheFilterKeys() {\n SearchResponse searchResponse = client().prepareSearch().setQuery(filteredQuery(matchAllQuery(), FilterBuilders.termFilter(\"field\", \"value\").cacheKey(\"test_key\"))).execute().actionGet();\n assertThat(searchResponse.getHits().getHits().length, equalTo(1));\n nodesStats = client().admin().cluster().prepareNodesStats().setIndices(true).execute().actionGet();\n- assertThat(nodesStats.getNodes()[0].getIndices().getFilterCache().getMemorySizeInBytes(), greaterThan(0l));\n+ assertThat(nodesStats.getNodes()[0].getIndices().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n indicesStats = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).execute().actionGet();\n- assertThat(indicesStats.getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(0l));\n+ assertThat(indicesStats.getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n \n client().admin().indices().prepareClearCache().setFilterKeys(\"test_key\").execute().actionGet();\n nodesStats = client().admin().cluster().prepareNodesStats().setIndices(true).execute().actionGet();\n@@ -152,13 +152,13 @@ public void testClearAllCaches() throws Exception {\n nodesStats = client().admin().cluster().prepareNodesStats().setIndices(true)\n .execute().actionGet();\n assertThat(nodesStats.getNodes()[0].getIndices().getFieldData().getMemorySizeInBytes(), greaterThan(0l));\n- assertThat(nodesStats.getNodes()[0].getIndices().getFilterCache().getMemorySizeInBytes(), greaterThan(0l));\n+ assertThat(nodesStats.getNodes()[0].getIndices().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n \n indicesStats = client().admin().indices().prepareStats(\"test\")\n .clear().setFieldData(true).setFilterCache(true)\n .execute().actionGet();\n assertThat(indicesStats.getTotal().getFieldData().getMemorySizeInBytes(), greaterThan(0l));\n- assertThat(indicesStats.getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(0l));\n+ assertThat(indicesStats.getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n \n client().admin().indices().prepareClearCache().execute().actionGet();\n Thread.sleep(100); // Make sure the filter cache entries have been removed...",
"filename": "src/test/java/org/elasticsearch/indices/cache/CacheTests.java",
"status": "modified"
},
{
"diff": "@@ -21,8 +21,6 @@\n \n import org.apache.lucene.search.Explanation;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n-import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;\n-import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n import org.elasticsearch.action.delete.DeleteResponse;\n import org.elasticsearch.action.get.GetResponse;",
"filename": "src/test/java/org/elasticsearch/nested/SimpleNestedTests.java",
"status": "modified"
},
{
"diff": "@@ -1951,7 +1951,7 @@ public void testValidateThatHasChildAndHasParentFilterAreNeverCached() throws Ex\n \n // filter cache should not contain any thing, b/c has_child and has_parent can't be cached.\n statsResponse = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).get();\n- assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(initialCacheSize));\n+ assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(initialCacheSize) : is(initialCacheSize));\n }\n \n // https://github.com/elasticsearch/elasticsearch/issues/5783",
"filename": "src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java",
"status": "modified"
},
{
"diff": "@@ -2212,7 +2212,7 @@ public void testRangeFilterNoCacheWithNow() throws Exception {\n // Now with rounding is used, so we must have something in filter cache\n statsResponse = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).get();\n long filtercacheSize = statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes();\n- assertThat(filtercacheSize, greaterThan(0l));\n+ assertThat(filtercacheSize, cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n \n searchResponse = client().prepareSearch(\"test\")\n .setQuery(QueryBuilders.filteredQuery(\n@@ -2226,7 +2226,7 @@ public void testRangeFilterNoCacheWithNow() throws Exception {\n \n // and because we use term filter, it is also added to filter cache, so it should contain more than before\n statsResponse = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).get();\n- assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(filtercacheSize));\n+ assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(filtercacheSize) : is(filtercacheSize));\n filtercacheSize = statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes();\n \n searchResponse = client().prepareSearch(\"test\")\n@@ -2241,7 +2241,7 @@ public void testRangeFilterNoCacheWithNow() throws Exception {\n \n // The range filter is now explicitly cached, so it now it is in the filter cache.\n statsResponse = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).get();\n- assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(filtercacheSize));\n+ assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(filtercacheSize) : is(filtercacheSize));\n }\n \n @Test",
"filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -124,7 +124,7 @@ public void testCustomScriptCache() throws Exception {\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo(1l));\n- assertThat(scriptCounter.get(), equalTo(3));\n+ assertThat(scriptCounter.get(), equalTo(cluster().hasFilterCache() ? 3 : 1));\n \n scriptCounter.set(0);\n logger.info(\"running script filter the second time\");\n@@ -133,7 +133,7 @@ public void testCustomScriptCache() throws Exception {\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo(1l));\n- assertThat(scriptCounter.get(), equalTo(0));\n+ assertThat(scriptCounter.get(), equalTo(cluster().hasFilterCache() ? 0 : 1));\n \n scriptCounter.set(0);\n logger.info(\"running script filter with new parameters\");\n@@ -142,7 +142,7 @@ public void testCustomScriptCache() throws Exception {\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo(1l));\n- assertThat(scriptCounter.get(), equalTo(3));\n+ assertThat(scriptCounter.get(), equalTo(cluster().hasFilterCache() ? 3 : 1));\n \n scriptCounter.set(0);\n logger.info(\"running script filter with same parameters\");\n@@ -151,6 +151,6 @@ public void testCustomScriptCache() throws Exception {\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo(3l));\n- assertThat(scriptCounter.get(), equalTo(0));\n+ assertThat(scriptCounter.get(), equalTo(cluster().hasFilterCache() ? 0 : 3));\n }\n }\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/search/scriptfilter/ScriptFilterSearchTests.java",
"status": "modified"
},
{
"diff": "@@ -111,4 +111,9 @@ public void close() {\n public Iterator<Client> iterator() {\n return Lists.newArrayList(client).iterator();\n }\n+\n+ @Override\n+ public boolean hasFilterCache() {\n+ return true; // default\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/test/ExternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -197,4 +197,9 @@ public void ensureEstimatedStats() {\n }\n }\n }\n+\n+ /**\n+ * Return whether or not this cluster can cache filters.\n+ */\n+ public abstract boolean hasFilterCache();\n }",
"filename": "src/test/java/org/elasticsearch/test/ImmutableTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -58,6 +58,9 @@\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.http.HttpServerTransport;\n+import org.elasticsearch.index.cache.filter.FilterCacheModule;\n+import org.elasticsearch.index.cache.filter.none.NoneFilterCache;\n+import org.elasticsearch.index.cache.filter.weighted.WeightedFilterCache;\n import org.elasticsearch.index.engine.IndexEngineModule;\n import org.elasticsearch.index.fielddata.ordinals.InternalGlobalOrdinalsBuilder;\n import org.elasticsearch.node.Node;\n@@ -160,6 +163,8 @@ public final class TestCluster extends ImmutableTestCluster {\n \n private final ExecutorService executor;\n \n+ private final boolean hasFilterCache;\n+\n public TestCluster(long clusterSeed, String clusterName) {\n this(clusterSeed, DEFAULT_MIN_NUM_DATA_NODES, DEFAULT_MAX_NUM_DATA_NODES, clusterName, NodeSettingsSource.EMPTY, DEFAULT_NUM_CLIENT_NODES, DEFAULT_ENABLE_RANDOM_BENCH_NODES);\n }\n@@ -228,6 +233,7 @@ public TestCluster(long clusterSeed, int minNumDataNodes, int maxNumDataNodes, S\n builder.put(\"plugins.\" + PluginsService.LOAD_PLUGIN_FROM_CLASSPATH, false);\n defaultSettings = builder.build();\n executor = EsExecutors.newCached(1, TimeUnit.MINUTES, EsExecutors.daemonThreadFactory(\"test_\" + clusterName));\n+ this.hasFilterCache = random.nextBoolean();\n }\n \n public String getClusterName() {\n@@ -243,7 +249,8 @@ private static boolean isLocalTransportConfigured() {\n \n private Settings getSettings(int nodeOrdinal, long nodeSeed, Settings others) {\n Builder builder = ImmutableSettings.settingsBuilder().put(defaultSettings)\n- .put(getRandomNodeSettings(nodeSeed));\n+ .put(getRandomNodeSettings(nodeSeed))\n+ .put(FilterCacheModule.FilterCacheSettings.FILTER_CACHE_TYPE, hasFilterCache() ? WeightedFilterCache.class : NoneFilterCache.class);\n Settings settings = nodeSettingsSource.settings(nodeOrdinal);\n if (settings != null) {\n if (settings.get(ClusterName.SETTING) != null) {\n@@ -1278,6 +1285,11 @@ public int numBenchNodes() {\n return benchNodeAndClients().size();\n }\n \n+ @Override\n+ public boolean hasFilterCache() {\n+ return hasFilterCache;\n+ }\n+\n private synchronized Collection<NodeAndClient> dataNodeAndClients() {\n return Collections2.filter(nodes.values(), new DataNodePredicate());\n }",
"filename": "src/test/java/org/elasticsearch/test/TestCluster.java",
"status": "modified"
},
{
"diff": "@@ -77,6 +77,14 @@ public void simpleValidateQuery() throws Exception {\n assertThat(client().admin().indices().prepareValidateQuery(\"test\").setQuery(QueryBuilders.queryString(\"foo:1 AND\")).execute().actionGet().isValid(), equalTo(false));\n }\n \n+ private static String filter(String uncachedFilter) {\n+ String filter = uncachedFilter;\n+ if (cluster().hasFilterCache()) {\n+ filter = \"cached(\" + filter + \")\";\n+ }\n+ return filter;\n+ }\n+\n @Test\n public void explainValidateQuery() throws Exception {\n createIndex(\"test\");\n@@ -110,27 +118,28 @@ public void explainValidateQuery() throws Exception {\n assertThat(response.getQueryExplanation().get(0).getError(), containsString(\"Failed to parse\"));\n assertThat(response.getQueryExplanation().get(0).getExplanation(), nullValue());\n \n- assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"filtered(ConstantScore(_uid:type1#1))->cache(_type:type1)\"));\n+ final String typeFilter = filter(\"_type:type1\");\n+ assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"filtered(ConstantScore(_uid:type1#1))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.idsQuery(\"type1\").addIds(\"1\").addIds(\"2\"),\n- equalTo(\"filtered(ConstantScore(_uid:type1#1 _uid:type1#2))->cache(_type:type1)\"));\n+ equalTo(\"filtered(ConstantScore(_uid:type1#1 _uid:type1#2))->\" + typeFilter));\n \n- assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"filtered(_all:foo)->cache(_type:type1)\"));\n+ assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"filtered(_all:foo)->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]) cache(baz:3))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->\" + filter(\"bar:[2 TO 2]\") + \" \" + filter(\"baz:3\") + \")->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\")\n )\n- ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->\" + filter(\"bar:[2 TO 2]\") + \")->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.matchAllQuery(),\n@@ -139,55 +148,55 @@ public void explainValidateQuery() throws Exception {\n .addPoint(30, -80)\n .addPoint(20, -90)\n .addPoint(40, -70) // closing polygon\n- ), equalTo(\"filtered(ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]])))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]])))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoBoundingBoxFilter(\"pin.location\")\n .topLeft(40, -80)\n .bottomRight(20, -70)\n- ), equalTo(\"filtered(ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0])))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0])))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, 
DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15m\").to(\"25m\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0)))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0)))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15miles\").to(\"25miles\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0)))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0)))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.andFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(filtered(foo:1)->+cache(bar:[2 TO 2]) +cache(baz:3))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->+\" + filter(\"bar:[2 TO 2]\") + \" +\" + filter(\"baz:3\") + \")->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.termsFilter(\"foo\", \"1\", \"2\", \"3\")),\n- equalTo(\"filtered(ConstantScore(cache(foo:1 foo:2 foo:3)))->cache(_type:type1)\"));\n+ equalTo(\"filtered(ConstantScore(\" + filter(\"foo:1 foo:2 foo:3\") + \"))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.notFilter(FilterBuilders.termFilter(\"foo\", \"bar\"))),\n- equalTo(\"filtered(ConstantScore(NotFilter(cache(foo:bar))))->cache(_type:type1)\"));\n+ equalTo(\"filtered(ConstantScore(NotFilter(\" + filter(\"foo:bar\") + \")))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.hasChildFilter(\n \"child-type\",\n QueryBuilders.matchQuery(\"foo\", \"1\")\n )\n- ), equalTo(\"filtered(filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->cache(_type:child-type))))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->\" + filter(\"_type:child-type\") + \")))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n 
QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.scriptFilter(\"true\")\n- ), equalTo(\"filtered(filtered(foo:1)->ScriptFilter(true))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->ScriptFilter(true))->\" + typeFilter));\n \n }\n ",
"filename": "src/test/java/org/elasticsearch/validate/SimpleValidateQueryTests.java",
"status": "modified"
}
]
} |
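A minimal sketch of the fallback the IncludeNestedDocsQuery change above relies on, assuming plain Lucene APIs: when the filter cache is disabled, the parent filter may return an ordinary iterator instead of a FixedBitSet, so its matches are copied into one before the block-join logic runs. This mirrors what a helper like DocIdSets.toFixedBitSet is expected to do; it is not the Elasticsearch implementation.

``` java
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.FixedBitSet;

import java.io.IOException;

public final class ToFixedBitSetSketch {

    /** Materializes every doc produced by the iterator into a FixedBitSet of size maxDoc. */
    static FixedBitSet toFixedBitSet(DocIdSetIterator iterator, int maxDoc) throws IOException {
        FixedBitSet bits = new FixedBitSet(maxDoc);
        if (iterator != null) {
            for (int doc = iterator.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = iterator.nextDoc()) {
                bits.set(doc); // record each matching (parent) document
            }
        }
        return bits;
    }

    private ToFixedBitSetSketch() {}
}
```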
{
"body": "In order to compute parent documents based on a child document, `ReverseNestedAggregator` does the following:\n\n``` java\nint parentDoc = parentDocs.advance(childDoc);\n```\n\nBut the behavior of `advance` is undefined when the target is less than or equal to the target document, which can happen if you have 2 matching child documents that have the same parent.\n\nThis works fine in most cases when the filter cache is enabled since `FixedBitSet` is permissive and would go the the next set bit after `childDoc`. But if the filter cache is disabled then you might run into trouble.\n",
"comments": [],
"number": 6278,
"title": "Aggregations: `ReverseNestedAggregator` does not compute parent documents correctly"
} | {
"body": "\"Randomize all the things\"\n\nRelates to #6278 and #6279\n",
"number": 6280,
"review_comments": [],
"title": "Test: Randomly disable the filter cache."
} | {
"commits": [
{
"message": "Aggregations: Fix ReverseNestedAggregator to compute the parent document correctly.\n\nClose #6278"
},
{
"message": "Nested: Make sure queries/filters/aggs get a FixedBitSet when they expect one.\n\nClose #6279"
},
{
"message": "[TESTS] Randomly disable the filter cache."
}
],
"files": [
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter;\n import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n import org.elasticsearch.common.Strings;\n@@ -150,6 +151,9 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n //}\n parentFilter = parseContext.cacheFilter(parentFilter, null);\n }\n+ // if the filter cache is disabled, then we still have a filter that is not cached while ToParentBlockJoinQuery\n+ // expects FixedBitSet instances\n+ parentFilter = new FixedBitSetCachingWrapperFilter(parentFilter);\n \n Filter nestedFilter;\n if (join) {",
"filename": "src/main/java/org/elasticsearch/index/query/NestedFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter;\n import org.apache.lucene.search.join.ScoreMode;\n import org.apache.lucene.search.join.ToParentBlockJoinQuery;\n import org.apache.lucene.util.Bits;\n@@ -154,6 +155,9 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n //}\n parentFilter = parseContext.cacheFilter(parentFilter, null);\n }\n+ // if the filter cache is disabled, then we still have a filter that is not cached while ToParentBlockJoinQuery\n+ // expects FixedBitSet instances\n+ parentFilter = new FixedBitSetCachingWrapperFilter(parentFilter);\n \n ToParentBlockJoinQuery joinQuery = new ToParentBlockJoinQuery(query, parentFilter, scoreMode);\n joinQuery.setBoost(boost);",
"filename": "src/main/java/org/elasticsearch/index/query/NestedQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.FixedBitSet;\n+import org.elasticsearch.common.lucene.docset.DocIdSets;\n \n import java.io.IOException;\n import java.util.Collection;\n@@ -117,7 +118,13 @@ public Scorer scorer(AtomicReaderContext context, Bits acceptDocs) throws IOExce\n return null;\n }\n if (!(parents instanceof FixedBitSet)) {\n- throw new IllegalStateException(\"parentFilter must return FixedBitSet; got \" + parents);\n+ if (parents.isCacheable()) {\n+ // the filter is cached, yet not with the right type\n+ throw new IllegalStateException(\"parentFilter must return FixedBitSet; got \" + parents);\n+ } else {\n+ // may happen if the filter cache type is none\n+ parents = DocIdSets.toFixedBitSet(parents.iterator(), context.reader().maxDoc());\n+ }\n }\n \n int firstParentDoc = parentScorer.nextDoc();",
"filename": "src/main/java/org/elasticsearch/index/search/nested/IncludeNestedDocsQuery.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.Filter;\n+import org.apache.lucene.search.join.FixedBitSetCachingWrapperFilter;\n import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.FixedBitSet;\n import org.elasticsearch.common.lucene.ReaderContextAware;\n@@ -82,6 +83,8 @@ public void setNextReader(AtomicReaderContext reader) {\n parentFilterNotCached = closestNestedAggregator.childFilter;\n }\n parentFilter = SearchContext.current().filterCache().cache(parentFilterNotCached);\n+ // if the filter cache is disabled, we still need to produce bit sets\n+ parentFilter = new FixedBitSetCachingWrapperFilter(parentFilter);\n }\n \n try {",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -105,7 +105,12 @@ public void collect(int childDoc, long bucketOrd) throws IOException {\n }\n \n // fast forward to retrieve the parentDoc this childDoc belongs to\n- int parentDoc = parentDocs.advance(childDoc);\n+ final int parentDoc;\n+ if (parentDocs.docID() < childDoc) {\n+ parentDoc = parentDocs.advance(childDoc);\n+ } else {\n+ parentDoc = parentDocs.docID();\n+ }\n assert childDoc <= parentDoc && parentDoc != DocIdSetIterator.NO_MORE_DOCS;\n if (bucketOrdToLastCollectedParentDoc.containsKey(bucketOrd)) {\n int lastCollectedParentDoc = bucketOrdToLastCollectedParentDoc.lget();",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java",
"status": "modified"
},
{
"diff": "@@ -61,9 +61,9 @@ public void testClearCacheFilterKeys() {\n SearchResponse searchResponse = client().prepareSearch().setQuery(filteredQuery(matchAllQuery(), FilterBuilders.termFilter(\"field\", \"value\").cacheKey(\"test_key\"))).execute().actionGet();\n assertThat(searchResponse.getHits().getHits().length, equalTo(1));\n nodesStats = client().admin().cluster().prepareNodesStats().setIndices(true).execute().actionGet();\n- assertThat(nodesStats.getNodes()[0].getIndices().getFilterCache().getMemorySizeInBytes(), greaterThan(0l));\n+ assertThat(nodesStats.getNodes()[0].getIndices().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n indicesStats = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).execute().actionGet();\n- assertThat(indicesStats.getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(0l));\n+ assertThat(indicesStats.getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n \n client().admin().indices().prepareClearCache().setFilterKeys(\"test_key\").execute().actionGet();\n nodesStats = client().admin().cluster().prepareNodesStats().setIndices(true).execute().actionGet();\n@@ -152,13 +152,13 @@ public void testClearAllCaches() throws Exception {\n nodesStats = client().admin().cluster().prepareNodesStats().setIndices(true)\n .execute().actionGet();\n assertThat(nodesStats.getNodes()[0].getIndices().getFieldData().getMemorySizeInBytes(), greaterThan(0l));\n- assertThat(nodesStats.getNodes()[0].getIndices().getFilterCache().getMemorySizeInBytes(), greaterThan(0l));\n+ assertThat(nodesStats.getNodes()[0].getIndices().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n \n indicesStats = client().admin().indices().prepareStats(\"test\")\n .clear().setFieldData(true).setFilterCache(true)\n .execute().actionGet();\n assertThat(indicesStats.getTotal().getFieldData().getMemorySizeInBytes(), greaterThan(0l));\n- assertThat(indicesStats.getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(0l));\n+ assertThat(indicesStats.getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n \n client().admin().indices().prepareClearCache().execute().actionGet();\n Thread.sleep(100); // Make sure the filter cache entries have been removed...",
"filename": "src/test/java/org/elasticsearch/indices/cache/CacheTests.java",
"status": "modified"
},
{
"diff": "@@ -21,8 +21,6 @@\n \n import org.apache.lucene.search.Explanation;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n-import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;\n-import org.elasticsearch.action.admin.indices.stats.CommonStatsFlags;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n import org.elasticsearch.action.delete.DeleteResponse;\n import org.elasticsearch.action.get.GetResponse;",
"filename": "src/test/java/org/elasticsearch/nested/SimpleNestedTests.java",
"status": "modified"
},
{
"diff": "@@ -1951,7 +1951,7 @@ public void testValidateThatHasChildAndHasParentFilterAreNeverCached() throws Ex\n \n // filter cache should not contain any thing, b/c has_child and has_parent can't be cached.\n statsResponse = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).get();\n- assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(initialCacheSize));\n+ assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(initialCacheSize) : is(initialCacheSize));\n }\n \n // https://github.com/elasticsearch/elasticsearch/issues/5783",
"filename": "src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java",
"status": "modified"
},
{
"diff": "@@ -2212,7 +2212,7 @@ public void testRangeFilterNoCacheWithNow() throws Exception {\n // Now with rounding is used, so we must have something in filter cache\n statsResponse = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).get();\n long filtercacheSize = statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes();\n- assertThat(filtercacheSize, greaterThan(0l));\n+ assertThat(filtercacheSize, cluster().hasFilterCache() ? greaterThan(0l) : is(0L));\n \n searchResponse = client().prepareSearch(\"test\")\n .setQuery(QueryBuilders.filteredQuery(\n@@ -2226,7 +2226,7 @@ public void testRangeFilterNoCacheWithNow() throws Exception {\n \n // and because we use term filter, it is also added to filter cache, so it should contain more than before\n statsResponse = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).get();\n- assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(filtercacheSize));\n+ assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(filtercacheSize) : is(filtercacheSize));\n filtercacheSize = statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes();\n \n searchResponse = client().prepareSearch(\"test\")\n@@ -2241,7 +2241,7 @@ public void testRangeFilterNoCacheWithNow() throws Exception {\n \n // The range filter is now explicitly cached, so it now it is in the filter cache.\n statsResponse = client().admin().indices().prepareStats(\"test\").clear().setFilterCache(true).get();\n- assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(filtercacheSize));\n+ assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), cluster().hasFilterCache() ? greaterThan(filtercacheSize) : is(filtercacheSize));\n }\n \n @Test",
"filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java",
"status": "modified"
},
{
"diff": "@@ -124,7 +124,7 @@ public void testCustomScriptCache() throws Exception {\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo(1l));\n- assertThat(scriptCounter.get(), equalTo(3));\n+ assertThat(scriptCounter.get(), equalTo(cluster().hasFilterCache() ? 3 : 1));\n \n scriptCounter.set(0);\n logger.info(\"running script filter the second time\");\n@@ -133,7 +133,7 @@ public void testCustomScriptCache() throws Exception {\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo(1l));\n- assertThat(scriptCounter.get(), equalTo(0));\n+ assertThat(scriptCounter.get(), equalTo(cluster().hasFilterCache() ? 0 : 1));\n \n scriptCounter.set(0);\n logger.info(\"running script filter with new parameters\");\n@@ -142,7 +142,7 @@ public void testCustomScriptCache() throws Exception {\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo(1l));\n- assertThat(scriptCounter.get(), equalTo(3));\n+ assertThat(scriptCounter.get(), equalTo(cluster().hasFilterCache() ? 3 : 1));\n \n scriptCounter.set(0);\n logger.info(\"running script filter with same parameters\");\n@@ -151,6 +151,6 @@ public void testCustomScriptCache() throws Exception {\n .execute().actionGet();\n \n assertThat(response.getHits().totalHits(), equalTo(3l));\n- assertThat(scriptCounter.get(), equalTo(0));\n+ assertThat(scriptCounter.get(), equalTo(cluster().hasFilterCache() ? 0 : 3));\n }\n }\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/search/scriptfilter/ScriptFilterSearchTests.java",
"status": "modified"
},
{
"diff": "@@ -111,4 +111,9 @@ public void close() {\n public Iterator<Client> iterator() {\n return Lists.newArrayList(client).iterator();\n }\n+\n+ @Override\n+ public boolean hasFilterCache() {\n+ return true; // default\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/test/ExternalTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -197,4 +197,9 @@ public void ensureEstimatedStats() {\n }\n }\n }\n+\n+ /**\n+ * Return whether or not this cluster can cache filters.\n+ */\n+ public abstract boolean hasFilterCache();\n }",
"filename": "src/test/java/org/elasticsearch/test/ImmutableTestCluster.java",
"status": "modified"
},
{
"diff": "@@ -58,6 +58,9 @@\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.env.NodeEnvironment;\n import org.elasticsearch.http.HttpServerTransport;\n+import org.elasticsearch.index.cache.filter.FilterCacheModule;\n+import org.elasticsearch.index.cache.filter.none.NoneFilterCache;\n+import org.elasticsearch.index.cache.filter.weighted.WeightedFilterCache;\n import org.elasticsearch.index.engine.IndexEngineModule;\n import org.elasticsearch.index.fielddata.ordinals.InternalGlobalOrdinalsBuilder;\n import org.elasticsearch.node.Node;\n@@ -160,6 +163,8 @@ public final class TestCluster extends ImmutableTestCluster {\n \n private final ExecutorService executor;\n \n+ private final boolean hasFilterCache;\n+\n public TestCluster(long clusterSeed, String clusterName) {\n this(clusterSeed, DEFAULT_MIN_NUM_DATA_NODES, DEFAULT_MAX_NUM_DATA_NODES, clusterName, NodeSettingsSource.EMPTY, DEFAULT_NUM_CLIENT_NODES, DEFAULT_ENABLE_RANDOM_BENCH_NODES);\n }\n@@ -228,6 +233,7 @@ public TestCluster(long clusterSeed, int minNumDataNodes, int maxNumDataNodes, S\n builder.put(\"plugins.\" + PluginsService.LOAD_PLUGIN_FROM_CLASSPATH, false);\n defaultSettings = builder.build();\n executor = EsExecutors.newCached(1, TimeUnit.MINUTES, EsExecutors.daemonThreadFactory(\"test_\" + clusterName));\n+ this.hasFilterCache = random.nextBoolean();\n }\n \n public String getClusterName() {\n@@ -243,7 +249,8 @@ private static boolean isLocalTransportConfigured() {\n \n private Settings getSettings(int nodeOrdinal, long nodeSeed, Settings others) {\n Builder builder = ImmutableSettings.settingsBuilder().put(defaultSettings)\n- .put(getRandomNodeSettings(nodeSeed));\n+ .put(getRandomNodeSettings(nodeSeed))\n+ .put(FilterCacheModule.FilterCacheSettings.FILTER_CACHE_TYPE, hasFilterCache() ? WeightedFilterCache.class : NoneFilterCache.class);\n Settings settings = nodeSettingsSource.settings(nodeOrdinal);\n if (settings != null) {\n if (settings.get(ClusterName.SETTING) != null) {\n@@ -1278,6 +1285,11 @@ public int numBenchNodes() {\n return benchNodeAndClients().size();\n }\n \n+ @Override\n+ public boolean hasFilterCache() {\n+ return hasFilterCache;\n+ }\n+\n private synchronized Collection<NodeAndClient> dataNodeAndClients() {\n return Collections2.filter(nodes.values(), new DataNodePredicate());\n }",
"filename": "src/test/java/org/elasticsearch/test/TestCluster.java",
"status": "modified"
},
{
"diff": "@@ -77,6 +77,14 @@ public void simpleValidateQuery() throws Exception {\n assertThat(client().admin().indices().prepareValidateQuery(\"test\").setQuery(QueryBuilders.queryString(\"foo:1 AND\")).execute().actionGet().isValid(), equalTo(false));\n }\n \n+ private static String filter(String uncachedFilter) {\n+ String filter = uncachedFilter;\n+ if (cluster().hasFilterCache()) {\n+ filter = \"cached(\" + filter + \")\";\n+ }\n+ return filter;\n+ }\n+\n @Test\n public void explainValidateQuery() throws Exception {\n createIndex(\"test\");\n@@ -110,27 +118,28 @@ public void explainValidateQuery() throws Exception {\n assertThat(response.getQueryExplanation().get(0).getError(), containsString(\"Failed to parse\"));\n assertThat(response.getQueryExplanation().get(0).getExplanation(), nullValue());\n \n- assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"filtered(ConstantScore(_uid:type1#1))->cache(_type:type1)\"));\n+ final String typeFilter = filter(\"_type:type1\");\n+ assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"filtered(ConstantScore(_uid:type1#1))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.idsQuery(\"type1\").addIds(\"1\").addIds(\"2\"),\n- equalTo(\"filtered(ConstantScore(_uid:type1#1 _uid:type1#2))->cache(_type:type1)\"));\n+ equalTo(\"filtered(ConstantScore(_uid:type1#1 _uid:type1#2))->\" + typeFilter));\n \n- assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"filtered(_all:foo)->cache(_type:type1)\"));\n+ assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"filtered(_all:foo)->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]) cache(baz:3))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->\" + filter(\"bar:[2 TO 2]\") + \" \" + filter(\"baz:3\") + \")->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\")\n )\n- ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->\" + filter(\"bar:[2 TO 2]\") + \")->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.matchAllQuery(),\n@@ -139,55 +148,55 @@ public void explainValidateQuery() throws Exception {\n .addPoint(30, -80)\n .addPoint(20, -90)\n .addPoint(40, -70) // closing polygon\n- ), equalTo(\"filtered(ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]])))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]])))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoBoundingBoxFilter(\"pin.location\")\n .topLeft(40, -80)\n .bottomRight(20, -70)\n- ), equalTo(\"filtered(ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0])))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0])))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, 
DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15m\").to(\"25m\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0)))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0)))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15miles\").to(\"25miles\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0)))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0)))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.andFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(filtered(foo:1)->+cache(bar:[2 TO 2]) +cache(baz:3))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->+\" + filter(\"bar:[2 TO 2]\") + \" +\" + filter(\"baz:3\") + \")->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.termsFilter(\"foo\", \"1\", \"2\", \"3\")),\n- equalTo(\"filtered(ConstantScore(cache(foo:1 foo:2 foo:3)))->cache(_type:type1)\"));\n+ equalTo(\"filtered(ConstantScore(\" + filter(\"foo:1 foo:2 foo:3\") + \"))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.notFilter(FilterBuilders.termFilter(\"foo\", \"bar\"))),\n- equalTo(\"filtered(ConstantScore(NotFilter(cache(foo:bar))))->cache(_type:type1)\"));\n+ equalTo(\"filtered(ConstantScore(NotFilter(\" + filter(\"foo:bar\") + \")))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.hasChildFilter(\n \"child-type\",\n QueryBuilders.matchQuery(\"foo\", \"1\")\n )\n- ), equalTo(\"filtered(filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->cache(_type:child-type))))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->\" + filter(\"_type:child-type\") + \")))->\" + typeFilter));\n \n assertExplanation(QueryBuilders.filteredQuery(\n 
QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.scriptFilter(\"true\")\n- ), equalTo(\"filtered(filtered(foo:1)->ScriptFilter(true))->cache(_type:type1)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->ScriptFilter(true))->\" + typeFilter));\n \n }\n ",
"filename": "src/test/java/org/elasticsearch/validate/SimpleValidateQueryTests.java",
"status": "modified"
}
]
} |
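A minimal sketch of the safe-advance pattern from the ReverseNestedAggregator fix above, assuming plain Lucene APIs: the contract leaves advance(target) undefined when the target is not ahead of the current doc, so the current position is reused in that case. Illustrative only; the helper name is an assumption.

``` java
import org.apache.lucene.search.DocIdSetIterator;

import java.io.IOException;

public final class SafeAdvanceSketch {

    /** Returns the first doc in {@code parentDocs} at or after {@code childDoc}. */
    static int parentFor(DocIdSetIterator parentDocs, int childDoc) throws IOException {
        if (parentDocs.docID() < childDoc) {
            return parentDocs.advance(childDoc); // safe: the target is ahead of the current doc
        }
        return parentDocs.docID(); // already positioned at (or past) the target parent
    }

    private SafeAdvanceSketch() {}
}
```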
{
"body": "The date format docs says you can separate different date formats using a double bar and that the first will be used to format any stored dates: [Date Format Docs](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-date-format.html)\n\nWhen using multiple date formats, however, the date_histogram aggregation throws \"UnsupportedOperationException[Printing not supported]\" which seems to be a Joda exception when a format doesn't have a printer. If I change the mapping to be a single format, the exception isn't thrown, so my guess is that the first format isn't parsed out to retrieve the printer.\n\nI'm using ES 1.1.1, though I've also observed this on 1.1.0. \n\nCreate the index:\n\n```\n➜ ~ curl localhost:9200/my_index -XPUT -d'{\"settings\":{},\"mappings\":{\"my_type\":{\"properties\":{\"my_date\":{\"type\":\"date\",\"format\":\"dateOptionalTime||mm-DD-yyyy\"}}}}}' \n\n{\"acknowledged\":true}% \n```\n\nAdd a document with a date:\n\n```\n➜ ~ curl localhost:9200/my_index/my_type/1 -XPOST -d '{\"my_date\":\"12-13-2014\"}'\n\n{\"_index\":\"my_index\",\"_type\":\"my_type\",\"_id\":\"1\",\"_version\":1,\"created\":true}% \n```\n\nPerform a date histogram facet (works):\n\n```\n➜ ~ curl localhost:9200/my_index/my_type/_search -XPOST -d '{\"facets\":{\"dates\":{\"date_histogram\":{\"field\":\"my_date\",\"interval\":\"day\"}}}}'\n\n{\"took\":2,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":1.0,\"hits\":[{\"_index\":\"my_index\",\"_type\":\"my_type\",\"_id\":\"1\",\"_score\":1.0, \"_source\" : {\"my_date\":\"12-13-2014\"}}]},\"facets\":{\"dates\":{\"_type\":\"date_histogram\",\"entries\":[{\"time\":1389571200000,\"count\":1}]}}}% \n```\n\nPerform an aggregation facet (fails with UnsupportedOperationException):\n\n```\n ➜ ~ curl localhost:9200/my_index/my_type/_search -XPOST -d '{\"aggs\":{\"dates\":{\"date_histogram\":{\"field\":\"my_date\",\"interval\":\"day\"}}}}'\n\n{\"error\":\"UnsupportedOperationException[Printing not supported]\",\"status\":500}% \n```\n\nChange the mapping to contain only a single date format:\n\n```\n➜ ~ curl localhost:9200/my_index/my_type/_mapping -XPUT -d '{\"my_type\":{\"properties\":{\"my_date\":{\"type\":\"date\",\"format\":\"dateOptionalTime\"}}}}'\n\n{\"acknowledged\":true}% \n```\n\nCheck that the mapping changed:\n\n```\n➜ ~ curl localhost:9200/my_index/my_type/_mapping \n\n{\"my_index\":{\"mappings\":{\"my_type\":{\"properties\":{\"my_date\":{\"type\":\"date\",\"format\":\"dateOptionalTime\"}}}}}}% \n```\n\nRun the original aggregation again, and see that it returns the same result as the facet:\n\n```\n➜ ~ curl localhost:9200/my_index/my_type/_search -XPOST -d '{\"aggs\":{\"dates\":{\"date_histogram\":{\"field\":\"my_date\",\"interval\":\"day\"}}}}'\n\n{\"took\":2,\"timed_out\":false,\"_shards\":{\"total\":5,\"successful\":5,\"failed\":0},\"hits\":{\"total\":1,\"max_score\":1.0,\"hits\":[{\"_index\":\"my_index\",\"_type\":\"my_type\",\"_id\":\"1\",\"_score\":1.0, \"_source\" : {\"my_date\":\"12-13-2014\"}}]},\"aggregations\":{\"dates\":{\"buckets\":[{\"key_as_string\":\"2014-01-13T00:00:00.000Z\",\"key\":1389571200000,\"doc_count\":1}]}}}% \n```\n",
"comments": [
{
"body": "Here's the stacktrace FYI:\n\n```\njava.lang.UnsupportedOperationException: Printing not supported\nat org.elasticsearch.common.joda.time.format.DateTimeFormatter.requirePrinter(DateTimeFormatter.java:660)\nat org.elasticsearch.common.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:598)\nat org.elasticsearch.search.aggregations.support.numeric.ValueFormatter$DateTime.format(ValueFormatter.java:117)\nat org.elasticsearch.search.aggregations.bucket.histogram.InternalHistogram.toXContent(InternalHistogram.java:471)\nat org.elasticsearch.search.aggregations.InternalAggregations.toXContentInternal(InternalAggregations.java:184)\nat org.elasticsearch.search.aggregations.InternalAggregations.toXContent(InternalAggregations.java:175)\nat org.elasticsearch.search.internal.InternalSearchResponse.toXContent(InternalSearchResponse.java:95)\nat org.elasticsearch.action.search.SearchResponse.toXContent(SearchResponse.java:217)\nat org.elasticsearch.rest.action.search.RestSearchAction$1.onResponse(RestSearchAction.java:104)\nat org.elasticsearch.rest.action.search.RestSearchAction$1.onResponse(RestSearchAction.java:98)\nat org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.innerFinishHim(TransportSearchQueryThenFetchAction.java:198)\nat org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.finishHim(TransportSearchQueryThenFetchAction.java:180)\nat org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$3.onResult(TransportSearchQueryThenFetchAction.java:156)\nat org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$3.onResult(TransportSearchQueryThenFetchAction.java:150)\nat org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteFetch(SearchServiceTransportAction.java:407)\nat org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.executeFetch(TransportSearchQueryThenFetchAction.java:150)\nat org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction$2.run(TransportSearchQueryThenFetchAction.java:134)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\nat java.lang.Thread.run(Thread.java:722)\n```\n",
"created_at": "2014-05-20T09:59:45Z"
},
{
"body": "Documentation states that \n\n> The first format will also act as the one that converts back from milliseconds to a string representation\n\nLooks like this hasn't been set for the date_histogram aggregation\n",
"created_at": "2014-05-21T12:25:17Z"
},
{
"body": "Thanks!\n",
"created_at": "2014-05-22T09:36:39Z"
}
],
"number": 6239,
"title": "Aggregations: date_histogram aggregation breaks on date fields with multiple formats"
} | {
"body": "When multiple date formats are specified using the || syntax in the field mappings the date_histogram aggregation breaks. This is because we are getting a parser rather than a printer from the date formatter for the object we use to convert the DateTime values back into Strings. Simple fix to get the printer form the date format and test to back it up\n\nCloses #6239\n",
"number": 6266,
"review_comments": [
{
"body": "why doesn't break this in the single format case?\n",
"created_at": "2014-05-21T16:58:57Z"
},
{
"body": "Because this is code is in an if statement which checks for \"||\" in the format. The single format cases use a standard formatter which work for both printing and parsing. This code creates a formatter which uses all of the formats to parse a date but only the first format for printing a date\n",
"created_at": "2014-05-21T17:04:59Z"
},
{
"body": "@colings86 Is it correct to use the printer to define how dates should be parsed, I would rather expect it to be the role of parsers? I think we need in this iteration to collect _parsers_ for all inputs formats as well as the _printer_ of the first one? (while master currently uses the first parser as a printer, and your patch collect all printers to know how to parse)\n",
"created_at": "2014-05-22T01:08:57Z"
},
{
"body": "@jpountz Yes you a probably right, I'll change it to store the parsers for all format and just the first printer\n",
"created_at": "2014-05-22T07:53:28Z"
}
],
"title": "Fixed conversion of date field values when using multiple date formats"
} | {
"commits": [
{
"message": "Aggregations: Fixed conversion of date field values when using multiple date formats\n\nWhen multiple date formats are specified using the || syntax in the field mappings the date_histogram aggregation breaks. This is because we are getting a parser rather than a printer from the date formatter for the object we use to convert the DateTime values back into Strings. Simple fix to get the printer from the date format and test to back it up\n\nCloses #6239"
}
],
"files": [
{
"diff": "@@ -19,15 +19,15 @@\n \n package org.elasticsearch.common.joda;\n \n-import java.util.Locale;\n-\n import org.elasticsearch.common.Strings;\n import org.joda.time.*;\n import org.joda.time.field.DividedDateTimeField;\n import org.joda.time.field.OffsetDateTimeField;\n import org.joda.time.field.ScaledDurationField;\n import org.joda.time.format.*;\n \n+import java.util.Locale;\n+\n /**\n *\n */\n@@ -142,11 +142,12 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) {\n } else {\n DateTimeFormatter dateTimeFormatter = null;\n for (int i = 0; i < formats.length; i++) {\n- DateTimeFormatter currentFormatter = forPattern(formats[i], locale).parser();\n+ FormatDateTimeFormatter currentFormatter = forPattern(formats[i], locale);\n+ DateTimeFormatter currentParser = currentFormatter.parser();\n if (dateTimeFormatter == null) {\n- dateTimeFormatter = currentFormatter;\n+ dateTimeFormatter = currentFormatter.printer();\n }\n- parsers[i] = currentFormatter.getParser();\n+ parsers[i] = currentParser.getParser();\n }\n \n DateTimeFormatterBuilder builder = new DateTimeFormatterBuilder().append(dateTimeFormatter.withZone(DateTimeZone.UTC).getPrinter(), parsers);",
"filename": "src/main/java/org/elasticsearch/common/joda/Joda.java",
"status": "modified"
},
{
"diff": "@@ -1200,4 +1200,33 @@ public void singleValueField_WithExtendedBounds() throws Exception {\n key = key.plusDays(interval);\n }\n }\n+\n+ @Test\n+ public void singleValue_WithMultipleDateFormatsFromMapping() throws Exception {\n+ \n+ String mappingJson = jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\").startObject(\"date\").field(\"type\", \"date\").field(\"format\", \"dateOptionalTime||dd-MM-yyyy\").endObject().endObject().endObject().endObject().string();\n+ prepareCreate(\"idx2\").addMapping(\"type\", mappingJson).execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", \"10-03-2014\").endObject());\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .interval(DateHistogram.Interval.DAY))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(1));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-10T00:00:00.000Z\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(5l));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
}
]
} |
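To make the printer/parser split in the row above concrete, here is a minimal, standalone Joda-Time sketch of the "one printer, many parsers" pattern the fix relies on. It uses plain `DateTimeFormat.forPattern` and made-up pattern strings rather than Elasticsearch's `Joda.forPattern` helper or its named formats, so it is an illustration of the idea, not the actual Elasticsearch code.

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
import org.joda.time.format.DateTimeFormatterBuilder;
import org.joda.time.format.DateTimeParser;

public class MultiFormatDateSketch {
    public static void main(String[] args) {
        // Example stand-ins for a "format1||format2" mapping value.
        String[] formats = {"yyyy-MM-dd'T'HH:mm:ss", "dd-MM-yyyy"};

        DateTimeParser[] parsers = new DateTimeParser[formats.length];
        DateTimeFormatter first = null;
        for (int i = 0; i < formats.length; i++) {
            DateTimeFormatter current = DateTimeFormat.forPattern(formats[i]);
            if (first == null) {
                first = current; // only the first format contributes the printer
            }
            parsers[i] = current.getParser();
        }

        // One printer (from the first format), many parsers (one per format).
        DateTimeFormatter combined = new DateTimeFormatterBuilder()
                .append(first.withZone(DateTimeZone.UTC).getPrinter(), parsers)
                .toFormatter()
                .withZone(DateTimeZone.UTC);

        DateTime parsed = combined.parseDateTime("10-03-2014"); // matched by the second format
        System.out.println(combined.print(parsed));             // printed with the first format, e.g. 2014-03-10T00:00:00
    }
}
```

Parsing falls through to whichever parser matches the input, while printing always goes through the first format, which is what the documentation quoted in the issue promises.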
{
"body": "I randomized the fielddata settings today and run into these exceptions:\n\n```\nInternalGlobalOrdinalsIndexFieldData cannot be cast to org.elasticsearch.index.fielddata.AtomicFieldData\n 1> at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$FieldDataWeigher.weigh(IndicesFieldDataCache.java:102)\n 1> at com.google.common.cache.LocalCache$Segment.setValue(LocalCache.java:2160)\n 1> at com.google.common.cache.LocalCache$Segment.storeLoadedValue(LocalCache.java:3142)\n 1> at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2351)\n 1> at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2318)\n 1> at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)\n 1> at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)\n 1> at com.google.common.cache.LocalCache.get(LocalCache.java:3934)\n 1> at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4736)\n 1> at org.elasticsearch.indices.fielddata.cache.IndicesFieldDataCache$IndexFieldCache.load(IndicesFieldDataCache.java:163)\n 1> at org.elasticsearch.index.fielddata.plain.AbstractBytesIndexFieldData.loadGlobal(AbstractBytesIndexFieldData.java:75)\n```\n\nthis causes the Circ. Breaker to not be reset to 0 etc. \n",
"comments": [],
"number": 6260,
"title": "FieldData: Global ordinals cause ClassCastExceptions if used with a bounded fielddata cache"
} | {
"body": "The `FieldDataWeighter` allowed to use a concrete subclass of the caches\ngeneric type to be used that causes ClassCastException and also trips the\nCirciutBreaker to not be decremented appropriately.\n\nThis was tripped by settings randomization also part of this commit.\n\nCloses #6260\n",
"number": 6262,
"review_comments": [],
"title": "Fix FieldDataWeighter generics to accept RamUsage instead of AtomicFieldData"
} | {
"commits": [
{
"message": "Fix FieldDataWeighter generics to accept RamUsage instead of AtomicFieldData\n\nThe `FieldDataWeighter` allowed to use a concrete subclass of the caches\ngeneric type to be used that causes ClassCastException and also trips the\nCirciutBreaker to not be decremented appropriately.\n\nThis was tripped by settings randomization also part of this commit.\n\nCloses #6260"
}
],
"files": [
{
"diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.SegmentReader;\n+import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.lucene.SegmentReaderUtils;\n import org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsIndexFieldData;\n import org.elasticsearch.index.mapper.FieldMapper;\n@@ -76,9 +77,11 @@ static abstract class FieldBased implements IndexFieldDataCache, SegmentReader.C\n private final FieldDataType fieldDataType;\n private final Cache<Key, RamUsage> cache;\n private final IndicesFieldDataCacheListener indicesFieldDataCacheListener;\n+ private final ESLogger logger;\n \n- protected FieldBased(IndexService indexService, FieldMapper.Names fieldNames, FieldDataType fieldDataType, CacheBuilder cache, IndicesFieldDataCacheListener indicesFieldDataCacheListener) {\n+ protected FieldBased(ESLogger logger, IndexService indexService, FieldMapper.Names fieldNames, FieldDataType fieldDataType, CacheBuilder cache, IndicesFieldDataCacheListener indicesFieldDataCacheListener) {\n assert indexService != null;\n+ this.logger = logger;\n this.indexService = indexService;\n this.fieldNames = fieldNames;\n this.fieldDataType = fieldDataType;\n@@ -100,7 +103,11 @@ public void onRemoval(RemovalNotification<Key, RamUsage> notification) {\n sizeInBytes = value.getMemorySizeInBytes();\n }\n for (Listener listener : key.listeners) {\n- listener.onUnload(fieldNames, fieldDataType, notification.wasEvicted(), sizeInBytes);\n+ try {\n+ listener.onUnload(fieldNames, fieldDataType, notification.wasEvicted(), sizeInBytes);\n+ } catch (Throwable e) {\n+ logger.error(\"Failed to call listener on field data cache unloading\", e);\n+ }\n }\n }\n \n@@ -112,8 +119,7 @@ public <FD extends AtomicFieldData, IFD extends IndexFieldData<FD>> FD load(fina\n @Override\n public AtomicFieldData call() throws Exception {\n SegmentReaderUtils.registerCoreListener(context.reader(), FieldBased.this);\n- AtomicFieldData fieldData = indexFieldData.loadDirect(context);\n- key.sizeInBytes = fieldData.getMemorySizeInBytes();\n+\n key.listeners.add(indicesFieldDataCacheListener);\n final ShardId shardId = ShardUtils.extractShardId(context.reader());\n if (shardId != null) {\n@@ -122,8 +128,15 @@ public AtomicFieldData call() throws Exception {\n key.listeners.add(shard.fieldData());\n }\n }\n+ final AtomicFieldData fieldData = indexFieldData.loadDirect(context);\n+ key.sizeInBytes = fieldData.getMemorySizeInBytes();\n for (Listener listener : key.listeners) {\n- listener.onLoad(fieldNames, fieldDataType, fieldData);\n+ try {\n+ listener.onLoad(fieldNames, fieldDataType, fieldData);\n+ } catch (Throwable e) {\n+ // load anyway since listeners should not throw exceptions\n+ logger.error(\"Failed to call listener on atomic field data loading\", e);\n+ }\n }\n return fieldData;\n }\n@@ -137,8 +150,7 @@ public <IFD extends IndexFieldData.WithOrdinals<?>> IFD load(final IndexReader i\n @Override\n public GlobalOrdinalsIndexFieldData call() throws Exception {\n indexReader.addReaderClosedListener(FieldBased.this);\n- GlobalOrdinalsIndexFieldData ifd = (GlobalOrdinalsIndexFieldData) indexFieldData.localGlobalDirect(indexReader);\n- key.sizeInBytes = ifd.getMemorySizeInBytes();\n+\n key.listeners.add(indicesFieldDataCacheListener);\n final ShardId shardId = ShardUtils.extractShardId(indexReader);\n if (shardId != null) {\n@@ -147,8 +159,15 @@ public GlobalOrdinalsIndexFieldData call() throws Exception {\n 
key.listeners.add(shard.fieldData());\n }\n }\n+ GlobalOrdinalsIndexFieldData ifd = (GlobalOrdinalsIndexFieldData) indexFieldData.localGlobalDirect(indexReader);\n+ key.sizeInBytes = ifd.getMemorySizeInBytes();\n for (Listener listener : key.listeners) {\n- listener.onLoad(fieldNames, fieldDataType, ifd);\n+ try {\n+ listener.onLoad(fieldNames, fieldDataType, ifd);\n+ } catch (Throwable e) {\n+ // load anyway since listeners should not throw exceptions\n+ logger.error(\"Failed to call listener on global ordinals loading\", e);\n+ }\n }\n \n return ifd;\n@@ -207,15 +226,15 @@ public int hashCode() {\n \n static class Resident extends FieldBased {\n \n- public Resident(IndexService indexService, FieldMapper.Names fieldNames, FieldDataType fieldDataType, IndicesFieldDataCacheListener indicesFieldDataCacheListener) {\n- super(indexService, fieldNames, fieldDataType, CacheBuilder.newBuilder(), indicesFieldDataCacheListener);\n+ public Resident(ESLogger logger, IndexService indexService, FieldMapper.Names fieldNames, FieldDataType fieldDataType, IndicesFieldDataCacheListener indicesFieldDataCacheListener) {\n+ super(logger, indexService, fieldNames, fieldDataType, CacheBuilder.newBuilder(), indicesFieldDataCacheListener);\n }\n }\n \n static class Soft extends FieldBased {\n \n- public Soft(IndexService indexService, FieldMapper.Names fieldNames, FieldDataType fieldDataType, IndicesFieldDataCacheListener indicesFieldDataCacheListener) {\n- super(indexService, fieldNames, fieldDataType, CacheBuilder.newBuilder().softValues(), indicesFieldDataCacheListener);\n+ public Soft(ESLogger logger, IndexService indexService, FieldMapper.Names fieldNames, FieldDataType fieldDataType, IndicesFieldDataCacheListener indicesFieldDataCacheListener) {\n+ super(logger, indexService, fieldNames, fieldDataType, CacheBuilder.newBuilder().softValues(), indicesFieldDataCacheListener);\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataCache.java",
"status": "modified"
},
{
"diff": "@@ -246,9 +246,9 @@ public <IFD extends IndexFieldData<?>> IFD getForField(FieldMapper<?> mapper) {\n // this means changing the node level settings is simple, just set the bounds there\n String cacheType = type.getSettings().get(\"cache\", indexSettings.get(\"index.fielddata.cache\", \"node\"));\n if (\"resident\".equals(cacheType)) {\n- cache = new IndexFieldDataCache.Resident(indexService, fieldNames, type, indicesFieldDataCacheListener);\n+ cache = new IndexFieldDataCache.Resident(logger, indexService, fieldNames, type, indicesFieldDataCacheListener);\n } else if (\"soft\".equals(cacheType)) {\n- cache = new IndexFieldDataCache.Soft(indexService, fieldNames, type, indicesFieldDataCacheListener);\n+ cache = new IndexFieldDataCache.Soft(logger, indexService, fieldNames, type, indicesFieldDataCacheListener);\n } else if (\"node\".equals(cacheType)) {\n cache = indicesFieldDataCache.buildIndexFieldDataCache(indexService, index, fieldNames, type);\n } else {",
"filename": "src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataService.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.apache.lucene.index.SegmentReader;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.lucene.SegmentReaderUtils;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n@@ -76,7 +77,7 @@ public void close() {\n }\n \n public IndexFieldDataCache buildIndexFieldDataCache(IndexService indexService, Index index, FieldMapper.Names fieldNames, FieldDataType fieldDataType) {\n- return new IndexFieldCache(cache, indicesFieldDataCacheListener, indexService, index, fieldNames, fieldDataType);\n+ return new IndexFieldCache(logger, cache, indicesFieldDataCacheListener, indexService, index, fieldNames, fieldDataType);\n }\n \n public Cache<Key, RamUsage> getCache() {\n@@ -95,15 +96,20 @@ public void onRemoval(RemovalNotification<Key, RamUsage> notification) {\n sizeInBytes = value.getMemorySizeInBytes();\n }\n for (IndexFieldDataCache.Listener listener : key.listeners) {\n- listener.onUnload(indexCache.fieldNames, indexCache.fieldDataType, notification.wasEvicted(), sizeInBytes);\n+ try {\n+ listener.onUnload(indexCache.fieldNames, indexCache.fieldDataType, notification.wasEvicted(), sizeInBytes);\n+ } catch (Throwable e) {\n+ // load anyway since listeners should not throw exceptions\n+ logger.error(\"Failed to call listener on field data cache unloading\", e);\n+ }\n }\n }\n \n- public static class FieldDataWeigher implements Weigher<Key, AtomicFieldData> {\n+ public static class FieldDataWeigher implements Weigher<Key, RamUsage> {\n \n @Override\n- public int weigh(Key key, AtomicFieldData fieldData) {\n- int weight = (int) Math.min(fieldData.getMemorySizeInBytes(), Integer.MAX_VALUE);\n+ public int weigh(Key key, RamUsage ramUsage) {\n+ int weight = (int) Math.min(ramUsage.getMemorySizeInBytes(), Integer.MAX_VALUE);\n return weight == 0 ? 
1 : weight;\n }\n }\n@@ -112,14 +118,16 @@ public int weigh(Key key, AtomicFieldData fieldData) {\n * A specific cache instance for the relevant parameters of it (index, fieldNames, fieldType).\n */\n static class IndexFieldCache implements IndexFieldDataCache, SegmentReader.CoreClosedListener, IndexReader.ReaderClosedListener {\n-\n+ private final ESLogger logger;\n private final IndexService indexService;\n final Index index;\n final FieldMapper.Names fieldNames;\n final FieldDataType fieldDataType;\n private final Cache<Key, RamUsage> cache;\n+ private final IndicesFieldDataCacheListener indicesFieldDataCacheListener;\n \n- IndexFieldCache(final Cache<Key, RamUsage> cache, IndicesFieldDataCacheListener indicesFieldDataCacheListener, IndexService indexService, Index index, FieldMapper.Names fieldNames, FieldDataType fieldDataType) {\n+ IndexFieldCache(ESLogger logger,final Cache<Key, RamUsage> cache, IndicesFieldDataCacheListener indicesFieldDataCacheListener, IndexService indexService, Index index, FieldMapper.Names fieldNames, FieldDataType fieldDataType) {\n+ this.logger = logger;\n this.indexService = indexService;\n this.index = index;\n this.fieldNames = fieldNames;\n@@ -129,8 +137,6 @@ static class IndexFieldCache implements IndexFieldDataCache, SegmentReader.CoreC\n assert indexService != null;\n }\n \n- private final IndicesFieldDataCacheListener indicesFieldDataCacheListener;\n-\n @Override\n public <FD extends AtomicFieldData, IFD extends IndexFieldData<FD>> FD load(final AtomicReaderContext context, final IFD indexFieldData) throws Exception {\n final Key key = new Key(this, context.reader().getCoreCacheKey());\n@@ -139,8 +145,7 @@ public <FD extends AtomicFieldData, IFD extends IndexFieldData<FD>> FD load(fina\n @Override\n public AtomicFieldData call() throws Exception {\n SegmentReaderUtils.registerCoreListener(context.reader(), IndexFieldCache.this);\n- AtomicFieldData fieldData = indexFieldData.loadDirect(context);\n- key.sizeInBytes = fieldData.getMemorySizeInBytes();\n+\n key.listeners.add(indicesFieldDataCacheListener);\n final ShardId shardId = ShardUtils.extractShardId(context.reader());\n if (shardId != null) {\n@@ -149,22 +154,29 @@ public AtomicFieldData call() throws Exception {\n key.listeners.add(shard.fieldData());\n }\n }\n+ final AtomicFieldData fieldData = indexFieldData.loadDirect(context);\n for (Listener listener : key.listeners) {\n- listener.onLoad(fieldNames, fieldDataType, fieldData);\n+ try {\n+ listener.onLoad(fieldNames, fieldDataType, fieldData);\n+ } catch (Throwable e) {\n+ // load anyway since listeners should not throw exceptions\n+ logger.error(\"Failed to call listener on atomic field data loading\", e);\n+ }\n }\n+ key.sizeInBytes = fieldData.getMemorySizeInBytes();\n return fieldData;\n }\n });\n }\n \n public <IFD extends IndexFieldData.WithOrdinals<?>> IFD load(final IndexReader indexReader, final IFD indexFieldData) throws Exception {\n final Key key = new Key(this, indexReader.getCoreCacheKey());\n+\n //noinspection unchecked\n return (IFD) cache.get(key, new Callable<RamUsage>() {\n @Override\n public RamUsage call() throws Exception {\n indexReader.addReaderClosedListener(IndexFieldCache.this);\n- GlobalOrdinalsIndexFieldData ifd = (GlobalOrdinalsIndexFieldData) indexFieldData.localGlobalDirect(indexReader);\n key.listeners.add(indicesFieldDataCacheListener);\n final ShardId shardId = ShardUtils.extractShardId(indexReader);\n if (shardId != null) {\n@@ -173,8 +185,14 @@ public RamUsage call() throws Exception {\n 
key.listeners.add(shard.fieldData());\n }\n }\n+ final GlobalOrdinalsIndexFieldData ifd = (GlobalOrdinalsIndexFieldData) indexFieldData.localGlobalDirect(indexReader);\n for (Listener listener : key.listeners) {\n- listener.onLoad(fieldNames, fieldDataType, ifd);\n+ try {\n+ listener.onLoad(fieldNames, fieldDataType, ifd);\n+ } catch (Throwable e) {\n+ // load anyway since listeners should not throw exceptions\n+ logger.error(\"Failed to call listener on global ordinals loading\", e);\n+ }\n }\n return ifd;\n }",
"filename": "src/main/java/org/elasticsearch/indices/fielddata/cache/IndicesFieldDataCache.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,6 @@\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n-import org.elasticsearch.test.ElasticsearchIntegrationTest;\n \n /**\n */",
"filename": "src/test/java/org/elasticsearch/search/scroll/SlowSearchScrollTests.java",
"status": "modified"
},
{
"diff": "@@ -52,6 +52,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.transport.InetSocketTransportAddress;\n import org.elasticsearch.common.transport.TransportAddress;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.BigArraysModule;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n@@ -310,6 +311,15 @@ private static Settings getRandomNodeSettings(long seed) {\n } else {\n builder.put(EsExecutors.PROCESSORS, AbstractRandomizedTest.TESTS_PROCESSORS);\n }\n+\n+ if (random.nextBoolean()) {\n+ if (random.nextBoolean()) {\n+ builder.put(\"indices.fielddata.cache.size\", 1 + random.nextInt(1000), ByteSizeUnit.MB);\n+ }\n+ if (random.nextBoolean()) {\n+ builder.put(\"indices.fielddata.cache.expire\", TimeValue.timeValueMillis(1 + random.nextInt(10000)));\n+ }\n+ }\n return builder.build();\n }\n ",
"filename": "src/test/java/org/elasticsearch/test/TestCluster.java",
"status": "modified"
}
]
} |
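The fix in the row above amounts to parameterizing the cache weigher with the common supertype that every cached value implements, instead of one concrete subclass. Below is a minimal, standalone Guava sketch of that idea; `RamUsage`, `AtomicValue`, and `GlobalOrdinalsValue` are hypothetical stand-ins, not the real Elasticsearch types.

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.Weigher;

public class SupertypeWeigherSketch {

    /** Common supertype stored in the cache (stand-in for Elasticsearch's RamUsage). */
    interface RamUsage {
        long getMemorySizeInBytes();
    }

    /** Stand-in for per-segment field data. */
    static class AtomicValue implements RamUsage {
        @Override
        public long getMemorySizeInBytes() { return 1024; }
    }

    /** Stand-in for global-ordinals field data, which is not an AtomicValue. */
    static class GlobalOrdinalsValue implements RamUsage {
        @Override
        public long getMemorySizeInBytes() { return 4096; }
    }

    public static void main(String[] args) {
        // Weigh by the supertype so every value the cache may hold is accepted;
        // a Weigher<String, AtomicValue> would hit a ClassCastException the moment
        // a GlobalOrdinalsValue entry is weighed.
        Weigher<String, RamUsage> weigher = new Weigher<String, RamUsage>() {
            @Override
            public int weigh(String key, RamUsage value) {
                int weight = (int) Math.min(value.getMemorySizeInBytes(), Integer.MAX_VALUE);
                return weight == 0 ? 1 : weight;
            }
        };

        Cache<String, RamUsage> cache = CacheBuilder.newBuilder()
                .maximumWeight(10_000)
                .weigher(weigher)
                .build();

        cache.put("segment-1", new AtomicValue());
        cache.put("global-ordinals", new GlobalOrdinalsValue());
        System.out.println(cache.size()); // 2 entries, both weighed without casting
    }
}
```

Weighing by the supertype keeps the cache bookkeeping (and anything driven by its eviction callbacks, such as a circuit breaker) working regardless of which concrete value type a loader returns.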
{
"body": "If I run a search query of the following kind:\n\n```\ncurl -XGET 'http://localhost:9200/index/_search?pretty' -d '{ \"query\" : { \"match_all\" : {} }, \"aggs\" : { \"agg1\" : { \"terms\" : { \"field\" : \"stringField\" } }, \"agg1\" : { \"terms\" : { \"field\" : \"longField\" } } } }'\n```\n\nIt causes a ClassCastException in Elasticsearch:\n\n```\n{\n \"error\" : \"ClassCastException[org.elasticsearch.search.aggregations.bucket.terms.LongTerms$Bucket cannot be cast to org.elasticsearch.search.aggregations.bucket.terms.StringTerms$Bucket]\",\n \"status\" : 500\n}\n```\n\nIt looks like it is actually combining all aggregations with the same name into one combined aggregation. The error occurs when the fields in the two aggregations are different types but will also give strange results when they are both the same type. I have so far only tried this with the Terms aggregation but I suspect it will be an issue for other types too.\n\nThere should probably be some validation of sibling aggregations to ensure they have unique names and throw back a parse error to the client if there are multiple sibling aggregations with the same name.\n\nMore than this, since aggregations can be referenced from other aggregations by name, should aggregations names be unique across all aggregations and not just within their siblings?\n",
"comments": [
{
"body": "> There should probably be some validation of sibling aggregations to ensure they have unique names and throw back a parse error to the client if there are multiple sibling aggregations with the same name.\n\n+1 I'll work on that\n",
"created_at": "2014-05-21T11:09:44Z"
},
{
"body": "> More than this, since aggregations can be referenced from other aggregations by name, should aggregations names be unique across all aggregations and not just within their siblings?\n\nWhen referring to another aggregation (eg. for sorting the buckets of a terms aggregation), you need to specify the relative path from the current aggregation to the one that is used for sorting, so there should be no ambiguity. Or were you thinking about something else?\n",
"created_at": "2014-05-21T11:12:52Z"
},
{
"body": "No, that is what I was thinking of, so its just a problem for sibling aggregations then\n",
"created_at": "2014-05-21T11:23:06Z"
}
],
"number": 6255,
"title": "Aggregations: ClassCastException when sibling aggregations have the same name"
} | {
"body": "Close #6255\n",
"number": 6258,
"review_comments": [],
"title": "Fail queries that have two aggregations with the same name."
} | {
"commits": [
{
"message": "Fail queries that have two aggregations with the same name.\n\nClose #6255"
}
],
"files": [
{
"diff": "@@ -19,14 +19,17 @@\n package org.elasticsearch.search.aggregations;\n \n import org.apache.lucene.index.AtomicReaderContext;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.util.ObjectArray;\n import org.elasticsearch.search.aggregations.Aggregator.BucketAggregationMode;\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n \n import java.io.IOException;\n import java.util.ArrayList;\n+import java.util.HashSet;\n import java.util.List;\n+import java.util.Set;\n \n /**\n *\n@@ -183,9 +186,13 @@ public Aggregator[] createTopLevelAggregators(AggregationContext ctx) {\n \n public static class Builder {\n \n- private List<AggregatorFactory> factories = new ArrayList<>();\n+ private final Set<String> names = new HashSet<>();\n+ private final List<AggregatorFactory> factories = new ArrayList<>();\n \n public Builder add(AggregatorFactory factory) {\n+ if (!names.add(factory.name)) {\n+ throw new ElasticsearchIllegalArgumentException(\"Two sibling aggregations cannot have the same name: [\" + factory.name + \"]\");\n+ }\n factories.add(factory);\n return this;\n }",
"filename": "src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.aggregations;\n \n+import com.carrotsearch.randomizedtesting.generators.RandomStrings;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n@@ -112,6 +113,26 @@ public void testInvalidAggregationName() throws Exception {\n .endObject()).execute().actionGet();\n }\n \n+ @Test(expected=SearchPhaseExecutionException.class)\n+ public void testSameAggregationName() throws Exception {\n+ createIndex(\"idx\");\n+ ensureGreen();\n+ final String name = RandomStrings.randomAsciiOfLength(getRandom(), 10);\n+ client().prepareSearch(\"idx\").setAggregations(JsonXContent.contentBuilder()\n+ .startObject()\n+ .startObject(name)\n+ .startObject(\"terms\")\n+ .field(\"field\", \"a\")\n+ .endObject()\n+ .endObject()\n+ .startObject(name)\n+ .startObject(\"terms\")\n+ .field(\"field\", \"b\")\n+ .endObject()\n+ .endObject()\n+ .endObject()).execute().actionGet();\n+ }\n+\n @Test(expected=SearchPhaseExecutionException.class)\n public void testMissingName() throws Exception {\n createIndex(\"idx\");",
"filename": "src/test/java/org/elasticsearch/search/aggregations/ParsingTests.java",
"status": "modified"
}
]
} |
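The validation added above boils down to collecting the sibling names in a set and failing on the first duplicate. A small self-contained sketch of that pattern follows, using plain strings and `IllegalArgumentException` instead of Elasticsearch's `AggregatorFactory` and exception types.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SiblingNameValidationSketch {

    private final Set<String> names = new HashSet<>();
    private final List<String> siblings = new ArrayList<>();

    /** Registers a named sibling aggregation, rejecting duplicate names. */
    public SiblingNameValidationSketch add(String name) {
        // Set.add returns false when the name is already present,
        // which is exactly the duplicate-sibling case to reject.
        if (!names.add(name)) {
            throw new IllegalArgumentException(
                    "Two sibling aggregations cannot have the same name: [" + name + "]");
        }
        siblings.add(name);
        return this;
    }

    public static void main(String[] args) {
        SiblingNameValidationSketch builder = new SiblingNameValidationSketch();
        builder.add("agg1");
        builder.add("agg1"); // throws IllegalArgumentException
    }
}
```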
{
"body": "```\nDELETE /_all\n\nPUT /t\n{\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"my_english\": {\n \"type\": \"english\",\n \"stem_exclusion\": [\"organization\"]\n }\n }\n }\n }\n}\n\nGET /t/_analyze?analyzer=my_english&text=organization\n```\n\n-> returns term `organ`\n",
"comments": [],
"number": 6237,
"title": "Analysis: `stem_exclusion` as array not working in language analyzers"
} | {
"body": "This was causing the `stem_exclusion` list, when passed as an array, to be incorrect.\n\nCloses #6237\n",
"number": 6238,
"review_comments": [],
"title": "CharArraySet doesn't know how to lookup the original string in an ImmutableList."
} | {
"commits": [
{
"message": "CharArraySet doesn't know how to lookup the original string\nin an ImmutableList.\n\nCloses #6237"
}
],
"files": [
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.index.analysis;\n \n import com.google.common.base.Charsets;\n-import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n import org.apache.lucene.analysis.Analyzer;\n import org.apache.lucene.analysis.NumericTokenStream;\n@@ -107,10 +106,10 @@ public static CharArraySet parseStemExclusion(Settings settings, CharArraySet de\n return new CharArraySet(version, Strings.commaDelimitedListToSet(value), false);\n }\n }\n- String[] stopWords = settings.getAsArray(\"stem_exclusion\", null);\n- if (stopWords != null) {\n+ String[] stemExclusion = settings.getAsArray(\"stem_exclusion\", null);\n+ if (stemExclusion != null) {\n // LUCENE 4 UPGRADE: Should be settings.getAsBoolean(\"stem_exclusion_case\", false)?\n- return new CharArraySet(version, ImmutableList.of(stopWords), false);\n+ return new CharArraySet(version, Arrays.asList(stemExclusion), false);\n } else {\n return defaultStemExclusion;\n }",
"filename": "src/main/java/org/elasticsearch/index/analysis/Analysis.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,50 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.analysis;\n+\n+import org.apache.lucene.analysis.util.CharArraySet;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.hamcrest.Matchers.is;\n+\n+public class AnalysisTests extends ElasticsearchTestCase {\n+ @Test\n+ public void testParseStemExclusion() {\n+\n+ /* Comma separated list */\n+ Settings settings = settingsBuilder().put(\"stem_exclusion\", \"foo,bar\").build();\n+ CharArraySet set = Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET, Version.CURRENT.luceneVersion);\n+ assertThat(set.contains(\"foo\"), is(true));\n+ assertThat(set.contains(\"bar\"), is(true));\n+ assertThat(set.contains(\"baz\"), is(false));\n+\n+ /* Array */\n+ settings = settingsBuilder().putArray(\"stem_exclusion\", \"foo\",\"bar\").build();\n+ set = Analysis.parseStemExclusion(settings, CharArraySet.EMPTY_SET, Version.CURRENT.luceneVersion);\n+ assertThat(set.contains(\"foo\"), is(true));\n+ assertThat(set.contains(\"bar\"), is(true));\n+ assertThat(set.contains(\"baz\"), is(false));\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/index/analysis/AnalysisTests.java",
"status": "added"
}
]
} |
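The root cause in the row above is a generics/overload pitfall: passing a `String[]` to `ImmutableList.of(...)` selects the single-element overload, so the list contains the array itself, whereas `Arrays.asList(...)` expands it into its elements. A minimal standalone demonstration, with arbitrary example words:

```java
import com.google.common.collect.ImmutableList;

import java.util.Arrays;
import java.util.List;

public class VarargsPitfallSketch {
    public static void main(String[] args) {
        String[] words = {"organization", "foo"};

        // ImmutableList.of(E) binds E to String[], so the list has ONE element:
        // the array itself, not its contents.
        List<String[]> wrong = ImmutableList.of(words);
        System.out.println(wrong.size()); // 1

        // Arrays.asList (or ImmutableList.copyOf) expands the array into its elements,
        // which is what a word-set consumer like CharArraySet actually needs.
        List<String> right = Arrays.asList(words);
        System.out.println(right.size());                   // 2
        System.out.println(right.contains("organization")); // true
    }
}
```

This is why the `stem_exclusion` words were never found individually when the setting was supplied as an array.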
{
"body": "Dynamic date detection should only detect 4 digit years, but it is misinterpreting the `5` in `5/12/14` as the year `0005`:\n\n```\nDELETE /t\n\nPUT /t/t/1\n{\n \"d\": \"5/12/14\"\n}\n\nGET /t/_search\n{\n \"aggs\": {\n \"foo\": {\n \"date_histogram\": {\n \"field\": \"d\",\n \"interval\": \"day\"\n }\n }\n }\n}\n```\n\nReturns: \n\n```\n\"aggregations\": {\n \"foo\": {\n \"buckets\": [\n {\n \"key_as_string\": \"0005/12/14 00:00:00\",\n \"key\": -61979385600000,\n \"doc_count\": 1\n }\n ]\n }\n}\n```\n",
"comments": [
{
"body": "looked at the iso 8601 spec, which seems to define a year has to have four digits... however the predefined joda time formatters do not seem to be strict here\n",
"created_at": "2014-05-15T15:57:39Z"
},
{
"body": "maybe we should file a bug in JODA or maybe it's already fixed and we should upgrade?\n",
"created_at": "2014-05-15T15:58:30Z"
},
{
"body": "yeah, there is a `strict` option in joda, not sure if we can make use of that, investigating next\n",
"created_at": "2014-05-15T15:59:51Z"
},
{
"body": "By copy pasting a fair share of code around (from the `ISODateTimeFormat` class), we can definately make this more strict. Asked on the joda ML if this the intended behaviour and if there are smarter ways to make parsing strict and will then decide on the implementation\n",
"created_at": "2014-05-16T08:53:22Z"
},
{
"body": "Given that this is somewhat of a breaking change (it's a bug fix that will likely break some ingestion) I'm going to push to 2.0. \n\nFor now people that encounter the issue have to define a mapping for the field\n",
"created_at": "2014-05-16T14:28:00Z"
}
],
"number": 6158,
"title": "Dynamic date detection accepting years < 4 digits"
} | {
"body": "If you are using the default date or the named identifiers of dates,\nthe current implementation was allowed to read a year with only one\ndigit. In order to make this more strict, this fixes a year to be at\nleast 4 digits.\n\nCloses #6158\n",
"number": 6227,
"review_comments": [
{
"body": "nit picky - can we move this to use something similar to `getDateTimeFormatter` in TimestampFieldMapper ?\n",
"created_at": "2015-06-08T14:49:04Z"
},
{
"body": "it sucks we can do this, but it helps here :)\n",
"created_at": "2015-06-08T14:51:23Z"
},
{
"body": "can we also check we can parse a proper full formatted date here? I always like to test both \"good\" and \"bas\"\n",
"created_at": "2015-06-08T14:52:13Z"
},
{
"body": "fixed\n",
"created_at": "2015-06-25T11:41:39Z"
},
{
"body": "done\n",
"created_at": "2015-06-25T11:41:44Z"
},
{
"body": "Typo: getStrictStandardDateFormat_t_er\n",
"created_at": "2015-06-29T12:28:41Z"
},
{
"body": "why the randomization on index name?\n",
"created_at": "2015-06-29T12:47:52Z"
},
{
"body": "good catch :)\n",
"created_at": "2015-06-29T12:48:41Z"
},
{
"body": "I think we should make it more explicit than just a comment a the end of the section. We want to encourage people to use this variant when possible. Maybe we should add this explanation to the beginning of the section and have the patterns read `[strict_]year_month` (or something similar)\n",
"created_at": "2015-06-29T12:51:03Z"
},
{
"body": "can we also make sure we reject broken dates on newer indices?\n",
"created_at": "2015-06-29T12:53:48Z"
},
{
"body": "I think this is cleaner to have inline? it is very short and not used anywhere else.\n",
"created_at": "2015-06-29T19:39:45Z"
},
{
"body": "What if the user explicitly sets to the default? I think the correct thing to do is change the initial value in the field type to null, and then add an assert in freeze (override) to make sure it was setup (non null)?\n",
"created_at": "2015-06-29T19:42:08Z"
},
{
"body": "agreed. removed.\n",
"created_at": "2015-06-30T12:18:14Z"
},
{
"body": "I moved this check inside of the version check, if the index has been created before version 2.0 - in that case, it cannot be the default of `strictOptionalDate` and thus we can safely assume to fall back. Does this make sense or did I still overlook something in that scenario?\n",
"created_at": "2015-06-30T12:19:16Z"
},
{
"body": "test added\n",
"created_at": "2015-06-30T12:19:26Z"
},
{
"body": "useless. removed.\n",
"created_at": "2015-06-30T12:19:33Z"
},
{
"body": "updated the docs and added this for each date format, where I noticed that I missed one date format.\n",
"created_at": "2015-06-30T12:20:05Z"
},
{
"body": "also moved the paragraph at the top of the formats section\n",
"created_at": "2015-06-30T12:20:18Z"
},
{
"body": "Doesn't this have the same problem? If the user upgrades to 2.0, and tries to set their older index to use the new defaults, they will be forced back to non strict?\n",
"created_at": "2015-06-30T14:41:28Z"
},
{
"body": "yes, you are right. A bug in the code didnt let my test fail accordingly... will rethink.\nThx for pointing out!\n",
"created_at": "2015-06-30T16:55:43Z"
},
{
"body": "added a new commit with a test fixing this. I just check if the the parser actually parsed a new format and only in case if not, the defaults are applied\n",
"created_at": "2015-07-01T12:38:23Z"
},
{
"body": "above -> bellow\n",
"created_at": "2015-07-02T10:12:43Z"
}
],
"title": "More strict parsing of ISO dates"
} | {
"commits": [
{
"message": "Dates: More strict parsing of ISO dates\n\nIf you are using the default date or the named identifiers of dates,\nthe current implementation was allowed to read a year with only one\ndigit. In order to make this more strict, this fixes a year to be at\nleast 4 digits. Same applies for month, day, hour, minute, seconds.\n\nAlso the new default is `strictDateOptionalTime` for indices created\nwith Elasticsearch 2.0 or newer.\n\nIn addition a couple of not exposed date formats have been exposed, as they\nhave been mentioned in the documentation.\n\nCloses #6158"
}
],
"files": [
{
"diff": "@@ -118,6 +118,8 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) {\n formatter = ISODateTimeFormat.ordinalDateTimeNoMillis();\n } else if (\"time\".equals(input)) {\n formatter = ISODateTimeFormat.time();\n+ } else if (\"timeNoMillis\".equals(input) || \"time_no_millis\".equals(input)) {\n+ formatter = ISODateTimeFormat.timeNoMillis();\n } else if (\"tTime\".equals(input) || \"t_time\".equals(input)) {\n formatter = ISODateTimeFormat.tTime();\n } else if (\"tTimeNoMillis\".equals(input) || \"t_time_no_millis\".equals(input)) {\n@@ -126,10 +128,14 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) {\n formatter = ISODateTimeFormat.weekDate();\n } else if (\"weekDateTime\".equals(input) || \"week_date_time\".equals(input)) {\n formatter = ISODateTimeFormat.weekDateTime();\n+ } else if (\"weekDateTimeNoMillis\".equals(input) || \"week_date_time_no_millis\".equals(input)) {\n+ formatter = ISODateTimeFormat.weekDateTimeNoMillis();\n } else if (\"weekyear\".equals(input) || \"week_year\".equals(input)) {\n formatter = ISODateTimeFormat.weekyear();\n- } else if (\"weekyearWeek\".equals(input)) {\n+ } else if (\"weekyearWeek\".equals(input) || \"weekyear_week\".equals(input)) {\n formatter = ISODateTimeFormat.weekyearWeek();\n+ } else if (\"weekyearWeekDay\".equals(input) || \"weekyear_week_day\".equals(input)) {\n+ formatter = ISODateTimeFormat.weekyearWeekDay();\n } else if (\"year\".equals(input)) {\n formatter = ISODateTimeFormat.year();\n } else if (\"yearMonth\".equals(input) || \"year_month\".equals(input)) {\n@@ -140,6 +146,77 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) {\n formatter = new DateTimeFormatterBuilder().append(new EpochTimePrinter(false), new EpochTimeParser(false)).toFormatter();\n } else if (\"epoch_millis\".equals(input)) {\n formatter = new DateTimeFormatterBuilder().append(new EpochTimePrinter(true), new EpochTimeParser(true)).toFormatter();\n+ // strict date formats here, must be at least 4 digits for year and two for months and two for day\n+ } else if (\"strictBasicWeekDate\".equals(input) || \"strict_basic_week_date\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.basicWeekDate();\n+ } else if (\"strictBasicWeekDateTime\".equals(input) || \"strict_basic_week_date_time\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.basicWeekDateTime();\n+ } else if (\"strictBasicWeekDateTimeNoMillis\".equals(input) || \"strict_basic_week_date_time_no_millis\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.basicWeekDateTimeNoMillis();\n+ } else if (\"strictDate\".equals(input) || \"strict_date\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.date();\n+ } else if (\"strictDateHour\".equals(input) || \"strict_date_hour\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.dateHour();\n+ } else if (\"strictDateHourMinute\".equals(input) || \"strict_date_hour_minute\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.dateHourMinute();\n+ } else if (\"strictDateHourMinuteSecond\".equals(input) || \"strict_date_hour_minute_second\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.dateHourMinuteSecond();\n+ } else if (\"strictDateHourMinuteSecondFraction\".equals(input) || \"strict_date_hour_minute_second_fraction\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.dateHourMinuteSecondFraction();\n+ } else if (\"strictDateHourMinuteSecondMillis\".equals(input) || \"strict_date_hour_minute_second_millis\".equals(input)) {\n+ 
formatter = StrictISODateTimeFormat.dateHourMinuteSecondMillis();\n+ } else if (\"strictDateOptionalTime\".equals(input) || \"strict_date_optional_time\".equals(input)) {\n+ // in this case, we have a separate parser and printer since the dataOptionalTimeParser can't print\n+ // this sucks we should use the root local by default and not be dependent on the node\n+ return new FormatDateTimeFormatter(input,\n+ StrictISODateTimeFormat.dateOptionalTimeParser().withZone(DateTimeZone.UTC),\n+ StrictISODateTimeFormat.dateTime().withZone(DateTimeZone.UTC), locale);\n+ } else if (\"strictDateTime\".equals(input) || \"strict_date_time\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.dateTime();\n+ } else if (\"strictDateTimeNoMillis\".equals(input) || \"strict_date_time_no_millis\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.dateTimeNoMillis();\n+ } else if (\"strictHour\".equals(input) || \"strict_hour\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.hour();\n+ } else if (\"strictHourMinute\".equals(input) || \"strict_hour_minute\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.hourMinute();\n+ } else if (\"strictHourMinuteSecond\".equals(input) || \"strict_hour_minute_second\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.hourMinuteSecond();\n+ } else if (\"strictHourMinuteSecondFraction\".equals(input) || \"strict_hour_minute_second_fraction\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.hourMinuteSecondFraction();\n+ } else if (\"strictHourMinuteSecondMillis\".equals(input) || \"strict_hour_minute_second_millis\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.hourMinuteSecondMillis();\n+ } else if (\"strictOrdinalDate\".equals(input) || \"strict_ordinal_date\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.ordinalDate();\n+ } else if (\"strictOrdinalDateTime\".equals(input) || \"strict_ordinal_date_time\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.ordinalDateTime();\n+ } else if (\"strictOrdinalDateTimeNoMillis\".equals(input) || \"strict_ordinal_date_time_no_millis\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.ordinalDateTimeNoMillis();\n+ } else if (\"strictTime\".equals(input) || \"strict_time\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.time();\n+ } else if (\"strictTimeNoMillis\".equals(input) || \"strict_time_no_millis\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.timeNoMillis();\n+ } else if (\"strictTTime\".equals(input) || \"strict_t_time\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.tTime();\n+ } else if (\"strictTTimeNoMillis\".equals(input) || \"strict_t_time_no_millis\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.tTimeNoMillis();\n+ } else if (\"strictWeekDate\".equals(input) || \"strict_week_date\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.weekDate();\n+ } else if (\"strictWeekDateTime\".equals(input) || \"strict_week_date_time\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.weekDateTime();\n+ } else if (\"strictWeekDateTimeNoMillis\".equals(input) || \"strict_week_date_time_no_millis\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.weekDateTimeNoMillis();\n+ } else if (\"strictWeekyear\".equals(input) || \"strict_weekyear\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.weekyear();\n+ } else if (\"strictWeekyearWeek\".equals(input) || \"strict_weekyear_week\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.weekyearWeek();\n+ } else if (\"strictWeekyearWeekDay\".equals(input) || 
\"strict_weekyear_week_day\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.weekyearWeekDay();\n+ } else if (\"strictYear\".equals(input) || \"strict_year\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.year();\n+ } else if (\"strictYearMonth\".equals(input) || \"strict_year_month\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.yearMonth();\n+ } else if (\"strictYearMonthDay\".equals(input) || \"strict_year_month_day\".equals(input)) {\n+ formatter = StrictISODateTimeFormat.yearMonthDay();\n } else if (Strings.hasLength(input) && input.contains(\"||\")) {\n String[] formats = Strings.delimitedListToStringArray(input, \"||\");\n DateTimeParser[] parsers = new DateTimeParser[formats.length];\n@@ -171,6 +248,38 @@ public static FormatDateTimeFormatter forPattern(String input, Locale locale) {\n return new FormatDateTimeFormatter(input, formatter.withZone(DateTimeZone.UTC), locale);\n }\n \n+ public static FormatDateTimeFormatter getStrictStandardDateFormatter() {\n+ // 2014/10/10\n+ DateTimeFormatter shortFormatter = new DateTimeFormatterBuilder()\n+ .appendFixedDecimal(DateTimeFieldType.year(), 4)\n+ .appendLiteral('/')\n+ .appendFixedDecimal(DateTimeFieldType.monthOfYear(), 2)\n+ .appendLiteral('/')\n+ .appendFixedDecimal(DateTimeFieldType.dayOfMonth(), 2)\n+ .toFormatter()\n+ .withZoneUTC();\n+\n+ // 2014/10/10 12:12:12\n+ DateTimeFormatter longFormatter = new DateTimeFormatterBuilder()\n+ .appendFixedDecimal(DateTimeFieldType.year(), 4)\n+ .appendLiteral('/')\n+ .appendFixedDecimal(DateTimeFieldType.monthOfYear(), 2)\n+ .appendLiteral('/')\n+ .appendFixedDecimal(DateTimeFieldType.dayOfMonth(), 2)\n+ .appendLiteral(' ')\n+ .appendFixedSignedDecimal(DateTimeFieldType.hourOfDay(), 2)\n+ .appendLiteral(':')\n+ .appendFixedSignedDecimal(DateTimeFieldType.minuteOfHour(), 2)\n+ .appendLiteral(':')\n+ .appendFixedSignedDecimal(DateTimeFieldType.secondOfMinute(), 2)\n+ .toFormatter()\n+ .withZoneUTC();\n+\n+ DateTimeFormatterBuilder builder = new DateTimeFormatterBuilder().append(longFormatter.withZone(DateTimeZone.UTC).getPrinter(), new DateTimeParser[] {longFormatter.getParser(), shortFormatter.getParser()});\n+\n+ return new FormatDateTimeFormatter(\"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd\", builder.toFormatter().withZone(DateTimeZone.UTC), Locale.ROOT);\n+ }\n+\n \n public static final DurationFieldType Quarters = new DurationFieldType(\"quarters\") {\n private static final long serialVersionUID = -8167713675442491871L;",
"filename": "core/src/main/java/org/elasticsearch/common/joda/Joda.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,8 @@ public class DateFieldMapper extends NumberFieldMapper {\n public static final String CONTENT_TYPE = \"date\";\n \n public static class Defaults extends NumberFieldMapper.Defaults {\n- public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"dateOptionalTime||epoch_millis\", Locale.ROOT);\n+ public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"strictDateOptionalTime||epoch_millis\", Locale.ROOT);\n+ public static final FormatDateTimeFormatter DATE_TIME_FORMATTER_BEFORE_2_0 = Joda.forPattern(\"dateOptionalTime\", Locale.ROOT);\n public static final TimeUnit TIME_UNIT = TimeUnit.MILLISECONDS;\n public static final DateFieldType FIELD_TYPE = new DateFieldType();\n \n@@ -123,15 +124,13 @@ public DateFieldMapper build(BuilderContext context) {\n }\n \n protected void setupFieldType(BuilderContext context) {\n- FormatDateTimeFormatter dateTimeFormatter = fieldType().dateTimeFormatter;\n- // TODO MOVE ME OUTSIDE OF THIS SPACE?\n- if (Version.indexCreated(context.indexSettings()).before(Version.V_2_0_0)) {\n- boolean includesEpochFormatter = dateTimeFormatter.format().contains(\"epoch_\");\n- if (!includesEpochFormatter) {\n- String format = fieldType().timeUnit().equals(TimeUnit.SECONDS) ? \"epoch_second\" : \"epoch_millis\";\n- fieldType().setDateTimeFormatter(Joda.forPattern(format + \"||\" + dateTimeFormatter.format()));\n- }\n+ if (Version.indexCreated(context.indexSettings()).before(Version.V_2_0_0) &&\n+ !fieldType().dateTimeFormatter().format().contains(\"epoch_\")) {\n+ String format = fieldType().timeUnit().equals(TimeUnit.SECONDS) ? \"epoch_second\" : \"epoch_millis\";\n+ fieldType().setDateTimeFormatter(Joda.forPattern(format + \"||\" + fieldType().dateTimeFormatter().format()));\n }\n+\n+ FormatDateTimeFormatter dateTimeFormatter = fieldType().dateTimeFormatter;\n if (!locale.equals(dateTimeFormatter.locale())) {\n fieldType().setDateTimeFormatter(new FormatDateTimeFormatter(dateTimeFormatter.format(), dateTimeFormatter.parser(), dateTimeFormatter.printer(), locale));\n }\n@@ -159,6 +158,7 @@ public static class TypeParser implements Mapper.TypeParser {\n public Mapper.Builder<?, ?> parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n DateFieldMapper.Builder builder = dateField(name);\n parseNumberField(builder, name, node, parserContext);\n+ boolean configuredFormat = false;\n for (Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator(); iterator.hasNext();) {\n Map.Entry<String, Object> entry = iterator.next();\n String propName = Strings.toUnderscoreCase(entry.getKey());\n@@ -171,6 +171,7 @@ public static class TypeParser implements Mapper.TypeParser {\n iterator.remove();\n } else if (propName.equals(\"format\")) {\n builder.dateTimeFormatter(parseDateTimeFormatter(propNode));\n+ configuredFormat = true;\n iterator.remove();\n } else if (propName.equals(\"numeric_resolution\")) {\n builder.timeUnit(TimeUnit.valueOf(propNode.toString().toUpperCase(Locale.ROOT)));\n@@ -180,6 +181,13 @@ public static class TypeParser implements Mapper.TypeParser {\n iterator.remove();\n }\n }\n+ if (!configuredFormat) {\n+ if (parserContext.indexVersionCreated().onOrAfter(Version.V_2_0_0)) {\n+ builder.dateTimeFormatter(Defaults.DATE_TIME_FORMATTER);\n+ } else {\n+ builder.dateTimeFormatter(Defaults.DATE_TIME_FORMATTER_BEFORE_2_0);\n+ }\n+ }\n return builder;\n }\n }",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -24,14 +24,12 @@\n import org.apache.lucene.index.IndexOptions;\n import org.elasticsearch.Version;\n import org.elasticsearch.action.TimestampParsingException;\n-import org.elasticsearch.common.Explicit;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.analysis.NumericDateAnalyzer;\n import org.elasticsearch.index.fielddata.FieldDataType;\n import org.elasticsearch.index.mapper.MappedFieldType;\n@@ -41,10 +39,8 @@\n import org.elasticsearch.index.mapper.MergeResult;\n import org.elasticsearch.index.mapper.ParseContext;\n import org.elasticsearch.index.mapper.MetadataFieldMapper;\n-import org.elasticsearch.index.mapper.core.AbstractFieldMapper;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n import org.elasticsearch.index.mapper.core.LongFieldMapper;\n-import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n \n import java.io.IOException;\n import java.util.Iterator;\n@@ -59,15 +55,16 @@ public class TimestampFieldMapper extends MetadataFieldMapper {\n \n public static final String NAME = \"_timestamp\";\n public static final String CONTENT_TYPE = \"_timestamp\";\n- public static final String DEFAULT_DATE_TIME_FORMAT = \"epoch_millis||dateOptionalTime\";\n+ public static final String DEFAULT_DATE_TIME_FORMAT = \"epoch_millis||strictDateOptionalTime\";\n \n public static class Defaults extends DateFieldMapper.Defaults {\n public static final String NAME = \"_timestamp\";\n \n // TODO: this should be removed\n- public static final MappedFieldType PRE_20_FIELD_TYPE;\n- public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(DEFAULT_DATE_TIME_FORMAT);\n+ public static final TimestampFieldType PRE_20_FIELD_TYPE;\n public static final TimestampFieldType FIELD_TYPE = new TimestampFieldType();\n+ public static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(DEFAULT_DATE_TIME_FORMAT);\n+ public static final FormatDateTimeFormatter DATE_TIME_FORMATTER_BEFORE_2_0 = Joda.forPattern(\"epoch_millis||dateOptionalTime\");\n \n static {\n FIELD_TYPE.setStored(true);\n@@ -82,6 +79,9 @@ public static class Defaults extends DateFieldMapper.Defaults {\n PRE_20_FIELD_TYPE = FIELD_TYPE.clone();\n PRE_20_FIELD_TYPE.setStored(false);\n PRE_20_FIELD_TYPE.setHasDocValues(false);\n+ PRE_20_FIELD_TYPE.setDateTimeFormatter(DATE_TIME_FORMATTER_BEFORE_2_0);\n+ PRE_20_FIELD_TYPE.setIndexAnalyzer(NumericDateAnalyzer.buildNamedAnalyzer(DATE_TIME_FORMATTER_BEFORE_2_0, Defaults.PRECISION_STEP_64_BIT));\n+ PRE_20_FIELD_TYPE.setSearchAnalyzer(NumericDateAnalyzer.buildNamedAnalyzer(DATE_TIME_FORMATTER_BEFORE_2_0, Integer.MAX_VALUE));\n PRE_20_FIELD_TYPE.freeze();\n }\n \n@@ -146,8 +146,23 @@ public TimestampFieldMapper build(BuilderContext context) {\n if (explicitStore == false && context.indexCreatedVersion().before(Version.V_2_0_0)) {\n fieldType.setStored(false);\n }\n+\n+ if (fieldType().dateTimeFormatter().equals(Defaults.DATE_TIME_FORMATTER)) {\n+ fieldType().setDateTimeFormatter(getDateTimeFormatter(context.indexSettings()));\n+ }\n+\n setupFieldType(context);\n- return new TimestampFieldMapper(fieldType, defaultFieldType, enabledState, path, defaultTimestamp, ignoreMissing, 
context.indexSettings());\n+ return new TimestampFieldMapper(fieldType, defaultFieldType, enabledState, path, defaultTimestamp,\n+ ignoreMissing, context.indexSettings());\n+ }\n+ }\n+\n+ private static FormatDateTimeFormatter getDateTimeFormatter(Settings indexSettings) {\n+ Version indexCreated = Version.indexCreated(indexSettings);\n+ if (indexCreated.onOrAfter(Version.V_2_0_0)) {\n+ return Defaults.DATE_TIME_FORMATTER;\n+ } else {\n+ return Defaults.DATE_TIME_FORMATTER_BEFORE_2_0;\n }\n }\n \n@@ -341,7 +356,9 @@ && fieldType().dateTimeFormatter().format().equals(Defaults.DATE_TIME_FORMATTER.\n if (indexCreatedBefore2x && (includeDefaults || path != Defaults.PATH)) {\n builder.field(\"path\", path);\n }\n- if (includeDefaults || !fieldType().dateTimeFormatter().format().equals(Defaults.DATE_TIME_FORMATTER.format())) {\n+ // different format handling depending on index version\n+ String defaultDateFormat = indexCreatedBefore2x ? Defaults.DATE_TIME_FORMATTER_BEFORE_2_0.format() : Defaults.DATE_TIME_FORMATTER.format();\n+ if (includeDefaults || !fieldType().dateTimeFormatter().format().equals(defaultDateFormat)) {\n builder.field(\"format\", fieldType().dateTimeFormatter().format());\n }\n if (includeDefaults || !Defaults.DEFAULT_TIMESTAMP.equals(defaultTimestamp)) {",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -49,7 +49,7 @@ public static class Defaults {\n public static final FormatDateTimeFormatter[] DYNAMIC_DATE_TIME_FORMATTERS =\n new FormatDateTimeFormatter[]{\n DateFieldMapper.Defaults.DATE_TIME_FORMATTER,\n- Joda.forPattern(\"yyyy/MM/dd HH:mm:ss||yyyy/MM/dd\")\n+ Joda.getStrictStandardDateFormatter()\n };\n public static final boolean DATE_DETECTION = true;\n public static final boolean NUMERIC_DETECTION = false;",
"filename": "core/src/main/java/org/elasticsearch/index/mapper/object/RootObjectMapper.java",
"status": "modified"
},
{
"diff": "@@ -38,7 +38,7 @@\n */\n public class SnapshotInfo implements ToXContent, Streamable {\n \n- private static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"dateOptionalTime\");\n+ private static final FormatDateTimeFormatter DATE_TIME_FORMATTER = Joda.forPattern(\"strictDateOptionalTime\");\n \n private String name;\n ",
"filename": "core/src/main/java/org/elasticsearch/snapshots/SnapshotInfo.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,2028 @@\n+package org.joda.time.format;\n+\n+/*\n+ * Copyright 2001-2009 Stephen Colebourne\n+ *\n+ * Licensed under the Apache License, Version 2.0 (the \"License\");\n+ * you may not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+import org.joda.time.DateTimeFieldType;\n+\n+import java.util.Collection;\n+import java.util.HashSet;\n+import java.util.Set;\n+\n+/*\n+ * Elasticsearch Note: This class has been copied almost identically from joda, where the\n+ * class is named ISODatetimeFormat\n+ *\n+ * However there has been done one huge modification in several methods, which forces the date\n+ * year to be at least n digits, so that a year like \"5\" is invalid and must be \"0005\"\n+ *\n+ * All methods have been marked with an \"// ES change\" commentary\n+ *\n+ * In case you compare this with the original ISODateTimeFormat, make sure you use a diff\n+ * call, that ignores whitespaces/tabs/indendetations like 'diff -b'\n+ */\n+/**\n+ * Factory that creates instances of DateTimeFormatter based on the ISO8601 standard.\n+ * <p>\n+ * Date-time formatting is performed by the {@link DateTimeFormatter} class.\n+ * Three classes provide factory methods to create formatters, and this is one.\n+ * The others are {@link DateTimeFormat} and {@link DateTimeFormatterBuilder}.\n+ * <p>\n+ * ISO8601 is the international standard for data interchange. It defines a\n+ * framework, rather than an absolute standard. As a result this provider has a\n+ * number of methods that represent common uses of the framework. The most common\n+ * formats are {@link #date() date}, {@link #time() time}, and {@link #dateTime() dateTime}.\n+ * <p>\n+ * For example, to format a date time in ISO format:\n+ * <pre>\n+ * DateTime dt = new DateTime();\n+ * DateTimeFormatter fmt = ISODateTimeFormat.dateTime();\n+ * String str = fmt.print(dt);\n+ * </pre>\n+ * <p>\n+ * Note that these formatters mostly follow the ISO8601 standard for printing.\n+ * For parsing, the formatters are more lenient and allow formats that are not\n+ * in strict compliance with the standard.\n+ * <p>\n+ * It is important to understand that these formatters are not linked to\n+ * the <code>ISOChronology</code>. These formatters may be used with any\n+ * chronology, however there may be certain side effects with more unusual\n+ * chronologies. For example, the ISO formatters rely on dayOfWeek being\n+ * single digit, dayOfMonth being two digit and dayOfYear being three digit.\n+ * A chronology with a ten day week would thus cause issues. 
However, in\n+ * general, it is safe to use these formatters with other chronologies.\n+ * <p>\n+ * ISODateTimeFormat is thread-safe and immutable, and the formatters it\n+ * returns are as well.\n+ *\n+ * @author Brian S O'Neill\n+ * @since 1.0\n+ * @see DateTimeFormat\n+ * @see DateTimeFormatterBuilder\n+ */\n+public class StrictISODateTimeFormat {\n+\n+ /**\n+ * Constructor.\n+ *\n+ * @since 1.1 (previously private)\n+ */\n+ protected StrictISODateTimeFormat() {\n+ super();\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Returns a formatter that outputs only those fields specified.\n+ * <p>\n+ * This method examines the fields provided and returns an ISO-style\n+ * formatter that best fits. This can be useful for outputting\n+ * less-common ISO styles, such as YearMonth (YYYY-MM) or MonthDay (--MM-DD).\n+ * <p>\n+ * The list provided may have overlapping fields, such as dayOfWeek and\n+ * dayOfMonth. In this case, the style is chosen based on the following\n+ * list, thus in the example, the calendar style is chosen as dayOfMonth\n+ * is higher in priority than dayOfWeek:\n+ * <ul>\n+ * <li>monthOfYear - calendar date style\n+ * <li>dayOfYear - ordinal date style\n+ * <li>weekOfWeekYear - week date style\n+ * <li>dayOfMonth - calendar date style\n+ * <li>dayOfWeek - week date style\n+ * <li>year\n+ * <li>weekyear\n+ * </ul>\n+ * The supported formats are:\n+ * <pre>\n+ * Extended Basic Fields\n+ * 2005-03-25 20050325 year/monthOfYear/dayOfMonth\n+ * 2005-03 2005-03 year/monthOfYear\n+ * 2005--25 2005--25 year/dayOfMonth *\n+ * 2005 2005 year\n+ * --03-25 --0325 monthOfYear/dayOfMonth\n+ * --03 --03 monthOfYear\n+ * ---03 ---03 dayOfMonth\n+ * 2005-084 2005084 year/dayOfYear\n+ * -084 -084 dayOfYear\n+ * 2005-W12-5 2005W125 weekyear/weekOfWeekyear/dayOfWeek\n+ * 2005-W-5 2005W-5 weekyear/dayOfWeek *\n+ * 2005-W12 2005W12 weekyear/weekOfWeekyear\n+ * -W12-5 -W125 weekOfWeekyear/dayOfWeek\n+ * -W12 -W12 weekOfWeekyear\n+ * -W-5 -W-5 dayOfWeek\n+ * 10:20:30.040 102030.040 hour/minute/second/milli\n+ * 10:20:30 102030 hour/minute/second\n+ * 10:20 1020 hour/minute\n+ * 10 10 hour\n+ * -20:30.040 -2030.040 minute/second/milli\n+ * -20:30 -2030 minute/second\n+ * -20 -20 minute\n+ * --30.040 --30.040 second/milli\n+ * --30 --30 second\n+ * ---.040 ---.040 milli *\n+ * 10-30.040 10-30.040 hour/second/milli *\n+ * 10:20-.040 1020-.040 hour/minute/milli *\n+ * 10-30 10-30 hour/second *\n+ * 10--.040 10--.040 hour/milli *\n+ * -20-.040 -20-.040 minute/milli *\n+ * plus datetime formats like {date}T{time}\n+ * </pre>\n+ * * indiates that this is not an official ISO format and can be excluded\n+ * by passing in <code>strictISO</code> as <code>true</code>.\n+ * <p>\n+ * This method can side effect the input collection of fields.\n+ * If the input collection is modifiable, then each field that was added to\n+ * the formatter will be removed from the collection, including any duplicates.\n+ * If the input collection is unmodifiable then no side effect occurs.\n+ * <p>\n+ * This side effect processing is useful if you need to know whether all\n+ * the fields were converted into the formatter or not. 
To achieve this,\n+ * pass in a modifiable list, and check that it is empty on exit.\n+ *\n+ * @param fields the fields to get a formatter for, not null,\n+ * updated by the method call unless unmodifiable,\n+ * removing those fields built in the formatter\n+ * @param extended true to use the extended format (with separators)\n+ * @param strictISO true to stick exactly to ISO8601, false to include additional formats\n+ * @return a suitable formatter\n+ * @throws IllegalArgumentException if there is no format for the fields\n+ * @since 1.1\n+ */\n+ public static DateTimeFormatter forFields(\n+ Collection<DateTimeFieldType> fields,\n+ boolean extended,\n+ boolean strictISO) {\n+\n+ if (fields == null || fields.size() == 0) {\n+ throw new IllegalArgumentException(\"The fields must not be null or empty\");\n+ }\n+ Set<DateTimeFieldType> workingFields = new HashSet<DateTimeFieldType>(fields);\n+ int inputSize = workingFields.size();\n+ boolean reducedPrec = false;\n+ DateTimeFormatterBuilder bld = new DateTimeFormatterBuilder();\n+ // date\n+ if (workingFields.contains(DateTimeFieldType.monthOfYear())) {\n+ reducedPrec = dateByMonth(bld, workingFields, extended, strictISO);\n+ } else if (workingFields.contains(DateTimeFieldType.dayOfYear())) {\n+ reducedPrec = dateByOrdinal(bld, workingFields, extended, strictISO);\n+ } else if (workingFields.contains(DateTimeFieldType.weekOfWeekyear())) {\n+ reducedPrec = dateByWeek(bld, workingFields, extended, strictISO);\n+ } else if (workingFields.contains(DateTimeFieldType.dayOfMonth())) {\n+ reducedPrec = dateByMonth(bld, workingFields, extended, strictISO);\n+ } else if (workingFields.contains(DateTimeFieldType.dayOfWeek())) {\n+ reducedPrec = dateByWeek(bld, workingFields, extended, strictISO);\n+ } else if (workingFields.remove(DateTimeFieldType.year())) {\n+ bld.append(Constants.ye);\n+ reducedPrec = true;\n+ } else if (workingFields.remove(DateTimeFieldType.weekyear())) {\n+ bld.append(Constants.we);\n+ reducedPrec = true;\n+ }\n+ boolean datePresent = (workingFields.size() < inputSize);\n+\n+ // time\n+ time(bld, workingFields, extended, strictISO, reducedPrec, datePresent);\n+\n+ // result\n+ if (bld.canBuildFormatter() == false) {\n+ throw new IllegalArgumentException(\"No valid format for fields: \" + fields);\n+ }\n+\n+ // side effect the input collection to indicate the processed fields\n+ // handling unmodifiable collections with no side effect\n+ try {\n+ fields.retainAll(workingFields);\n+ } catch (UnsupportedOperationException ex) {\n+ // ignore, so we can handle unmodifiable collections\n+ }\n+ return bld.toFormatter();\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Creates a date using the calendar date format.\n+ * Specification reference: 5.2.1.\n+ *\n+ * @param bld the builder\n+ * @param fields the fields\n+ * @param extended true to use extended format\n+ * @param strictISO true to only allow ISO formats\n+ * @return true if reduced precision\n+ * @since 1.1\n+ */\n+ private static boolean dateByMonth(\n+ DateTimeFormatterBuilder bld,\n+ Collection<DateTimeFieldType> fields,\n+ boolean extended,\n+ boolean strictISO) {\n+\n+ boolean reducedPrec = false;\n+ if (fields.remove(DateTimeFieldType.year())) {\n+ bld.append(Constants.ye);\n+ if (fields.remove(DateTimeFieldType.monthOfYear())) {\n+ if (fields.remove(DateTimeFieldType.dayOfMonth())) {\n+ // YYYY-MM-DD/YYYYMMDD\n+ appendSeparator(bld, extended);\n+ bld.appendMonthOfYear(2);\n+ appendSeparator(bld, extended);\n+ 
bld.appendDayOfMonth(2);\n+ } else {\n+ // YYYY-MM/YYYY-MM\n+ bld.appendLiteral('-');\n+ bld.appendMonthOfYear(2);\n+ reducedPrec = true;\n+ }\n+ } else {\n+ if (fields.remove(DateTimeFieldType.dayOfMonth())) {\n+ // YYYY--DD/YYYY--DD (non-iso)\n+ checkNotStrictISO(fields, strictISO);\n+ bld.appendLiteral('-');\n+ bld.appendLiteral('-');\n+ bld.appendDayOfMonth(2);\n+ } else {\n+ // YYYY/YYYY\n+ reducedPrec = true;\n+ }\n+ }\n+\n+ } else if (fields.remove(DateTimeFieldType.monthOfYear())) {\n+ bld.appendLiteral('-');\n+ bld.appendLiteral('-');\n+ bld.appendMonthOfYear(2);\n+ if (fields.remove(DateTimeFieldType.dayOfMonth())) {\n+ // --MM-DD/--MMDD\n+ appendSeparator(bld, extended);\n+ bld.appendDayOfMonth(2);\n+ } else {\n+ // --MM/--MM\n+ reducedPrec = true;\n+ }\n+ } else if (fields.remove(DateTimeFieldType.dayOfMonth())) {\n+ // ---DD/---DD\n+ bld.appendLiteral('-');\n+ bld.appendLiteral('-');\n+ bld.appendLiteral('-');\n+ bld.appendDayOfMonth(2);\n+ }\n+ return reducedPrec;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Creates a date using the ordinal date format.\n+ * Specification reference: 5.2.2.\n+ *\n+ * @param bld the builder\n+ * @param fields the fields\n+ * @param extended true to use extended format\n+ * @param strictISO true to only allow ISO formats\n+ * @since 1.1\n+ */\n+ private static boolean dateByOrdinal(\n+ DateTimeFormatterBuilder bld,\n+ Collection<DateTimeFieldType> fields,\n+ boolean extended,\n+ boolean strictISO) {\n+\n+ boolean reducedPrec = false;\n+ if (fields.remove(DateTimeFieldType.year())) {\n+ bld.append(Constants.ye);\n+ if (fields.remove(DateTimeFieldType.dayOfYear())) {\n+ // YYYY-DDD/YYYYDDD\n+ appendSeparator(bld, extended);\n+ bld.appendDayOfYear(3);\n+ } else {\n+ // YYYY/YYYY\n+ reducedPrec = true;\n+ }\n+\n+ } else if (fields.remove(DateTimeFieldType.dayOfYear())) {\n+ // -DDD/-DDD\n+ bld.appendLiteral('-');\n+ bld.appendDayOfYear(3);\n+ }\n+ return reducedPrec;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Creates a date using the calendar date format.\n+ * Specification reference: 5.2.3.\n+ *\n+ * @param bld the builder\n+ * @param fields the fields\n+ * @param extended true to use extended format\n+ * @param strictISO true to only allow ISO formats\n+ * @since 1.1\n+ */\n+ private static boolean dateByWeek(\n+ DateTimeFormatterBuilder bld,\n+ Collection<DateTimeFieldType> fields,\n+ boolean extended,\n+ boolean strictISO) {\n+\n+ boolean reducedPrec = false;\n+ if (fields.remove(DateTimeFieldType.weekyear())) {\n+ bld.append(Constants.we);\n+ if (fields.remove(DateTimeFieldType.weekOfWeekyear())) {\n+ appendSeparator(bld, extended);\n+ bld.appendLiteral('W');\n+ bld.appendWeekOfWeekyear(2);\n+ if (fields.remove(DateTimeFieldType.dayOfWeek())) {\n+ // YYYY-WWW-D/YYYYWWWD\n+ appendSeparator(bld, extended);\n+ bld.appendDayOfWeek(1);\n+ } else {\n+ // YYYY-WWW/YYYY-WWW\n+ reducedPrec = true;\n+ }\n+ } else {\n+ if (fields.remove(DateTimeFieldType.dayOfWeek())) {\n+ // YYYY-W-D/YYYYW-D (non-iso)\n+ checkNotStrictISO(fields, strictISO);\n+ appendSeparator(bld, extended);\n+ bld.appendLiteral('W');\n+ bld.appendLiteral('-');\n+ bld.appendDayOfWeek(1);\n+ } else {\n+ // YYYY/YYYY\n+ reducedPrec = true;\n+ }\n+ }\n+\n+ } else if (fields.remove(DateTimeFieldType.weekOfWeekyear())) {\n+ bld.appendLiteral('-');\n+ bld.appendLiteral('W');\n+ bld.appendWeekOfWeekyear(2);\n+ if (fields.remove(DateTimeFieldType.dayOfWeek())) {\n+ // -WWW-D/-WWWD\n+ 
appendSeparator(bld, extended);\n+ bld.appendDayOfWeek(1);\n+ } else {\n+ // -WWW/-WWW\n+ reducedPrec = true;\n+ }\n+ } else if (fields.remove(DateTimeFieldType.dayOfWeek())) {\n+ // -W-D/-W-D\n+ bld.appendLiteral('-');\n+ bld.appendLiteral('W');\n+ bld.appendLiteral('-');\n+ bld.appendDayOfWeek(1);\n+ }\n+ return reducedPrec;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Adds the time fields to the builder.\n+ * Specification reference: 5.3.1.\n+ *\n+ * @param bld the builder\n+ * @param fields the fields\n+ * @param extended whether to use the extended format\n+ * @param strictISO whether to be strict\n+ * @param reducedPrec whether the date was reduced precision\n+ * @param datePresent whether there was a date\n+ * @since 1.1\n+ */\n+ private static void time(\n+ DateTimeFormatterBuilder bld,\n+ Collection<DateTimeFieldType> fields,\n+ boolean extended,\n+ boolean strictISO,\n+ boolean reducedPrec,\n+ boolean datePresent) {\n+\n+ boolean hour = fields.remove(DateTimeFieldType.hourOfDay());\n+ boolean minute = fields.remove(DateTimeFieldType.minuteOfHour());\n+ boolean second = fields.remove(DateTimeFieldType.secondOfMinute());\n+ boolean milli = fields.remove(DateTimeFieldType.millisOfSecond());\n+ if (!hour && !minute && !second && !milli) {\n+ return;\n+ }\n+ if (hour || minute || second || milli) {\n+ if (strictISO && reducedPrec) {\n+ throw new IllegalArgumentException(\"No valid ISO8601 format for fields because Date was reduced precision: \" + fields);\n+ }\n+ if (datePresent) {\n+ bld.appendLiteral('T');\n+ }\n+ }\n+ if (hour && minute && second || (hour && !second && !milli)) {\n+ // OK - HMSm/HMS/HM/H - valid in combination with date\n+ } else {\n+ if (strictISO && datePresent) {\n+ throw new IllegalArgumentException(\"No valid ISO8601 format for fields because Time was truncated: \" + fields);\n+ }\n+ if (!hour && (minute && second || (minute && !milli) || second)) {\n+ // OK - MSm/MS/M/Sm/S - valid ISO formats\n+ } else {\n+ if (strictISO) {\n+ throw new IllegalArgumentException(\"No valid ISO8601 format for fields: \" + fields);\n+ }\n+ }\n+ }\n+ if (hour) {\n+ bld.appendHourOfDay(2);\n+ } else if (minute || second || milli) {\n+ bld.appendLiteral('-');\n+ }\n+ if (extended && hour && minute) {\n+ bld.appendLiteral(':');\n+ }\n+ if (minute) {\n+ bld.appendMinuteOfHour(2);\n+ } else if (second || milli) {\n+ bld.appendLiteral('-');\n+ }\n+ if (extended && minute && second) {\n+ bld.appendLiteral(':');\n+ }\n+ if (second) {\n+ bld.appendSecondOfMinute(2);\n+ } else if (milli) {\n+ bld.appendLiteral('-');\n+ }\n+ if (milli) {\n+ bld.appendLiteral('.');\n+ bld.appendMillisOfSecond(3);\n+ }\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Checks that the iso only flag is not set, throwing an exception if it is.\n+ *\n+ * @param fields the fields\n+ * @param strictISO true if only ISO formats allowed\n+ * @since 1.1\n+ */\n+ private static void checkNotStrictISO(Collection<DateTimeFieldType> fields, boolean strictISO) {\n+ if (strictISO) {\n+ throw new IllegalArgumentException(\"No valid ISO8601 format for fields: \" + fields);\n+ }\n+ }\n+\n+ /**\n+ * Appends the separator if necessary.\n+ *\n+ * @param bld the builder\n+ * @param extended whether to append the separator\n+ * @since 1.1\n+ */\n+ private static void appendSeparator(DateTimeFormatterBuilder bld, boolean extended) {\n+ if (extended) {\n+ bld.appendLiteral('-');\n+ }\n+ }\n+\n+ 
//-----------------------------------------------------------------------\n+ /**\n+ * Returns a generic ISO date parser for parsing dates with a possible zone.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * date = date-element ['T' offset]\n+ * date-element = std-date-element | ord-date-element | week-date-element\n+ * std-date-element = yyyy ['-' MM ['-' dd]]\n+ * ord-date-element = yyyy ['-' DDD]\n+ * week-date-element = xxxx '-W' ww ['-' e]\n+ * offset = 'Z' | (('+' | '-') HH [':' mm [':' ss [('.' | ',') SSS]]])\n+ * </pre>\n+ */\n+ public static DateTimeFormatter dateParser() {\n+ return Constants.dp;\n+ }\n+\n+ /**\n+ * Returns a generic ISO date parser for parsing local dates.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * This parser is initialised with the local (UTC) time zone.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * date-element = std-date-element | ord-date-element | week-date-element\n+ * std-date-element = yyyy ['-' MM ['-' dd]]\n+ * ord-date-element = yyyy ['-' DDD]\n+ * week-date-element = xxxx '-W' ww ['-' e]\n+ * </pre>\n+ * @since 1.3\n+ */\n+ public static DateTimeFormatter localDateParser() {\n+ return Constants.ldp;\n+ }\n+\n+ /**\n+ * Returns a generic ISO date parser for parsing dates.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * date-element = std-date-element | ord-date-element | week-date-element\n+ * std-date-element = yyyy ['-' MM ['-' dd]]\n+ * ord-date-element = yyyy ['-' DDD]\n+ * week-date-element = xxxx '-W' ww ['-' e]\n+ * </pre>\n+ */\n+ public static DateTimeFormatter dateElementParser() {\n+ return Constants.dpe;\n+ }\n+\n+ /**\n+ * Returns a generic ISO time parser for parsing times with a possible zone.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * time = ['T'] time-element [offset]\n+ * time-element = HH [minute-element] | [fraction]\n+ * minute-element = ':' mm [second-element] | [fraction]\n+ * second-element = ':' ss [fraction]\n+ * fraction = ('.' | ',') digit+\n+ * offset = 'Z' | (('+' | '-') HH [':' mm [':' ss [('.' | ',') SSS]]])\n+ * </pre>\n+ */\n+ public static DateTimeFormatter timeParser() {\n+ return Constants.tp;\n+ }\n+\n+ /**\n+ * Returns a generic ISO time parser for parsing local times.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * This parser is initialised with the local (UTC) time zone.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * time = ['T'] time-element\n+ * time-element = HH [minute-element] | [fraction]\n+ * minute-element = ':' mm [second-element] | [fraction]\n+ * second-element = ':' ss [fraction]\n+ * fraction = ('.' 
| ',') digit+\n+ * </pre>\n+ * @since 1.3\n+ */\n+ public static DateTimeFormatter localTimeParser() {\n+ return Constants.ltp;\n+ }\n+\n+ /**\n+ * Returns a generic ISO time parser.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * time-element = HH [minute-element] | [fraction]\n+ * minute-element = ':' mm [second-element] | [fraction]\n+ * second-element = ':' ss [fraction]\n+ * fraction = ('.' | ',') digit+\n+ * </pre>\n+ */\n+ public static DateTimeFormatter timeElementParser() {\n+ return Constants.tpe;\n+ }\n+\n+ /**\n+ * Returns a generic ISO datetime parser which parses either a date or a time or both.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * datetime = time | date-opt-time\n+ * time = 'T' time-element [offset]\n+ * date-opt-time = date-element ['T' [time-element] [offset]]\n+ * date-element = std-date-element | ord-date-element | week-date-element\n+ * std-date-element = yyyy ['-' MM ['-' dd]]\n+ * ord-date-element = yyyy ['-' DDD]\n+ * week-date-element = xxxx '-W' ww ['-' e]\n+ * time-element = HH [minute-element] | [fraction]\n+ * minute-element = ':' mm [second-element] | [fraction]\n+ * second-element = ':' ss [fraction]\n+ * fraction = ('.' | ',') digit+\n+ * offset = 'Z' | (('+' | '-') HH [':' mm [':' ss [('.' | ',') SSS]]])\n+ * </pre>\n+ */\n+ public static DateTimeFormatter dateTimeParser() {\n+ return Constants.dtp;\n+ }\n+\n+ /**\n+ * Returns a generic ISO datetime parser where the date is mandatory and the time is optional.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * This parser can parse zoned datetimes.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * date-opt-time = date-element ['T' [time-element] [offset]]\n+ * date-element = std-date-element | ord-date-element | week-date-element\n+ * std-date-element = yyyy ['-' MM ['-' dd]]\n+ * ord-date-element = yyyy ['-' DDD]\n+ * week-date-element = xxxx '-W' ww ['-' e]\n+ * time-element = HH [minute-element] | [fraction]\n+ * minute-element = ':' mm [second-element] | [fraction]\n+ * second-element = ':' ss [fraction]\n+ * fraction = ('.' 
| ',') digit+\n+ * </pre>\n+ * @since 1.3\n+ */\n+ public static DateTimeFormatter dateOptionalTimeParser() {\n+ return Constants.dotp;\n+ }\n+\n+ /**\n+ * Returns a generic ISO datetime parser where the date is mandatory and the time is optional.\n+ * <p>\n+ * The returned formatter can only be used for parsing, printing is unsupported.\n+ * <p>\n+ * This parser only parses local datetimes.\n+ * This parser is initialised with the local (UTC) time zone.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * It accepts formats described by the following syntax:\n+ * <pre>\n+ * datetime = date-element ['T' time-element]\n+ * date-element = std-date-element | ord-date-element | week-date-element\n+ * std-date-element = yyyy ['-' MM ['-' dd]]\n+ * ord-date-element = yyyy ['-' DDD]\n+ * week-date-element = xxxx '-W' ww ['-' e]\n+ * time-element = HH [minute-element] | [fraction]\n+ * minute-element = ':' mm [second-element] | [fraction]\n+ * second-element = ':' ss [fraction]\n+ * fraction = ('.' | ',') digit+\n+ * </pre>\n+ * @since 1.3\n+ */\n+ public static DateTimeFormatter localDateOptionalTimeParser() {\n+ return Constants.ldotp;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Returns a formatter for a full date as four digit year, two digit month\n+ * of year, and two digit day of month (yyyy-MM-dd).\n+ * <p>\n+ * The returned formatter prints and parses only this format.\n+ * See {@link #dateParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for yyyy-MM-dd\n+ */\n+ public static DateTimeFormatter date() {\n+ return yearMonthDay();\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day, two digit minute of\n+ * hour, two digit second of minute, three digit fraction of second, and\n+ * time zone offset (HH:mm:ss.SSSZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which includes milliseconds.\n+ * See {@link #timeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for HH:mm:ss.SSSZZ\n+ */\n+ public static DateTimeFormatter time() {\n+ return Constants.t;\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day, two digit minute of\n+ * hour, two digit second of minute, and time zone offset (HH:mm:ssZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ * See {@link #timeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for HH:mm:ssZZ\n+ */\n+ public static DateTimeFormatter timeNoMillis() {\n+ return Constants.tx;\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day, two digit minute of\n+ * hour, two digit second of minute, three digit fraction of second, and\n+ * time zone offset prefixed by 'T' ('T'HH:mm:ss.SSSZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, 
which includes milliseconds.\n+ * See {@link #timeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for 'T'HH:mm:ss.SSSZZ\n+ */\n+ public static DateTimeFormatter tTime() {\n+ return Constants.tt;\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day, two digit minute of\n+ * hour, two digit second of minute, and time zone offset prefixed\n+ * by 'T' ('T'HH:mm:ssZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ * See {@link #timeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for 'T'HH:mm:ssZZ\n+ */\n+ public static DateTimeFormatter tTimeNoMillis() {\n+ return Constants.ttx;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full date and time, separated by a 'T'\n+ * (yyyy-MM-dd'T'HH:mm:ss.SSSZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which includes milliseconds.\n+ * See {@link #dateTimeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for yyyy-MM-dd'T'HH:mm:ss.SSSZZ\n+ */\n+ public static DateTimeFormatter dateTime() {\n+ return Constants.dt;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full date and time without millis,\n+ * separated by a 'T' (yyyy-MM-dd'T'HH:mm:ssZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ * See {@link #dateTimeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for yyyy-MM-dd'T'HH:mm:ssZZ\n+ */\n+ public static DateTimeFormatter dateTimeNoMillis() {\n+ return Constants.dtx;\n+ }\n+\n+ /**\n+ * Returns a formatter for a full ordinal date, using a four\n+ * digit year and three digit dayOfYear (yyyy-DDD).\n+ * <p>\n+ * The returned formatter prints and parses only this format.\n+ * See {@link #dateParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for yyyy-DDD\n+ * @since 1.1\n+ */\n+ public static DateTimeFormatter ordinalDate() {\n+ return Constants.od;\n+ }\n+\n+ /**\n+ * Returns a formatter for a full ordinal date and time, using a four\n+ * digit year and three digit dayOfYear (yyyy-DDD'T'HH:mm:ss.SSSZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which includes milliseconds.\n+ * See {@link #dateTimeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for yyyy-DDD'T'HH:mm:ss.SSSZZ\n+ * @since 1.1\n+ */\n+ public static DateTimeFormatter ordinalDateTime() {\n+ return Constants.odt;\n+ }\n+\n+ /**\n+ * Returns a formatter for a full ordinal date and time without millis,\n+ * using a four digit year and three digit dayOfYear 
(yyyy-DDD'T'HH:mm:ssZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ * See {@link #dateTimeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for yyyy-DDD'T'HH:mm:ssZZ\n+ * @since 1.1\n+ */\n+ public static DateTimeFormatter ordinalDateTimeNoMillis() {\n+ return Constants.odtx;\n+ }\n+\n+ /**\n+ * Returns a formatter for a full date as four digit weekyear, two digit\n+ * week of weekyear, and one digit day of week (xxxx-'W'ww-e).\n+ * <p>\n+ * The returned formatter prints and parses only this format.\n+ * See {@link #dateParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for xxxx-'W'ww-e\n+ */\n+ public static DateTimeFormatter weekDate() {\n+ return Constants.wwd;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full weekyear date and time,\n+ * separated by a 'T' (xxxx-'W'ww-e'T'HH:mm:ss.SSSZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which includes milliseconds.\n+ * See {@link #dateTimeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for xxxx-'W'ww-e'T'HH:mm:ss.SSSZZ\n+ */\n+ public static DateTimeFormatter weekDateTime() {\n+ return Constants.wdt;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full weekyear date and time without millis,\n+ * separated by a 'T' (xxxx-'W'ww-e'T'HH:mm:ssZZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HH:mm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ * See {@link #dateTimeParser()} for a more flexible parser that accepts different formats.\n+ *\n+ * @return a formatter for xxxx-'W'ww-e'T'HH:mm:ssZZ\n+ */\n+ public static DateTimeFormatter weekDateTimeNoMillis() {\n+ return Constants.wdtx;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Returns a basic formatter for a full date as four digit year, two digit\n+ * month of year, and two digit day of month (yyyyMMdd).\n+ * <p>\n+ * The returned formatter prints and parses only this format.\n+ *\n+ * @return a formatter for yyyyMMdd\n+ */\n+ public static DateTimeFormatter basicDate() {\n+ return Constants.bd;\n+ }\n+\n+ /**\n+ * Returns a basic formatter for a two digit hour of day, two digit minute\n+ * of hour, two digit second of minute, three digit millis, and time zone\n+ * offset (HHmmss.SSSZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which includes milliseconds.\n+ *\n+ * @return a formatter for HHmmss.SSSZ\n+ */\n+ public static DateTimeFormatter basicTime() {\n+ return Constants.bt;\n+ }\n+\n+ /**\n+ * Returns a basic formatter for a two digit hour of day, two digit minute\n+ * of hour, two digit second of minute, and time zone offset 
(HHmmssZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ *\n+ * @return a formatter for HHmmssZ\n+ */\n+ public static DateTimeFormatter basicTimeNoMillis() {\n+ return Constants.btx;\n+ }\n+\n+ /**\n+ * Returns a basic formatter for a two digit hour of day, two digit minute\n+ * of hour, two digit second of minute, three digit millis, and time zone\n+ * offset prefixed by 'T' ('T'HHmmss.SSSZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which includes milliseconds.\n+ *\n+ * @return a formatter for 'T'HHmmss.SSSZ\n+ */\n+ public static DateTimeFormatter basicTTime() {\n+ return Constants.btt;\n+ }\n+\n+ /**\n+ * Returns a basic formatter for a two digit hour of day, two digit minute\n+ * of hour, two digit second of minute, and time zone offset prefixed by 'T'\n+ * ('T'HHmmssZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ *\n+ * @return a formatter for 'T'HHmmssZ\n+ */\n+ public static DateTimeFormatter basicTTimeNoMillis() {\n+ return Constants.bttx;\n+ }\n+\n+ /**\n+ * Returns a basic formatter that combines a basic date and time, separated\n+ * by a 'T' (yyyyMMdd'T'HHmmss.SSSZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which includes milliseconds.\n+ *\n+ * @return a formatter for yyyyMMdd'T'HHmmss.SSSZ\n+ */\n+ public static DateTimeFormatter basicDateTime() {\n+ return Constants.bdt;\n+ }\n+\n+ /**\n+ * Returns a basic formatter that combines a basic date and time without millis,\n+ * separated by a 'T' (yyyyMMdd'T'HHmmssZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ *\n+ * @return a formatter for yyyyMMdd'T'HHmmssZ\n+ */\n+ public static DateTimeFormatter basicDateTimeNoMillis() {\n+ return Constants.bdtx;\n+ }\n+\n+ /**\n+ * Returns a formatter for a full ordinal date, using a four\n+ * digit year and three digit dayOfYear (yyyyDDD).\n+ * <p>\n+ * The returned formatter prints and parses only this format.\n+ *\n+ * @return a formatter for yyyyDDD\n+ * @since 1.1\n+ */\n+ public static DateTimeFormatter basicOrdinalDate() {\n+ return Constants.bod;\n+ }\n+\n+ /**\n+ * Returns a formatter for a full ordinal date and time, using a four\n+ * digit year and three digit dayOfYear (yyyyDDD'T'HHmmss.SSSZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this 
format, which includes milliseconds.\n+ *\n+ * @return a formatter for yyyyDDD'T'HHmmss.SSSZ\n+ * @since 1.1\n+ */\n+ public static DateTimeFormatter basicOrdinalDateTime() {\n+ return Constants.bodt;\n+ }\n+\n+ /**\n+ * Returns a formatter for a full ordinal date and time without millis,\n+ * using a four digit year and three digit dayOfYear (yyyyDDD'T'HHmmssZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ *\n+ * @return a formatter for yyyyDDD'T'HHmmssZ\n+ * @since 1.1\n+ */\n+ public static DateTimeFormatter basicOrdinalDateTimeNoMillis() {\n+ return Constants.bodtx;\n+ }\n+\n+ /**\n+ * Returns a basic formatter for a full date as four digit weekyear, two\n+ * digit week of weekyear, and one digit day of week (xxxx'W'wwe).\n+ * <p>\n+ * The returned formatter prints and parses only this format.\n+ *\n+ * @return a formatter for xxxx'W'wwe\n+ */\n+ public static DateTimeFormatter basicWeekDate() {\n+ return Constants.bwd;\n+ }\n+\n+ /**\n+ * Returns a basic formatter that combines a basic weekyear date and time,\n+ * separated by a 'T' (xxxx'W'wwe'T'HHmmss.SSSZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which includes milliseconds.\n+ *\n+ * @return a formatter for xxxx'W'wwe'T'HHmmss.SSSZ\n+ */\n+ public static DateTimeFormatter basicWeekDateTime() {\n+ return Constants.bwdt;\n+ }\n+\n+ /**\n+ * Returns a basic formatter that combines a basic weekyear date and time\n+ * without millis, separated by a 'T' (xxxx'W'wwe'T'HHmmssZ).\n+ * <p>\n+ * The time zone offset is 'Z' for zero, and of the form '\\u00b1HHmm' for non-zero.\n+ * The parser is strict by default, thus time string {@code 24:00} cannot be parsed.\n+ * <p>\n+ * The returned formatter prints and parses only this format, which excludes milliseconds.\n+ *\n+ * @return a formatter for xxxx'W'wwe'T'HHmmssZ\n+ */\n+ public static DateTimeFormatter basicWeekDateTimeNoMillis() {\n+ return Constants.bwdtx;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ /**\n+ * Returns a formatter for a four digit year. (yyyy)\n+ *\n+ * @return a formatter for yyyy\n+ */\n+ public static DateTimeFormatter year() {\n+ return Constants.ye;\n+ }\n+\n+ /**\n+ * Returns a formatter for a four digit year and two digit month of\n+ * year. (yyyy-MM)\n+ *\n+ * @return a formatter for yyyy-MM\n+ */\n+ public static DateTimeFormatter yearMonth() {\n+ return Constants.ym;\n+ }\n+\n+ /**\n+ * Returns a formatter for a four digit year, two digit month of year, and\n+ * two digit day of month. (yyyy-MM-dd)\n+ *\n+ * @return a formatter for yyyy-MM-dd\n+ */\n+ public static DateTimeFormatter yearMonthDay() {\n+ return Constants.ymd;\n+ }\n+\n+ /**\n+ * Returns a formatter for a four digit weekyear. (xxxx)\n+ *\n+ * @return a formatter for xxxx\n+ */\n+ public static DateTimeFormatter weekyear() {\n+ return Constants.we;\n+ }\n+\n+ /**\n+ * Returns a formatter for a four digit weekyear and two digit week of\n+ * weekyear. 
(xxxx-'W'ww)\n+ *\n+ * @return a formatter for xxxx-'W'ww\n+ */\n+ public static DateTimeFormatter weekyearWeek() {\n+ return Constants.ww;\n+ }\n+\n+ /**\n+ * Returns a formatter for a four digit weekyear, two digit week of\n+ * weekyear, and one digit day of week. (xxxx-'W'ww-e)\n+ *\n+ * @return a formatter for xxxx-'W'ww-e\n+ */\n+ public static DateTimeFormatter weekyearWeekDay() {\n+ return Constants.wwd;\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day. (HH)\n+ *\n+ * @return a formatter for HH\n+ */\n+ public static DateTimeFormatter hour() {\n+ return Constants.hde;\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day and two digit minute of\n+ * hour. (HH:mm)\n+ *\n+ * @return a formatter for HH:mm\n+ */\n+ public static DateTimeFormatter hourMinute() {\n+ return Constants.hm;\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day, two digit minute of\n+ * hour, and two digit second of minute. (HH:mm:ss)\n+ *\n+ * @return a formatter for HH:mm:ss\n+ */\n+ public static DateTimeFormatter hourMinuteSecond() {\n+ return Constants.hms;\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day, two digit minute of\n+ * hour, two digit second of minute, and three digit fraction of\n+ * second (HH:mm:ss.SSS). Parsing will parse up to 3 fractional second\n+ * digits.\n+ *\n+ * @return a formatter for HH:mm:ss.SSS\n+ */\n+ public static DateTimeFormatter hourMinuteSecondMillis() {\n+ return Constants.hmsl;\n+ }\n+\n+ /**\n+ * Returns a formatter for a two digit hour of day, two digit minute of\n+ * hour, two digit second of minute, and three digit fraction of\n+ * second (HH:mm:ss.SSS). Parsing will parse up to 9 fractional second\n+ * digits, throwing away all except the first three.\n+ *\n+ * @return a formatter for HH:mm:ss.SSS\n+ */\n+ public static DateTimeFormatter hourMinuteSecondFraction() {\n+ return Constants.hmsf;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full date and two digit hour of\n+ * day. (yyyy-MM-dd'T'HH)\n+ *\n+ * @return a formatter for yyyy-MM-dd'T'HH\n+ */\n+ public static DateTimeFormatter dateHour() {\n+ return Constants.dh;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full date, two digit hour of day,\n+ * and two digit minute of hour. (yyyy-MM-dd'T'HH:mm)\n+ *\n+ * @return a formatter for yyyy-MM-dd'T'HH:mm\n+ */\n+ public static DateTimeFormatter dateHourMinute() {\n+ return Constants.dhm;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full date, two digit hour of day,\n+ * two digit minute of hour, and two digit second of\n+ * minute. (yyyy-MM-dd'T'HH:mm:ss)\n+ *\n+ * @return a formatter for yyyy-MM-dd'T'HH:mm:ss\n+ */\n+ public static DateTimeFormatter dateHourMinuteSecond() {\n+ return Constants.dhms;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full date, two digit hour of day,\n+ * two digit minute of hour, two digit second of minute, and three digit\n+ * fraction of second (yyyy-MM-dd'T'HH:mm:ss.SSS). Parsing will parse up\n+ * to 3 fractional second digits.\n+ *\n+ * @return a formatter for yyyy-MM-dd'T'HH:mm:ss.SSS\n+ */\n+ public static DateTimeFormatter dateHourMinuteSecondMillis() {\n+ return Constants.dhmsl;\n+ }\n+\n+ /**\n+ * Returns a formatter that combines a full date, two digit hour of day,\n+ * two digit minute of hour, two digit second of minute, and three digit\n+ * fraction of second (yyyy-MM-dd'T'HH:mm:ss.SSS). 
Parsing will parse up\n+ * to 9 fractional second digits, throwing away all except the first three.\n+ *\n+ * @return a formatter for yyyy-MM-dd'T'HH:mm:ss.SSS\n+ */\n+ public static DateTimeFormatter dateHourMinuteSecondFraction() {\n+ return Constants.dhmsf;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ static final class Constants {\n+ private static final DateTimeFormatter\n+ ye = yearElement(), // year element (yyyy)\n+ mye = monthElement(), // monthOfYear element (-MM)\n+ dme = dayOfMonthElement(), // dayOfMonth element (-dd)\n+ we = weekyearElement(), // weekyear element (xxxx)\n+ wwe = weekElement(), // weekOfWeekyear element (-ww)\n+ dwe = dayOfWeekElement(), // dayOfWeek element (-ee)\n+ dye = dayOfYearElement(), // dayOfYear element (-DDD)\n+ hde = hourElement(), // hourOfDay element (HH)\n+ mhe = minuteElement(), // minuteOfHour element (:mm)\n+ sme = secondElement(), // secondOfMinute element (:ss)\n+ fse = fractionElement(), // fractionOfSecond element (.SSSSSSSSS)\n+ ze = offsetElement(), // zone offset element\n+ lte = literalTElement(), // literal 'T' element\n+\n+ //y, // year (same as year element)\n+ ym = yearMonth(), // year month\n+ ymd = yearMonthDay(), // year month day\n+\n+ //w, // weekyear (same as weekyear element)\n+ ww = weekyearWeek(), // weekyear week\n+ wwd = weekyearWeekDay(), // weekyear week day\n+\n+ //h, // hour (same as hour element)\n+ hm = hourMinute(), // hour minute\n+ hms = hourMinuteSecond(), // hour minute second\n+ hmsl = hourMinuteSecondMillis(), // hour minute second millis\n+ hmsf = hourMinuteSecondFraction(), // hour minute second fraction\n+\n+ dh = dateHour(), // date hour\n+ dhm = dateHourMinute(), // date hour minute\n+ dhms = dateHourMinuteSecond(), // date hour minute second\n+ dhmsl = dateHourMinuteSecondMillis(), // date hour minute second millis\n+ dhmsf = dateHourMinuteSecondFraction(), // date hour minute second fraction\n+\n+ //d, // date (same as ymd)\n+ t = time(), // time\n+ tx = timeNoMillis(), // time no millis\n+ tt = tTime(), // Ttime\n+ ttx = tTimeNoMillis(), // Ttime no millis\n+ dt = dateTime(), // date time\n+ dtx = dateTimeNoMillis(), // date time no millis\n+\n+ //wd, // week date (same as wwd)\n+ wdt = weekDateTime(), // week date time\n+ wdtx = weekDateTimeNoMillis(), // week date time no millis\n+\n+ od = ordinalDate(), // ordinal date (same as yd)\n+ odt = ordinalDateTime(), // ordinal date time\n+ odtx = ordinalDateTimeNoMillis(), // ordinal date time no millis\n+\n+ bd = basicDate(), // basic date\n+ bt = basicTime(), // basic time\n+ btx = basicTimeNoMillis(), // basic time no millis\n+ btt = basicTTime(), // basic Ttime\n+ bttx = basicTTimeNoMillis(), // basic Ttime no millis\n+ bdt = basicDateTime(), // basic date time\n+ bdtx = basicDateTimeNoMillis(), // basic date time no millis\n+\n+ bod = basicOrdinalDate(), // basic ordinal date\n+ bodt = basicOrdinalDateTime(), // basic ordinal date time\n+ bodtx = basicOrdinalDateTimeNoMillis(), // basic ordinal date time no millis\n+\n+ bwd = basicWeekDate(), // basic week date\n+ bwdt = basicWeekDateTime(), // basic week date time\n+ bwdtx = basicWeekDateTimeNoMillis(), // basic week date time no millis\n+\n+ dpe = dateElementParser(), // date parser element\n+ tpe = timeElementParser(), // time parser element\n+ dp = dateParser(), // date parser\n+ ldp = localDateParser(), // local date parser\n+ tp = timeParser(), // time parser\n+ ltp = localTimeParser(), // local time parser\n+ dtp = dateTimeParser(), // date time 
parser\n+ dotp = dateOptionalTimeParser(), // date optional time parser\n+ ldotp = localDateOptionalTimeParser(); // local date optional time parser\n+\n+ //-----------------------------------------------------------------------\n+ private static DateTimeFormatter dateParser() {\n+ if (dp == null) {\n+ DateTimeParser tOffset = new DateTimeFormatterBuilder()\n+ .appendLiteral('T')\n+ .append(offsetElement()).toParser();\n+ return new DateTimeFormatterBuilder()\n+ .append(dateElementParser())\n+ .appendOptional(tOffset)\n+ .toFormatter();\n+ }\n+ return dp;\n+ }\n+\n+ private static DateTimeFormatter localDateParser() {\n+ if (ldp == null) {\n+ return dateElementParser().withZoneUTC();\n+ }\n+ return ldp;\n+ }\n+\n+ private static DateTimeFormatter dateElementParser() {\n+ if (dpe == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(null, new DateTimeParser[] {\n+ new DateTimeFormatterBuilder()\n+ .append(yearElement())\n+ .appendOptional\n+ (new DateTimeFormatterBuilder()\n+ .append(monthElement())\n+ .appendOptional(dayOfMonthElement().getParser())\n+ .toParser())\n+ .toParser(),\n+ new DateTimeFormatterBuilder()\n+ .append(weekyearElement())\n+ .append(weekElement())\n+ .appendOptional(dayOfWeekElement().getParser())\n+ .toParser(),\n+ new DateTimeFormatterBuilder()\n+ .append(yearElement())\n+ .append(dayOfYearElement())\n+ .toParser()\n+ })\n+ .toFormatter();\n+ }\n+ return dpe;\n+ }\n+\n+ private static DateTimeFormatter timeParser() {\n+ if (tp == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendOptional(literalTElement().getParser())\n+ .append(timeElementParser())\n+ .appendOptional(offsetElement().getParser())\n+ .toFormatter();\n+ }\n+ return tp;\n+ }\n+\n+ private static DateTimeFormatter localTimeParser() {\n+ if (ltp == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendOptional(literalTElement().getParser())\n+ .append(timeElementParser())\n+ .toFormatter().withZoneUTC();\n+ }\n+ return ltp;\n+ }\n+\n+ private static DateTimeFormatter timeElementParser() {\n+ if (tpe == null) {\n+ // Decimal point can be either '.' 
or ','\n+ DateTimeParser decimalPoint = new DateTimeFormatterBuilder()\n+ .append(null, new DateTimeParser[] {\n+ new DateTimeFormatterBuilder()\n+ .appendLiteral('.')\n+ .toParser(),\n+ new DateTimeFormatterBuilder()\n+ .appendLiteral(',')\n+ .toParser()\n+ })\n+ .toParser();\n+\n+ return new DateTimeFormatterBuilder()\n+ // time-element\n+ .append(hourElement())\n+ .append\n+ (null, new DateTimeParser[] {\n+ new DateTimeFormatterBuilder()\n+ // minute-element\n+ .append(minuteElement())\n+ .append\n+ (null, new DateTimeParser[] {\n+ new DateTimeFormatterBuilder()\n+ // second-element\n+ .append(secondElement())\n+ // second fraction\n+ .appendOptional(new DateTimeFormatterBuilder()\n+ .append(decimalPoint)\n+ .appendFractionOfSecond(1, 9)\n+ .toParser())\n+ .toParser(),\n+ // minute fraction\n+ new DateTimeFormatterBuilder()\n+ .append(decimalPoint)\n+ .appendFractionOfMinute(1, 9)\n+ .toParser(),\n+ null\n+ })\n+ .toParser(),\n+ // hour fraction\n+ new DateTimeFormatterBuilder()\n+ .append(decimalPoint)\n+ .appendFractionOfHour(1, 9)\n+ .toParser(),\n+ null\n+ })\n+ .toFormatter();\n+ }\n+ return tpe;\n+ }\n+\n+ private static DateTimeFormatter dateTimeParser() {\n+ if (dtp == null) {\n+ // This is different from the general time parser in that the 'T'\n+ // is required.\n+ DateTimeParser time = new DateTimeFormatterBuilder()\n+ .appendLiteral('T')\n+ .append(timeElementParser())\n+ .appendOptional(offsetElement().getParser())\n+ .toParser();\n+ return new DateTimeFormatterBuilder()\n+ .append(null, new DateTimeParser[] {time, dateOptionalTimeParser().getParser()})\n+ .toFormatter();\n+ }\n+ return dtp;\n+ }\n+\n+ private static DateTimeFormatter dateOptionalTimeParser() {\n+ if (dotp == null) {\n+ DateTimeParser timeOrOffset = new DateTimeFormatterBuilder()\n+ .appendLiteral('T')\n+ .appendOptional(timeElementParser().getParser())\n+ .appendOptional(offsetElement().getParser())\n+ .toParser();\n+ return new DateTimeFormatterBuilder()\n+ .append(dateElementParser())\n+ .appendOptional(timeOrOffset)\n+ .toFormatter();\n+ }\n+ return dotp;\n+ }\n+\n+ private static DateTimeFormatter localDateOptionalTimeParser() {\n+ if (ldotp == null) {\n+ DateTimeParser time = new DateTimeFormatterBuilder()\n+ .appendLiteral('T')\n+ .append(timeElementParser())\n+ .toParser();\n+ return new DateTimeFormatterBuilder()\n+ .append(dateElementParser())\n+ .appendOptional(time)\n+ .toFormatter().withZoneUTC();\n+ }\n+ return ldotp;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ private static DateTimeFormatter time() {\n+ if (t == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(hourMinuteSecondFraction())\n+ .append(offsetElement())\n+ .toFormatter();\n+ }\n+ return t;\n+ }\n+\n+ private static DateTimeFormatter timeNoMillis() {\n+ if (tx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(hourMinuteSecond())\n+ .append(offsetElement())\n+ .toFormatter();\n+ }\n+ return tx;\n+ }\n+\n+ private static DateTimeFormatter tTime() {\n+ if (tt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(literalTElement())\n+ .append(time())\n+ .toFormatter();\n+ }\n+ return tt;\n+ }\n+\n+ private static DateTimeFormatter tTimeNoMillis() {\n+ if (ttx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(literalTElement())\n+ .append(timeNoMillis())\n+ .toFormatter();\n+ }\n+ return ttx;\n+ }\n+\n+ private static DateTimeFormatter dateTime() {\n+ if (dt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(date())\n+ 
.append(tTime())\n+ .toFormatter();\n+ }\n+ return dt;\n+ }\n+\n+ private static DateTimeFormatter dateTimeNoMillis() {\n+ if (dtx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(date())\n+ .append(tTimeNoMillis())\n+ .toFormatter();\n+ }\n+ return dtx;\n+ }\n+\n+ private static DateTimeFormatter ordinalDate() {\n+ if (od == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(yearElement())\n+ .append(dayOfYearElement())\n+ .toFormatter();\n+ }\n+ return od;\n+ }\n+\n+ private static DateTimeFormatter ordinalDateTime() {\n+ if (odt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(ordinalDate())\n+ .append(tTime())\n+ .toFormatter();\n+ }\n+ return odt;\n+ }\n+\n+ private static DateTimeFormatter ordinalDateTimeNoMillis() {\n+ if (odtx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(ordinalDate())\n+ .append(tTimeNoMillis())\n+ .toFormatter();\n+ }\n+ return odtx;\n+ }\n+\n+ private static DateTimeFormatter weekDateTime() {\n+ if (wdt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(weekDate())\n+ .append(tTime())\n+ .toFormatter();\n+ }\n+ return wdt;\n+ }\n+\n+ private static DateTimeFormatter weekDateTimeNoMillis() {\n+ if (wdtx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(weekDate())\n+ .append(tTimeNoMillis())\n+ .toFormatter();\n+ }\n+ return wdtx;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ private static DateTimeFormatter basicDate() {\n+ if (bd == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendYear(4, 4)\n+ .appendFixedDecimal(DateTimeFieldType.monthOfYear(), 2)\n+ .appendFixedDecimal(DateTimeFieldType.dayOfMonth(), 2)\n+ .toFormatter();\n+ }\n+ return bd;\n+ }\n+\n+ private static DateTimeFormatter basicTime() {\n+ if (bt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendFixedDecimal(DateTimeFieldType.hourOfDay(), 2)\n+ .appendFixedDecimal(DateTimeFieldType.minuteOfHour(), 2)\n+ .appendFixedDecimal(DateTimeFieldType.secondOfMinute(), 2)\n+ .appendLiteral('.')\n+ .appendFractionOfSecond(3, 9)\n+ .appendTimeZoneOffset(\"Z\", false, 2, 2)\n+ .toFormatter();\n+ }\n+ return bt;\n+ }\n+\n+ private static DateTimeFormatter basicTimeNoMillis() {\n+ if (btx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendFixedDecimal(DateTimeFieldType.hourOfDay(), 2)\n+ .appendFixedDecimal(DateTimeFieldType.minuteOfHour(), 2)\n+ .appendFixedDecimal(DateTimeFieldType.secondOfMinute(), 2)\n+ .appendTimeZoneOffset(\"Z\", false, 2, 2)\n+ .toFormatter();\n+ }\n+ return btx;\n+ }\n+\n+ private static DateTimeFormatter basicTTime() {\n+ if (btt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(literalTElement())\n+ .append(basicTime())\n+ .toFormatter();\n+ }\n+ return btt;\n+ }\n+\n+ private static DateTimeFormatter basicTTimeNoMillis() {\n+ if (bttx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(literalTElement())\n+ .append(basicTimeNoMillis())\n+ .toFormatter();\n+ }\n+ return bttx;\n+ }\n+\n+ private static DateTimeFormatter basicDateTime() {\n+ if (bdt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(basicDate())\n+ .append(basicTTime())\n+ .toFormatter();\n+ }\n+ return bdt;\n+ }\n+\n+ private static DateTimeFormatter basicDateTimeNoMillis() {\n+ if (bdtx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(basicDate())\n+ .append(basicTTimeNoMillis())\n+ .toFormatter();\n+ }\n+ return bdtx;\n+ }\n+\n+ private static DateTimeFormatter basicOrdinalDate() {\n+ if (bod == null) {\n+ return new 
DateTimeFormatterBuilder()\n+ .appendYear(4, 4)\n+ .appendFixedDecimal(DateTimeFieldType.dayOfYear(), 3)\n+ .toFormatter();\n+ }\n+ return bod;\n+ }\n+\n+ private static DateTimeFormatter basicOrdinalDateTime() {\n+ if (bodt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(basicOrdinalDate())\n+ .append(basicTTime())\n+ .toFormatter();\n+ }\n+ return bodt;\n+ }\n+\n+ private static DateTimeFormatter basicOrdinalDateTimeNoMillis() {\n+ if (bodtx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(basicOrdinalDate())\n+ .append(basicTTimeNoMillis())\n+ .toFormatter();\n+ }\n+ return bodtx;\n+ }\n+\n+ private static DateTimeFormatter basicWeekDate() {\n+ if (bwd == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendFixedSignedDecimal(DateTimeFieldType.weekyear(), 4) // ES change, was .appendWeekyear(4, 4)\n+ .appendLiteral('W')\n+ .appendFixedDecimal(DateTimeFieldType.weekOfWeekyear(), 2)\n+ .appendFixedDecimal(DateTimeFieldType.dayOfWeek(), 1)\n+ .toFormatter();\n+ }\n+ return bwd;\n+ }\n+\n+ private static DateTimeFormatter basicWeekDateTime() {\n+ if (bwdt == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(basicWeekDate())\n+ .append(basicTTime())\n+ .toFormatter();\n+ }\n+ return bwdt;\n+ }\n+\n+ private static DateTimeFormatter basicWeekDateTimeNoMillis() {\n+ if (bwdtx == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(basicWeekDate())\n+ .append(basicTTimeNoMillis())\n+ .toFormatter();\n+ }\n+ return bwdtx;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ private static DateTimeFormatter yearMonth() {\n+ if (ym == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(yearElement())\n+ .append(monthElement())\n+ .toFormatter();\n+ }\n+ return ym;\n+ }\n+\n+ private static DateTimeFormatter yearMonthDay() {\n+ if (ymd == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(yearElement())\n+ .append(monthElement())\n+ .append(dayOfMonthElement())\n+ .toFormatter();\n+ }\n+ return ymd;\n+ }\n+\n+ private static DateTimeFormatter weekyearWeek() {\n+ if (ww == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(weekyearElement())\n+ .append(weekElement())\n+ .toFormatter();\n+ }\n+ return ww;\n+ }\n+\n+ private static DateTimeFormatter weekyearWeekDay() {\n+ if (wwd == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(weekyearElement())\n+ .append(weekElement())\n+ .append(dayOfWeekElement())\n+ .toFormatter();\n+ }\n+ return wwd;\n+ }\n+\n+ private static DateTimeFormatter hourMinute() {\n+ if (hm == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(hourElement())\n+ .append(minuteElement())\n+ .toFormatter();\n+ }\n+ return hm;\n+ }\n+\n+ private static DateTimeFormatter hourMinuteSecond() {\n+ if (hms == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(hourElement())\n+ .append(minuteElement())\n+ .append(secondElement())\n+ .toFormatter();\n+ }\n+ return hms;\n+ }\n+\n+ private static DateTimeFormatter hourMinuteSecondMillis() {\n+ if (hmsl == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(hourElement())\n+ .append(minuteElement())\n+ .append(secondElement())\n+ .appendLiteral('.')\n+ .appendFractionOfSecond(3, 3)\n+ .toFormatter();\n+ }\n+ return hmsl;\n+ }\n+\n+ private static DateTimeFormatter hourMinuteSecondFraction() {\n+ if (hmsf == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(hourElement())\n+ .append(minuteElement())\n+ .append(secondElement())\n+ .append(fractionElement())\n+ .toFormatter();\n+ }\n+ 
return hmsf;\n+ }\n+\n+ private static DateTimeFormatter dateHour() {\n+ if (dh == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(date())\n+ .append(literalTElement())\n+ .append(hour())\n+ .toFormatter();\n+ }\n+ return dh;\n+ }\n+\n+ private static DateTimeFormatter dateHourMinute() {\n+ if (dhm == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(date())\n+ .append(literalTElement())\n+ .append(hourMinute())\n+ .toFormatter();\n+ }\n+ return dhm;\n+ }\n+\n+ private static DateTimeFormatter dateHourMinuteSecond() {\n+ if (dhms == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(date())\n+ .append(literalTElement())\n+ .append(hourMinuteSecond())\n+ .toFormatter();\n+ }\n+ return dhms;\n+ }\n+\n+ private static DateTimeFormatter dateHourMinuteSecondMillis() {\n+ if (dhmsl == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(date())\n+ .append(literalTElement())\n+ .append(hourMinuteSecondMillis())\n+ .toFormatter();\n+ }\n+ return dhmsl;\n+ }\n+\n+ private static DateTimeFormatter dateHourMinuteSecondFraction() {\n+ if (dhmsf == null) {\n+ return new DateTimeFormatterBuilder()\n+ .append(date())\n+ .append(literalTElement())\n+ .append(hourMinuteSecondFraction())\n+ .toFormatter();\n+ }\n+ return dhmsf;\n+ }\n+\n+ //-----------------------------------------------------------------------\n+ private static DateTimeFormatter yearElement() {\n+ if (ye == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendFixedSignedDecimal(DateTimeFieldType.year(), 4) // ES change, was .appendYear(4, 9)\n+ .toFormatter();\n+ }\n+ return ye;\n+ }\n+\n+ private static DateTimeFormatter monthElement() {\n+ if (mye == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral('-')\n+ .appendFixedSignedDecimal(DateTimeFieldType.monthOfYear(), 2) // ES change, was .appendMonthOfYear(2)\n+ .toFormatter();\n+ }\n+ return mye;\n+ }\n+\n+ private static DateTimeFormatter dayOfMonthElement() {\n+ if (dme == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral('-')\n+ .appendFixedSignedDecimal(DateTimeFieldType.dayOfMonth(), 2) // ES change, was .appendDayOfMonth(2)\n+ .toFormatter();\n+ }\n+ return dme;\n+ }\n+\n+ private static DateTimeFormatter weekyearElement() {\n+ if (we == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendFixedSignedDecimal(DateTimeFieldType.weekyear(), 4) // ES change, was .appendWeekyear(4, 9)\n+ .toFormatter();\n+ }\n+ return we;\n+ }\n+\n+ private static DateTimeFormatter weekElement() {\n+ if (wwe == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral(\"-W\")\n+ .appendFixedSignedDecimal(DateTimeFieldType.weekOfWeekyear(), 2) // ES change, was .appendWeekOfWeekyear(2)\n+ .toFormatter();\n+ }\n+ return wwe;\n+ }\n+\n+ private static DateTimeFormatter dayOfWeekElement() {\n+ if (dwe == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral('-')\n+ .appendDayOfWeek(1)\n+ .toFormatter();\n+ }\n+ return dwe;\n+ }\n+\n+ private static DateTimeFormatter dayOfYearElement() {\n+ if (dye == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral('-')\n+ .appendFixedSignedDecimal(DateTimeFieldType.dayOfYear(), 3) // ES change, was .appendDayOfYear(3)\n+ .toFormatter();\n+ }\n+ return dye;\n+ }\n+\n+ private static DateTimeFormatter literalTElement() {\n+ if (lte == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral('T')\n+ .toFormatter();\n+ }\n+ return lte;\n+ }\n+\n+ private static DateTimeFormatter hourElement() {\n+ if (hde == null) {\n+ return new 
DateTimeFormatterBuilder()\n+ .appendFixedSignedDecimal(DateTimeFieldType.hourOfDay(), 2) // ES change, was .appendHourOfDay(2)\n+ .toFormatter();\n+ }\n+ return hde;\n+ }\n+\n+ private static DateTimeFormatter minuteElement() {\n+ if (mhe == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral(':')\n+ .appendFixedSignedDecimal(DateTimeFieldType.minuteOfHour(), 2) // ES change, was .appendMinuteOfHour(2)\n+ .toFormatter();\n+ }\n+ return mhe;\n+ }\n+\n+ private static DateTimeFormatter secondElement() {\n+ if (sme == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral(':')\n+ .appendFixedSignedDecimal(DateTimeFieldType.secondOfMinute(), 2) // ES change, was .appendSecondOfMinute(2)\n+ .toFormatter();\n+ }\n+ return sme;\n+ }\n+\n+ private static DateTimeFormatter fractionElement() {\n+ if (fse == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendLiteral('.')\n+ // Support parsing up to nanosecond precision even though\n+ // those extra digits will be dropped.\n+ .appendFractionOfSecond(3, 9)\n+ .toFormatter();\n+ }\n+ return fse;\n+ }\n+\n+ private static DateTimeFormatter offsetElement() {\n+ if (ze == null) {\n+ return new DateTimeFormatterBuilder()\n+ .appendTimeZoneOffset(\"Z\", true, 2, 4)\n+ .toFormatter();\n+ }\n+ return ze;\n+ }\n+\n+ }\n+\n+}",
"filename": "core/src/main/java/org/joda/time/format/StrictISODateTimeFormat.java",
"status": "added"
},
{
"diff": "@@ -22,8 +22,11 @@\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.mapper.core.DateFieldMapper;\n+import org.elasticsearch.index.mapper.object.RootObjectMapper;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.joda.time.DateTime;\n+import org.joda.time.DateTimeFieldType;\n import org.joda.time.DateTimeZone;\n import org.joda.time.LocalDateTime;\n import org.joda.time.MutableDateTime;\n@@ -361,6 +364,368 @@ public void testThatEpochParserIsIdempotent() {\n assertThat(secondsDateTime.getMillis(), is(1234567890000l));\n }\n \n+ public void testThatDefaultFormatterChecksForCorrectYearLength() throws Exception {\n+ // if no strict version is tested, this means the date format is already strict by itself\n+ // yyyyMMdd\n+ assertValidDateFormatParsing(\"basicDate\", \"20140303\");\n+ assertDateFormatParsingThrowingException(\"basicDate\", \"2010303\");\n+\n+ // yyyyMMdd’T'HHmmss.SSSZ\n+ assertValidDateFormatParsing(\"basicDateTime\", \"20140303T124343.123Z\");\n+ assertValidDateFormatParsing(\"basicDateTime\", \"00050303T124343.123Z\");\n+ assertDateFormatParsingThrowingException(\"basicDateTime\", \"50303T124343.123Z\");\n+\n+ // yyyyMMdd’T'HHmmssZ\n+ assertValidDateFormatParsing(\"basicDateTimeNoMillis\", \"20140303T124343Z\");\n+ assertValidDateFormatParsing(\"basicDateTimeNoMillis\", \"00050303T124343Z\");\n+ assertDateFormatParsingThrowingException(\"basicDateTimeNoMillis\", \"50303T124343Z\");\n+\n+ // yyyyDDD\n+ assertValidDateFormatParsing(\"basicOrdinalDate\", \"0005165\");\n+ assertDateFormatParsingThrowingException(\"basicOrdinalDate\", \"5165\");\n+\n+ // yyyyDDD’T'HHmmss.SSSZ\n+ assertValidDateFormatParsing(\"basicOrdinalDateTime\", \"0005165T124343.123Z\");\n+ assertValidDateFormatParsing(\"basicOrdinalDateTime\", \"0005165T124343.123Z\");\n+ assertDateFormatParsingThrowingException(\"basicOrdinalDateTime\", \"5165T124343.123Z\");\n+\n+ // yyyyDDD’T'HHmmssZ\n+ assertValidDateFormatParsing(\"basicOrdinalDateTimeNoMillis\", \"0005165T124343Z\");\n+ assertValidDateFormatParsing(\"basicOrdinalDateTimeNoMillis\", \"0005165T124343Z\");\n+ assertDateFormatParsingThrowingException(\"basicOrdinalDateTimeNoMillis\", \"5165T124343Z\");\n+\n+ // HHmmss.SSSZ\n+ assertValidDateFormatParsing(\"basicTime\", \"090909.123Z\");\n+ assertDateFormatParsingThrowingException(\"basicTime\", \"90909.123Z\");\n+\n+ // HHmmssZ\n+ assertValidDateFormatParsing(\"basicTimeNoMillis\", \"090909Z\");\n+ assertDateFormatParsingThrowingException(\"basicTimeNoMillis\", \"90909Z\");\n+\n+ // 'T’HHmmss.SSSZ\n+ assertValidDateFormatParsing(\"basicTTime\", \"T090909.123Z\");\n+ assertDateFormatParsingThrowingException(\"basicTTime\", \"T90909.123Z\");\n+\n+ // T’HHmmssZ\n+ assertValidDateFormatParsing(\"basicTTimeNoMillis\", \"T090909Z\");\n+ assertDateFormatParsingThrowingException(\"basicTTimeNoMillis\", \"T90909Z\");\n+\n+ // xxxx’W'wwe\n+ assertValidDateFormatParsing(\"basicWeekDate\", \"0005W414\");\n+ assertValidDateFormatParsing(\"basicWeekDate\", \"5W414\", \"0005W414\");\n+ assertDateFormatParsingThrowingException(\"basicWeekDate\", \"5W14\");\n+\n+ assertValidDateFormatParsing(\"strictBasicWeekDate\", \"0005W414\");\n+ assertDateFormatParsingThrowingException(\"strictBasicWeekDate\", \"0005W47\");\n+ assertDateFormatParsingThrowingException(\"strictBasicWeekDate\", \"5W414\");\n+ 
assertDateFormatParsingThrowingException(\"strictBasicWeekDate\", \"5W14\");\n+\n+ // xxxx’W'wwe’T'HHmmss.SSSZ\n+ assertValidDateFormatParsing(\"basicWeekDateTime\", \"0005W414T124343.123Z\");\n+ assertValidDateFormatParsing(\"basicWeekDateTime\", \"5W414T124343.123Z\", \"0005W414T124343.123Z\");\n+ assertDateFormatParsingThrowingException(\"basicWeekDateTime\", \"5W14T124343.123Z\");\n+\n+ assertValidDateFormatParsing(\"strictBasicWeekDateTime\", \"0005W414T124343.123Z\");\n+ assertDateFormatParsingThrowingException(\"strictBasicWeekDateTime\", \"0005W47T124343.123Z\");\n+ assertDateFormatParsingThrowingException(\"strictBasicWeekDateTime\", \"5W414T124343.123Z\");\n+ assertDateFormatParsingThrowingException(\"strictBasicWeekDateTime\", \"5W14T124343.123Z\");\n+\n+ // xxxx’W'wwe’T'HHmmssZ\n+ assertValidDateFormatParsing(\"basicWeekDateTimeNoMillis\", \"0005W414T124343Z\");\n+ assertValidDateFormatParsing(\"basicWeekDateTimeNoMillis\", \"5W414T124343Z\", \"0005W414T124343Z\");\n+ assertDateFormatParsingThrowingException(\"basicWeekDateTimeNoMillis\", \"5W14T124343Z\");\n+\n+ assertValidDateFormatParsing(\"strictBasicWeekDateTimeNoMillis\", \"0005W414T124343Z\");\n+ assertDateFormatParsingThrowingException(\"strictBasicWeekDateTimeNoMillis\", \"0005W47T124343Z\");\n+ assertDateFormatParsingThrowingException(\"strictBasicWeekDateTimeNoMillis\", \"5W414T124343Z\");\n+ assertDateFormatParsingThrowingException(\"strictBasicWeekDateTimeNoMillis\", \"5W14T124343Z\");\n+\n+ // yyyy-MM-dd\n+ assertValidDateFormatParsing(\"date\", \"0005-06-03\");\n+ assertValidDateFormatParsing(\"date\", \"5-6-3\", \"0005-06-03\");\n+\n+ assertValidDateFormatParsing(\"strictDate\", \"0005-06-03\");\n+ assertDateFormatParsingThrowingException(\"strictDate\", \"5-6-3\");\n+ assertDateFormatParsingThrowingException(\"strictDate\", \"0005-06-3\");\n+ assertDateFormatParsingThrowingException(\"strictDate\", \"0005-6-03\");\n+ assertDateFormatParsingThrowingException(\"strictDate\", \"5-06-03\");\n+\n+ // yyyy-MM-dd'T'HH\n+ assertValidDateFormatParsing(\"dateHour\", \"0005-06-03T12\");\n+ assertValidDateFormatParsing(\"dateHour\", \"5-6-3T1\", \"0005-06-03T01\");\n+\n+ assertValidDateFormatParsing(\"strictDateHour\", \"0005-06-03T12\");\n+ assertDateFormatParsingThrowingException(\"strictDateHour\", \"5-6-3T1\");\n+\n+ // yyyy-MM-dd'T'HH:mm\n+ assertValidDateFormatParsing(\"dateHourMinute\", \"0005-06-03T12:12\");\n+ assertValidDateFormatParsing(\"dateHourMinute\", \"5-6-3T12:1\", \"0005-06-03T12:01\");\n+\n+ assertValidDateFormatParsing(\"strictDateHourMinute\", \"0005-06-03T12:12\");\n+ assertDateFormatParsingThrowingException(\"strictDateHourMinute\", \"5-6-3T12:1\");\n+\n+ // yyyy-MM-dd'T'HH:mm:ss\n+ assertValidDateFormatParsing(\"dateHourMinuteSecond\", \"0005-06-03T12:12:12\");\n+ assertValidDateFormatParsing(\"dateHourMinuteSecond\", \"5-6-3T12:12:1\", \"0005-06-03T12:12:01\");\n+\n+ assertValidDateFormatParsing(\"strictDateHourMinuteSecond\", \"0005-06-03T12:12:12\");\n+ assertDateFormatParsingThrowingException(\"strictDateHourMinuteSecond\", \"5-6-3T12:12:1\");\n+\n+ // yyyy-MM-dd’T'HH:mm:ss.SSS\n+ assertValidDateFormatParsing(\"dateHourMinuteSecondFraction\", \"0005-06-03T12:12:12.123\");\n+ assertValidDateFormatParsing(\"dateHourMinuteSecondFraction\", \"5-6-3T12:12:1.123\", \"0005-06-03T12:12:01.123\");\n+ assertValidDateFormatParsing(\"dateHourMinuteSecondFraction\", \"5-6-3T12:12:1.1\", \"0005-06-03T12:12:01.100\");\n+\n+ assertValidDateFormatParsing(\"strictDateHourMinuteSecondFraction\", 
\"0005-06-03T12:12:12.123\");\n+ assertDateFormatParsingThrowingException(\"strictDateHourMinuteSecondFraction\", \"5-6-3T12:12:12.1\");\n+ assertDateFormatParsingThrowingException(\"strictDateHourMinuteSecondFraction\", \"5-6-3T12:12:12.12\");\n+\n+ assertValidDateFormatParsing(\"dateHourMinuteSecondMillis\", \"0005-06-03T12:12:12.123\");\n+ assertValidDateFormatParsing(\"dateHourMinuteSecondMillis\", \"5-6-3T12:12:1.123\", \"0005-06-03T12:12:01.123\");\n+ assertValidDateFormatParsing(\"dateHourMinuteSecondMillis\", \"5-6-3T12:12:1.1\", \"0005-06-03T12:12:01.100\");\n+\n+ assertValidDateFormatParsing(\"strictDateHourMinuteSecondMillis\", \"0005-06-03T12:12:12.123\");\n+ assertDateFormatParsingThrowingException(\"strictDateHourMinuteSecondMillis\", \"5-6-3T12:12:12.1\");\n+ assertDateFormatParsingThrowingException(\"strictDateHourMinuteSecondMillis\", \"5-6-3T12:12:12.12\");\n+\n+ // yyyy-MM-dd'T'HH:mm:ss.SSSZ\n+ assertValidDateFormatParsing(\"dateOptionalTime\", \"2014-03-03\", \"2014-03-03T00:00:00.000Z\");\n+ assertValidDateFormatParsing(\"dateOptionalTime\", \"1257-3-03\", \"1257-03-03T00:00:00.000Z\");\n+ assertValidDateFormatParsing(\"dateOptionalTime\", \"0005-03-3\", \"0005-03-03T00:00:00.000Z\");\n+ assertValidDateFormatParsing(\"dateOptionalTime\", \"5-03-03\", \"0005-03-03T00:00:00.000Z\");\n+ assertValidDateFormatParsing(\"dateOptionalTime\", \"5-03-03T1:1:1.1\", \"0005-03-03T01:01:01.100Z\");\n+ assertValidDateFormatParsing(\"strictDateOptionalTime\", \"2014-03-03\", \"2014-03-03T00:00:00.000Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateOptionalTime\", \"5-03-03\");\n+ assertDateFormatParsingThrowingException(\"strictDateOptionalTime\", \"0005-3-03\");\n+ assertDateFormatParsingThrowingException(\"strictDateOptionalTime\", \"0005-03-3\");\n+ assertDateFormatParsingThrowingException(\"strictDateOptionalTime\", \"5-03-03T1:1:1.1\");\n+ assertDateFormatParsingThrowingException(\"strictDateOptionalTime\", \"5-03-03T01:01:01.1\");\n+ assertDateFormatParsingThrowingException(\"strictDateOptionalTime\", \"5-03-03T01:01:1.100\");\n+ assertDateFormatParsingThrowingException(\"strictDateOptionalTime\", \"5-03-03T01:1:01.100\");\n+ assertDateFormatParsingThrowingException(\"strictDateOptionalTime\", \"5-03-03T1:01:01.100\");\n+\n+ // yyyy-MM-dd’T'HH:mm:ss.SSSZZ\n+ assertValidDateFormatParsing(\"dateTime\", \"5-03-03T1:1:1.1Z\", \"0005-03-03T01:01:01.100Z\");\n+ assertValidDateFormatParsing(\"strictDateTime\", \"2014-03-03T11:11:11.100Z\", \"2014-03-03T11:11:11.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateTime\", \"0005-03-03T1:1:1.1Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateTime\", \"0005-03-03T01:01:1.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateTime\", \"0005-03-03T01:1:01.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateTime\", \"0005-03-03T1:01:01.100Z\");\n+\n+ // yyyy-MM-dd’T'HH:mm:ssZZ\n+ assertValidDateFormatParsing(\"dateTimeNoMillis\", \"5-03-03T1:1:1Z\", \"0005-03-03T01:01:01Z\");\n+ assertValidDateFormatParsing(\"strictDateTimeNoMillis\", \"2014-03-03T11:11:11Z\", \"2014-03-03T11:11:11Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateTimeNoMillis\", \"0005-03-03T1:1:1Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateTimeNoMillis\", \"0005-03-03T01:01:1Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateTimeNoMillis\", \"0005-03-03T01:1:01Z\");\n+ assertDateFormatParsingThrowingException(\"strictDateTimeNoMillis\", \"0005-03-03T1:01:01Z\");\n+\n+ // 
HH\n+ assertValidDateFormatParsing(\"hour\", \"12\");\n+ assertValidDateFormatParsing(\"hour\", \"1\", \"01\");\n+ assertValidDateFormatParsing(\"strictHour\", \"12\");\n+ assertValidDateFormatParsing(\"strictHour\", \"01\");\n+ assertDateFormatParsingThrowingException(\"strictHour\", \"1\");\n+\n+ // HH:mm\n+ assertValidDateFormatParsing(\"hourMinute\", \"12:12\");\n+ assertValidDateFormatParsing(\"hourMinute\", \"12:1\", \"12:01\");\n+ assertValidDateFormatParsing(\"strictHourMinute\", \"12:12\");\n+ assertValidDateFormatParsing(\"strictHourMinute\", \"12:01\");\n+ assertDateFormatParsingThrowingException(\"strictHourMinute\", \"12:1\");\n+\n+ // HH:mm:ss\n+ assertValidDateFormatParsing(\"hourMinuteSecond\", \"12:12:12\");\n+ assertValidDateFormatParsing(\"hourMinuteSecond\", \"12:12:1\", \"12:12:01\");\n+ assertValidDateFormatParsing(\"strictHourMinuteSecond\", \"12:12:12\");\n+ assertValidDateFormatParsing(\"strictHourMinuteSecond\", \"12:12:01\");\n+ assertDateFormatParsingThrowingException(\"strictHourMinuteSecond\", \"12:12:1\");\n+\n+ // HH:mm:ss.SSS\n+ assertValidDateFormatParsing(\"hourMinuteSecondFraction\", \"12:12:12.123\");\n+ assertValidDateFormatParsing(\"hourMinuteSecondFraction\", \"12:12:12.1\", \"12:12:12.100\");\n+ assertValidDateFormatParsing(\"strictHourMinuteSecondFraction\", \"12:12:12.123\");\n+ assertValidDateFormatParsing(\"strictHourMinuteSecondFraction\", \"12:12:12.1\", \"12:12:12.100\");\n+\n+ assertValidDateFormatParsing(\"hourMinuteSecondMillis\", \"12:12:12.123\");\n+ assertValidDateFormatParsing(\"hourMinuteSecondMillis\", \"12:12:12.1\", \"12:12:12.100\");\n+ assertValidDateFormatParsing(\"strictHourMinuteSecondMillis\", \"12:12:12.123\");\n+ assertValidDateFormatParsing(\"strictHourMinuteSecondMillis\", \"12:12:12.1\", \"12:12:12.100\");\n+\n+ // yyyy-DDD\n+ assertValidDateFormatParsing(\"ordinalDate\", \"5-3\", \"0005-003\");\n+ assertValidDateFormatParsing(\"strictOrdinalDate\", \"0005-003\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDate\", \"5-3\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDate\", \"0005-3\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDate\", \"5-003\");\n+\n+ // yyyy-DDD’T'HH:mm:ss.SSSZZ\n+ assertValidDateFormatParsing(\"ordinalDateTime\", \"5-3T12:12:12.100Z\", \"0005-003T12:12:12.100Z\");\n+ assertValidDateFormatParsing(\"strictOrdinalDateTime\", \"0005-003T12:12:12.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDateTime\", \"5-3T1:12:12.123Z\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDateTime\", \"5-3T12:1:12.123Z\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDateTime\", \"5-3T12:12:1.123Z\");\n+\n+ // yyyy-DDD’T'HH:mm:ssZZ\n+ assertValidDateFormatParsing(\"ordinalDateTimeNoMillis\", \"5-3T12:12:12Z\", \"0005-003T12:12:12Z\");\n+ assertValidDateFormatParsing(\"strictOrdinalDateTimeNoMillis\", \"0005-003T12:12:12Z\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDateTimeNoMillis\", \"5-3T1:12:12Z\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDateTimeNoMillis\", \"5-3T12:1:12Z\");\n+ assertDateFormatParsingThrowingException(\"strictOrdinalDateTimeNoMillis\", \"5-3T12:12:1Z\");\n+\n+\n+ // HH:mm:ss.SSSZZ\n+ assertValidDateFormatParsing(\"time\", \"12:12:12.100Z\");\n+ assertValidDateFormatParsing(\"time\", \"01:01:01.1Z\", \"01:01:01.100Z\");\n+ assertValidDateFormatParsing(\"time\", \"1:1:1.1Z\", \"01:01:01.100Z\");\n+ assertValidDateFormatParsing(\"strictTime\", \"12:12:12.100Z\");\n+ 
assertDateFormatParsingThrowingException(\"strictTime\", \"12:12:1.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictTime\", \"12:1:12.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictTime\", \"1:12:12.100Z\");\n+\n+ // HH:mm:ssZZ\n+ assertValidDateFormatParsing(\"timeNoMillis\", \"12:12:12Z\");\n+ assertValidDateFormatParsing(\"timeNoMillis\", \"01:01:01Z\", \"01:01:01Z\");\n+ assertValidDateFormatParsing(\"timeNoMillis\", \"1:1:1Z\", \"01:01:01Z\");\n+ assertValidDateFormatParsing(\"strictTimeNoMillis\", \"12:12:12Z\");\n+ assertDateFormatParsingThrowingException(\"strictTimeNoMillis\", \"12:12:1Z\");\n+ assertDateFormatParsingThrowingException(\"strictTimeNoMillis\", \"12:1:12Z\");\n+ assertDateFormatParsingThrowingException(\"strictTimeNoMillis\", \"1:12:12Z\");\n+\n+ // 'T’HH:mm:ss.SSSZZ\n+ assertValidDateFormatParsing(\"tTime\", \"T12:12:12.100Z\");\n+ assertValidDateFormatParsing(\"tTime\", \"T01:01:01.1Z\", \"T01:01:01.100Z\");\n+ assertValidDateFormatParsing(\"tTime\", \"T1:1:1.1Z\", \"T01:01:01.100Z\");\n+ assertValidDateFormatParsing(\"strictTTime\", \"T12:12:12.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictTTime\", \"T12:12:1.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictTTime\", \"T12:1:12.100Z\");\n+ assertDateFormatParsingThrowingException(\"strictTTime\", \"T1:12:12.100Z\");\n+\n+ // 'T’HH:mm:ssZZ\n+ assertValidDateFormatParsing(\"tTimeNoMillis\", \"T12:12:12Z\");\n+ assertValidDateFormatParsing(\"tTimeNoMillis\", \"T01:01:01Z\", \"T01:01:01Z\");\n+ assertValidDateFormatParsing(\"tTimeNoMillis\", \"T1:1:1Z\", \"T01:01:01Z\");\n+ assertValidDateFormatParsing(\"strictTTimeNoMillis\", \"T12:12:12Z\");\n+ assertDateFormatParsingThrowingException(\"strictTTimeNoMillis\", \"T12:12:1Z\");\n+ assertDateFormatParsingThrowingException(\"strictTTimeNoMillis\", \"T12:1:12Z\");\n+ assertDateFormatParsingThrowingException(\"strictTTimeNoMillis\", \"T1:12:12Z\");\n+\n+ // xxxx-'W’ww-e\n+ assertValidDateFormatParsing(\"weekDate\", \"0005-W4-1\", \"0005-W04-1\");\n+ assertValidDateFormatParsing(\"strictWeekDate\", \"0005-W04-1\");\n+ assertDateFormatParsingThrowingException(\"strictWeekDate\", \"0005-W4-1\");\n+\n+ // xxxx-'W’ww-e’T'HH:mm:ss.SSSZZ\n+ assertValidDateFormatParsing(\"weekDateTime\", \"0005-W41-4T12:43:43.123Z\");\n+ assertValidDateFormatParsing(\"weekDateTime\", \"5-W41-4T12:43:43.123Z\", \"0005-W41-4T12:43:43.123Z\");\n+ assertValidDateFormatParsing(\"strictWeekDateTime\", \"0005-W41-4T12:43:43.123Z\");\n+ assertValidDateFormatParsing(\"strictWeekDateTime\", \"0005-W06-4T12:43:43.123Z\");\n+ assertDateFormatParsingThrowingException(\"strictWeekDateTime\", \"0005-W4-7T12:43:43.123Z\");\n+ assertDateFormatParsingThrowingException(\"strictWeekDateTime\", \"5-W41-4T12:43:43.123Z\");\n+ assertDateFormatParsingThrowingException(\"strictWeekDateTime\", \"5-W1-4T12:43:43.123Z\");\n+\n+ // xxxx-'W’ww-e’T'HH:mm:ssZZ\n+ assertValidDateFormatParsing(\"weekDateTimeNoMillis\", \"0005-W41-4T12:43:43Z\");\n+ assertValidDateFormatParsing(\"weekDateTimeNoMillis\", \"5-W41-4T12:43:43Z\", \"0005-W41-4T12:43:43Z\");\n+ assertValidDateFormatParsing(\"strictWeekDateTimeNoMillis\", \"0005-W41-4T12:43:43Z\");\n+ assertValidDateFormatParsing(\"strictWeekDateTimeNoMillis\", \"0005-W06-4T12:43:43Z\");\n+ assertDateFormatParsingThrowingException(\"strictWeekDateTimeNoMillis\", \"0005-W4-7T12:43:43Z\");\n+ assertDateFormatParsingThrowingException(\"strictWeekDateTimeNoMillis\", \"5-W41-4T12:43:43Z\");\n+ 
assertDateFormatParsingThrowingException(\"strictWeekDateTimeNoMillis\", \"5-W1-4T12:43:43Z\");\n+\n+ // yyyy\n+ assertValidDateFormatParsing(\"weekyear\", \"2014\");\n+ assertValidDateFormatParsing(\"weekyear\", \"5\", \"0005\");\n+ assertValidDateFormatParsing(\"weekyear\", \"0005\");\n+ assertValidDateFormatParsing(\"strictWeekyear\", \"2014\");\n+ assertValidDateFormatParsing(\"strictWeekyear\", \"0005\");\n+ assertDateFormatParsingThrowingException(\"strictWeekyear\", \"5\");\n+\n+ // yyyy-'W'ee\n+ assertValidDateFormatParsing(\"weekyearWeek\", \"2014-W41\");\n+ assertValidDateFormatParsing(\"weekyearWeek\", \"2014-W1\", \"2014-W01\");\n+ assertValidDateFormatParsing(\"strictWeekyearWeek\", \"2014-W41\");\n+ assertDateFormatParsingThrowingException(\"strictWeekyearWeek\", \"2014-W1\");\n+\n+ // weekyearWeekDay\n+ assertValidDateFormatParsing(\"weekyearWeekDay\", \"2014-W41-1\");\n+ assertValidDateFormatParsing(\"weekyearWeekDay\", \"2014-W1-1\", \"2014-W01-1\");\n+ assertValidDateFormatParsing(\"strictWeekyearWeekDay\", \"2014-W41-1\");\n+ assertDateFormatParsingThrowingException(\"strictWeekyearWeekDay\", \"2014-W1-1\");\n+\n+ // yyyy\n+ assertValidDateFormatParsing(\"year\", \"2014\");\n+ assertValidDateFormatParsing(\"year\", \"5\", \"0005\");\n+ assertValidDateFormatParsing(\"strictYear\", \"2014\");\n+ assertDateFormatParsingThrowingException(\"strictYear\", \"5\");\n+\n+ // yyyy-mm\n+ assertValidDateFormatParsing(\"yearMonth\", \"2014-12\");\n+ assertValidDateFormatParsing(\"yearMonth\", \"2014-5\", \"2014-05\");\n+ assertValidDateFormatParsing(\"strictYearMonth\", \"2014-12\");\n+ assertDateFormatParsingThrowingException(\"strictYearMonth\", \"2014-5\");\n+\n+ // yyyy-mm-dd\n+ assertValidDateFormatParsing(\"yearMonthDay\", \"2014-12-12\");\n+ assertValidDateFormatParsing(\"yearMonthDay\", \"2014-05-5\", \"2014-05-05\");\n+ assertValidDateFormatParsing(\"strictYearMonthDay\", \"2014-12-12\");\n+ assertDateFormatParsingThrowingException(\"strictYearMonthDay\", \"2014-05-5\");\n+ }\n+\n+ @Test\n+ public void testThatRootObjectParsingIsStrict() throws Exception {\n+ String[] datesThatWork = new String[] { \"2014/10/10\", \"2014/10/10 12:12:12\", \"2014-05-05\", \"2014-05-05T12:12:12.123Z\" };\n+ String[] datesThatShouldNotWork = new String[]{ \"5-05-05\", \"2014-5-05\", \"2014-05-5\",\n+ \"2014-05-05T1:12:12.123Z\", \"2014-05-05T12:1:12.123Z\", \"2014-05-05T12:12:1.123Z\",\n+ \"4/10/10\", \"2014/1/10\", \"2014/10/1\",\n+ \"2014/10/10 1:12:12\", \"2014/10/10 12:1:12\", \"2014/10/10 12:12:1\"\n+ };\n+\n+ // good case\n+ for (String date : datesThatWork) {\n+ boolean dateParsingSuccessful = false;\n+ for (FormatDateTimeFormatter dateTimeFormatter : RootObjectMapper.Defaults.DYNAMIC_DATE_TIME_FORMATTERS) {\n+ try {\n+ dateTimeFormatter.parser().parseMillis(date);\n+ dateParsingSuccessful = true;\n+ break;\n+ } catch (Exception e) {}\n+ }\n+ if (!dateParsingSuccessful) {\n+ fail(\"Parsing for date \" + date + \" in root object mapper failed, but shouldnt\");\n+ }\n+ }\n+\n+ // bad case\n+ for (String date : datesThatShouldNotWork) {\n+ for (FormatDateTimeFormatter dateTimeFormatter : RootObjectMapper.Defaults.DYNAMIC_DATE_TIME_FORMATTERS) {\n+ try {\n+ dateTimeFormatter.parser().parseMillis(date);\n+ fail(String.format(Locale.ROOT, \"Expected exception when parsing date %s in root mapper\", date));\n+ } catch (Exception e) {}\n+ }\n+ }\n+ }\n+\n+ private void assertValidDateFormatParsing(String pattern, String dateToParse) {\n+ assertValidDateFormatParsing(pattern, dateToParse, 
dateToParse);\n+ }\n+\n+ private void assertValidDateFormatParsing(String pattern, String dateToParse, String expectedDate) {\n+ FormatDateTimeFormatter formatter = Joda.forPattern(pattern);\n+ assertThat(formatter.printer().print(formatter.parser().parseMillis(dateToParse)), is(expectedDate));\n+ }\n+\n+ private void assertDateFormatParsingThrowingException(String pattern, String invalidDate) {\n+ try {\n+ FormatDateTimeFormatter formatter = Joda.forPattern(pattern);\n+ DateTimeFormatter parser = formatter.parser();\n+ parser.parseMillis(invalidDate);\n+ fail(String.format(Locale.ROOT, \"Expected parsing exception for pattern [%s] with date [%s], but did not happen\", pattern, invalidDate));\n+ } catch (IllegalArgumentException e) {\n+ }\n+ }\n+\n private long utcTimeInMillis(String time) {\n return ISODateTimeFormat.dateOptionalTimeParser().withZone(DateTimeZone.UTC).parseMillis(time);\n }",
"filename": "core/src/test/java/org/elasticsearch/deps/joda/SimpleJodaTests.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,8 @@\n import org.apache.lucene.search.NumericRangeQuery;\n import org.apache.lucene.util.Constants;\n import org.elasticsearch.Version;\n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;\n import org.elasticsearch.action.index.IndexResponse;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.settings.Settings;\n@@ -45,6 +47,7 @@\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n import org.elasticsearch.test.TestSearchContext;\n+import org.elasticsearch.test.VersionUtils;\n import org.joda.time.DateTime;\n import org.joda.time.DateTimeZone;\n import org.junit.Before;\n@@ -55,7 +58,6 @@\n import static com.carrotsearch.randomizedtesting.RandomizedTest.systemPropertyAsBoolean;\n import static org.elasticsearch.common.settings.Settings.settingsBuilder;\n import static org.elasticsearch.index.mapper.string.SimpleStringMappingTests.docValuesType;\n-import static org.elasticsearch.test.VersionUtils.randomVersionBetween;\n import static org.hamcrest.Matchers.*;\n \n public class SimpleDateMappingTests extends ElasticsearchSingleNodeTest {\n@@ -482,4 +484,94 @@ public void testThatEpochCanBeIgnoredWithCustomFormat() throws Exception {\n indexResponse = client().prepareIndex(\"test\", \"test\").setSource(document).get();\n assertThat(indexResponse.isCreated(), is(true));\n }\n+\n+ public void testThatOlderIndicesAllowNonStrictDates() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"date_field\").field(\"type\", \"date\").endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ Version randomVersion = VersionUtils.randomVersionBetween(getRandom(), Version.V_0_90_0, Version.V_1_6_1);\n+ IndexService index = createIndex(\"test\", settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, randomVersion).build());\n+ client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(mapping).get();\n+ assertDateFormat(\"epoch_millis||dateOptionalTime\");\n+ DocumentMapper defaultMapper = index.mapperService().documentMapper(\"type\");\n+\n+ defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"1-1-1T00:00:44.000Z\")\n+ .endObject()\n+ .bytes());\n+\n+ // also test normal date\n+ defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"2015-06-06T00:00:44.000Z\")\n+ .endObject()\n+ .bytes());\n+ }\n+\n+ public void testThatNewIndicesOnlyAllowStrictDates() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"date_field\").field(\"type\", \"date\").endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ IndexService index = createIndex(\"test\");\n+ client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(mapping).get();\n+ assertDateFormat(DateFieldMapper.Defaults.DATE_TIME_FORMATTER.format());\n+ DocumentMapper defaultMapper = index.mapperService().documentMapper(\"type\");\n+\n+ // also test normal date\n+ defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"2015-06-06T00:00:44.000Z\")\n+ .endObject()\n+ .bytes());\n+\n+ try {\n+ 
defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"1-1-1T00:00:44.000Z\")\n+ .endObject()\n+ .bytes());\n+ fail(\"non strict date indexing should have been failed\");\n+ } catch (MapperParsingException e) {\n+ assertThat(e.getCause(), instanceOf(IllegalArgumentException.class));\n+ }\n+ }\n+\n+ public void testThatUpgradingAnOlderIndexToStrictDateWorks() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"date_field\").field(\"type\", \"date\").field(\"format\", \"dateOptionalTime\").endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ Version randomVersion = VersionUtils.randomVersionBetween(getRandom(), Version.V_0_90_0, Version.V_1_6_1);\n+ createIndex(\"test\", settingsBuilder().put(IndexMetaData.SETTING_VERSION_CREATED, randomVersion).build());\n+ client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(mapping).get();\n+ assertDateFormat(\"epoch_millis||dateOptionalTime\");\n+\n+ // index doc\n+ client().prepareIndex(\"test\", \"type\", \"1\").setSource(XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"date_field\", \"2015-06-06T00:00:44.000Z\")\n+ .endObject()).get();\n+\n+ // update mapping\n+ String newMapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"date_field\")\n+ .field(\"type\", \"date\")\n+ .field(\"format\", \"strictDateOptionalTime||epoch_millis\")\n+ .endObject().endObject().endObject().endObject().string();\n+ PutMappingResponse putMappingResponse = client().admin().indices().preparePutMapping(\"test\").setType(\"type\").setSource(newMapping).get();\n+ assertThat(putMappingResponse.isAcknowledged(), is(true));\n+\n+ assertDateFormat(\"strictDateOptionalTime||epoch_millis\");\n+ }\n+\n+ private void assertDateFormat(String expectedFormat) throws IOException {\n+ GetMappingsResponse response = client().admin().indices().prepareGetMappings(\"test\").setTypes(\"type\").get();\n+ Map<String, Object> mappingMap = response.getMappings().get(\"test\").get(\"type\").getSourceAsMap();\n+ Map<String, Object> properties = (Map<String, Object>) mappingMap.get(\"properties\");\n+ Map<String, Object> dateField = (Map<String, Object>) properties.get(\"date_field\");\n+ assertThat((String) dateField.get(\"format\"), is(expectedFormat));\n+ }\n }",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/date/SimpleDateMappingTests.java",
"status": "modified"
},
{
"diff": "@@ -37,7 +37,6 @@\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n-import org.elasticsearch.index.Index;\n import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n@@ -56,14 +55,7 @@\n import static org.elasticsearch.test.VersionUtils.randomVersion;\n import static org.elasticsearch.test.VersionUtils.randomVersionBetween;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.hamcrest.Matchers.containsString;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.hasKey;\n-import static org.hamcrest.Matchers.instanceOf;\n-import static org.hamcrest.Matchers.is;\n-import static org.hamcrest.Matchers.isIn;\n-import static org.hamcrest.Matchers.lessThanOrEqualTo;\n-import static org.hamcrest.Matchers.notNullValue;\n+import static org.hamcrest.Matchers.*;\n \n /**\n */\n@@ -113,8 +105,10 @@ public void testDefaultValues() throws Exception {\n assertThat(docMapper.timestampFieldMapper().fieldType().stored(), equalTo(version.onOrAfter(Version.V_2_0_0)));\n assertThat(docMapper.timestampFieldMapper().fieldType().indexOptions(), equalTo(TimestampFieldMapper.Defaults.FIELD_TYPE.indexOptions()));\n assertThat(docMapper.timestampFieldMapper().path(), equalTo(TimestampFieldMapper.Defaults.PATH));\n- assertThat(docMapper.timestampFieldMapper().fieldType().dateTimeFormatter().format(), equalTo(TimestampFieldMapper.DEFAULT_DATE_TIME_FORMAT));\n assertThat(docMapper.timestampFieldMapper().fieldType().hasDocValues(), equalTo(version.onOrAfter(Version.V_2_0_0)));\n+ String expectedFormat = version.onOrAfter(Version.V_2_0_0) ? TimestampFieldMapper.DEFAULT_DATE_TIME_FORMAT :\n+ TimestampFieldMapper.Defaults.DATE_TIME_FORMATTER_BEFORE_2_0.format();\n+ assertThat(docMapper.timestampFieldMapper().fieldType().dateTimeFormatter().format(), equalTo(expectedFormat));\n assertAcked(client().admin().indices().prepareDelete(\"test\").execute().get());\n }\n }\n@@ -755,7 +749,7 @@ public void testBackcompatPath() throws Exception {\n IndexRequest request = new IndexRequest(\"test\", \"type\", \"1\").source(doc);\n request.process(metaData, mappingMetaData, true, \"test\");\n \n- assertEquals(request.timestamp(), \"1\");\n+ assertThat(request.timestamp(), is(\"1\"));\n }\n \n public void testIncludeInObjectBackcompat() throws Exception {",
"filename": "core/src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java",
"status": "modified"
},
{
"diff": "@@ -1281,9 +1281,9 @@ public void testDSTBoundaryIssue9491() throws InterruptedException, ExecutionExc\n public void testIssue8209() throws InterruptedException, ExecutionException {\n assertAcked(client().admin().indices().prepareCreate(\"test8209\").addMapping(\"type\", \"d\", \"type=date\").get());\n indexRandom(true,\n- client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-01-01T0:00:00Z\"),\n- client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-04-01T0:00:00Z\"),\n- client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-04-30T0:00:00Z\"));\n+ client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-01-01T00:00:00Z\"),\n+ client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-04-01T00:00:00Z\"),\n+ client().prepareIndex(\"test8209\", \"type\").setSource(\"d\", \"2014-04-30T00:00:00Z\"));\n ensureSearchable(\"test8209\");\n SearchResponse response = client().prepareSearch(\"test8209\")\n .addAggregation(dateHistogram(\"histo\").field(\"d\").interval(DateHistogramInterval.MONTH).timeZone(\"CET\")",
"filename": "core/src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
},
{
"diff": "@@ -42,6 +42,7 @@\n import java.util.ArrayList;\n import java.util.List;\n import java.util.concurrent.ExecutionException;\n+import java.util.Locale;\n \n import static org.elasticsearch.client.Requests.indexRequest;\n import static org.elasticsearch.client.Requests.searchRequest;\n@@ -530,17 +531,17 @@ public void testDateWithoutOrigin() throws Exception {\n ensureYellow();\n \n DateTime docDate = dt.minusDays(1);\n- String docDateString = docDate.getYear() + \"-\" + docDate.getMonthOfYear() + \"-\" + docDate.getDayOfMonth();\n+ String docDateString = docDate.getYear() + \"-\" + String.format(Locale.ROOT, \"%02d\", docDate.getMonthOfYear()) + \"-\" + String.format(Locale.ROOT, \"%02d\", docDate.getDayOfMonth());\n client().index(\n indexRequest(\"test\").type(\"type1\").id(\"1\")\n .source(jsonBuilder().startObject().field(\"test\", \"value\").field(\"num1\", docDateString).endObject())).actionGet();\n docDate = dt.minusDays(2);\n- docDateString = docDate.getYear() + \"-\" + docDate.getMonthOfYear() + \"-\" + docDate.getDayOfMonth();\n+ docDateString = docDate.getYear() + \"-\" + String.format(Locale.ROOT, \"%02d\", docDate.getMonthOfYear()) + \"-\" + String.format(Locale.ROOT, \"%02d\", docDate.getDayOfMonth());\n client().index(\n indexRequest(\"test\").type(\"type1\").id(\"2\")\n .source(jsonBuilder().startObject().field(\"test\", \"value\").field(\"num1\", docDateString).endObject())).actionGet();\n docDate = dt.minusDays(3);\n- docDateString = docDate.getYear() + \"-\" + docDate.getMonthOfYear() + \"-\" + docDate.getDayOfMonth();\n+ docDateString = docDate.getYear() + \"-\" + String.format(Locale.ROOT, \"%02d\", docDate.getMonthOfYear()) + \"-\" + String.format(Locale.ROOT, \"%02d\", docDate.getDayOfMonth());\n client().index(\n indexRequest(\"test\").type(\"type1\").id(\"3\")\n .source(jsonBuilder().startObject().field(\"test\", \"value\").field(\"num1\", docDateString).endObject())).actionGet();",
"filename": "core/src/test/java/org/elasticsearch/search/functionscore/DecayFunctionScoreTests.java",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,13 @@ first millisecond of the rounding scope. The semantics work as follows:\n [[built-in]]\n === Built In Formats\n \n+Most of the below dates have a `strict` companion dates, which means, that\n+year, month and day parts of the week must have prepending zeros in order\n+to be valid. This means, that a date like `5/11/1` would not be valid, but\n+you would need to specify the full date, which would be `2005/11/01` in this\n+example. So instead of `date_optional_time` you would need to specify\n+`strict_date_optional_time`.\n+\n The following tables lists all the defaults ISO formats supported:\n \n [cols=\"<,<\",options=\"header\",]\n@@ -92,112 +99,125 @@ offset prefixed by 'T' ('T'HHmmssZ).\n \n |`basic_week_date`|A basic formatter for a full date as four digit\n weekyear, two digit week of weekyear, and one digit day of week\n-(xxxx'W'wwe).\n+(xxxx'W'wwe). `strict_basic_week_date` is supported.\n \n |`basic_week_date_time`|A basic formatter that combines a basic weekyear\n date and time, separated by a 'T' (xxxx'W'wwe'T'HHmmss.SSSZ).\n+`strict_basic_week_date_time` is supported.\n \n |`basic_week_date_time_no_millis`|A basic formatter that combines a basic\n weekyear date and time without millis, separated by a 'T'\n-(xxxx'W'wwe'T'HHmmssZ).\n+(xxxx'W'wwe'T'HHmmssZ). `strict_week_date_time` is supported.\n \n |`date`|A formatter for a full date as four digit year, two digit month\n-of year, and two digit day of month (yyyy-MM-dd).\n-\n+of year, and two digit day of month (yyyy-MM-dd). `strict_date` is supported.\n+_\n |`date_hour`|A formatter that combines a full date and two digit hour of\n-day.\n+day. strict_date_hour` is supported.\n+\n \n |`date_hour_minute`|A formatter that combines a full date, two digit hour\n-of day, and two digit minute of hour.\n+of day, and two digit minute of hour. strict_date_hour_minute` is supported.\n \n |`date_hour_minute_second`|A formatter that combines a full date, two\n digit hour of day, two digit minute of hour, and two digit second of\n-minute.\n+minute. `strict_date_hour_minute_second` is supported.\n \n |`date_hour_minute_second_fraction`|A formatter that combines a full\n date, two digit hour of day, two digit minute of hour, two digit second\n of minute, and three digit fraction of second\n-(yyyy-MM-dd'T'HH:mm:ss.SSS).\n+(yyyy-MM-dd'T'HH:mm:ss.SSS). `strict_date_hour_minute_second_fraction` is supported.\n \n |`date_hour_minute_second_millis`|A formatter that combines a full date,\n two digit hour of day, two digit minute of hour, two digit second of\n minute, and three digit fraction of second (yyyy-MM-dd'T'HH:mm:ss.SSS).\n+`strict_date_hour_minute_second_millis` is supported.\n \n |`date_optional_time`|a generic ISO datetime parser where the date is\n-mandatory and the time is optional.\n+mandatory and the time is optional. `strict_date_optional_time` is supported.\n \n |`date_time`|A formatter that combines a full date and time, separated by\n-a 'T' (yyyy-MM-dd'T'HH:mm:ss.SSSZZ).\n+a 'T' (yyyy-MM-dd'T'HH:mm:ss.SSSZZ). `strict_date_time` is supported.\n \n |`date_time_no_millis`|A formatter that combines a full date and time\n without millis, separated by a 'T' (yyyy-MM-dd'T'HH:mm:ssZZ).\n+`strict_date_time_no_millis` is supported.\n \n-|`hour`|A formatter for a two digit hour of day.\n+|`hour`|A formatter for a two digit hour of day. `strict_hour` is supported.\n \n |`hour_minute`|A formatter for a two digit hour of day and two digit\n-minute of hour.\n+minute of hour. 
`strict_hour_minute` is supported.\n \n |`hour_minute_second`|A formatter for a two digit hour of day, two digit\n minute of hour, and two digit second of minute.\n+`strict_hour_minute_second` is supported.\n \n |`hour_minute_second_fraction`|A formatter for a two digit hour of day,\n two digit minute of hour, two digit second of minute, and three digit\n fraction of second (HH:mm:ss.SSS).\n+`strict_hour_minute_second_fraction` is supported.\n \n |`hour_minute_second_millis`|A formatter for a two digit hour of day, two\n digit minute of hour, two digit second of minute, and three digit\n fraction of second (HH:mm:ss.SSS).\n+`strict_hour_minute_second_millis` is supported.\n \n |`ordinal_date`|A formatter for a full ordinal date, using a four digit\n-year and three digit dayOfYear (yyyy-DDD).\n+year and three digit dayOfYear (yyyy-DDD). `strict_ordinal_date` is supported.\n \n |`ordinal_date_time`|A formatter for a full ordinal date and time, using\n a four digit year and three digit dayOfYear (yyyy-DDD'T'HH:mm:ss.SSSZZ).\n+`strict_ordinal_date_time` is supported.\n \n |`ordinal_date_time_no_millis`|A formatter for a full ordinal date and\n time without millis, using a four digit year and three digit dayOfYear\n (yyyy-DDD'T'HH:mm:ssZZ).\n+`strict_ordinal_date_time_no_millis` is supported.\n \n |`time`|A formatter for a two digit hour of day, two digit minute of\n hour, two digit second of minute, three digit fraction of second, and\n-time zone offset (HH:mm:ss.SSSZZ).\n+time zone offset (HH:mm:ss.SSSZZ). `strict_time` is supported.\n \n |`time_no_millis`|A formatter for a two digit hour of day, two digit\n minute of hour, two digit second of minute, and time zone offset\n-(HH:mm:ssZZ).\n+(HH:mm:ssZZ). `strict_time_no_millis` is supported.\n \n |`t_time`|A formatter for a two digit hour of day, two digit minute of\n hour, two digit second of minute, three digit fraction of second, and\n time zone offset prefixed by 'T' ('T'HH:mm:ss.SSSZZ).\n+`strict_t_time` is supported.\n \n |`t_time_no_millis`|A formatter for a two digit hour of day, two digit\n minute of hour, two digit second of minute, and time zone offset\n-prefixed by 'T' ('T'HH:mm:ssZZ).\n+prefixed by 'T' ('T'HH:mm:ssZZ). `strict_t_time_no_millis` is supported.\n \n |`week_date`|A formatter for a full date as four digit weekyear, two\n digit week of weekyear, and one digit day of week (xxxx-'W'ww-e).\n+`strict_week_date` is supported.\n \n |`week_date_time`|A formatter that combines a full weekyear date and\n time, separated by a 'T' (xxxx-'W'ww-e'T'HH:mm:ss.SSSZZ).\n+`strict_week_date_time` is supported.\n \n-|`weekDateTimeNoMillis`|A formatter that combines a full weekyear date\n+|`week_date_time_no_millis`|A formatter that combines a full weekyear date\n and time without millis, separated by a 'T' (xxxx-'W'ww-e'T'HH:mm:ssZZ).\n+`strict_week_date_time` is supported.\n \n-|`week_year`|A formatter for a four digit weekyear.\n+|`weekyear`|A formatter for a four digit weekyear. `strict_week_year` is supported.\n \n-|`weekyearWeek`|A formatter for a four digit weekyear and two digit week\n-of weekyear.\n+|`weekyear_week`|A formatter for a four digit weekyear and two digit week\n+of weekyear. `strict_weekyear_week` is supported.\n \n-|`weekyearWeekDay`|A formatter for a four digit weekyear, two digit week\n-of weekyear, and one digit day of week.\n+|`weekyear_week_day`|A formatter for a four digit weekyear, two digit week\n+of weekyear, and one digit day of week. 
`strict_weekyear_week_day` is supported.\n \n-|`year`|A formatter for a four digit year.\n+|`year`|A formatter for a four digit year. `strict_year` is supported.\n \n |`year_month`|A formatter for a four digit year and two digit month of\n-year.\n+year. `strict_year_month` is supported.\n \n |`year_month_day`|A formatter for a four digit year, two digit month of\n-year, and two digit day of month.\n+year, and two digit day of month. `strict_year_month_day` is supported.\n \n |`epoch_second`|A formatter for the number of seconds since the epoch.\n Note, that this timestamp allows a max length of 10 chars, so dates",
"filename": "docs/reference/mapping/date-format.asciidoc",
"status": "modified"
},
{
"diff": "@@ -40,7 +40,7 @@ format>> used to parse the provided timestamp value. For example:\n }\n --------------------------------------------------\n \n-Note, the default format is `epoch_millis||dateOptionalTime`. The timestamp value will\n+Note, the default format is `epoch_millis||strictDateOptionalTime`. The timestamp value will\n first be parsed as a number and if it fails the format will be tried.\n \n [float]",
"filename": "docs/reference/mapping/fields/timestamp-field.asciidoc",
"status": "modified"
},
{
"diff": "@@ -349,7 +349,7 @@ date type:\n Defaults to the property/field name.\n \n |`format` |The <<mapping-date-format,date\n-format>>. Defaults to `epoch_millis||dateOptionalTime`.\n+format>>. Defaults to `epoch_millis||strictDateOptionalTime`.\n \n |`store` |Set to `true` to store actual field in the index, `false` to not\n store it. Defaults to `false` (note, the JSON document itself is stored,",
"filename": "docs/reference/mapping/types/core-types.asciidoc",
"status": "modified"
},
{
"diff": "@@ -42,7 +42,7 @@ and will use the matching format as its format attribute. The date\n format itself is explained\n <<mapping-date-format,here>>.\n \n-The default formats are: `dateOptionalTime` (ISO),\n+The default formats are: `strictDateOptionalTime` (ISO) and\n `yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z` and `epoch_millis`.\n \n *Note:* `dynamic_date_formats` are used *only* for dynamically added",
"filename": "docs/reference/mapping/types/root-object-type.asciidoc",
"status": "modified"
},
{
"diff": "@@ -302,6 +302,13 @@ Meta fields can no longer be specified within a document. They should be specifi\n via the API. For example, instead of adding a field `_parent` within a document,\n use the `parent` url parameter when indexing that document.\n \n+==== Default date format now is `strictDateOptionalDate`\n+\n+Instead of `dateOptionalTime` the new default date format now is `strictDateOptionalTime`,\n+which is more strict in parsing dates. This means, that dates now need to have a four digit year,\n+a two-digit month, day, hour, minute and second. This means, you may need to preprend a part of the date\n+with a zero to make it conform or switch back to the old `dateOptionalTime` format.\n+\n ==== Date format does not support unix timestamps by default\n \n In earlier versions of elasticsearch, every timestamp was always tried to be parsed as\n@@ -723,4 +730,4 @@ to prevent clashes with the watcher plugin\n \n === Percolator stats\n \n-Changed the `percolate.getTime` stat (total time spent on percolating) to `percolate.time` state.\n\\ No newline at end of file\n+Changed the `percolate.getTime` stat (total time spent on percolating) to `percolate.time` state.",
"filename": "docs/reference/migration/migrate_2_0.asciidoc",
"status": "modified"
}
]
} |
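The record above swaps Joda's lenient ISO parsers for fixed-width variants (`appendFixedDecimal` / `appendFixedSignedDecimal`). As a minimal standalone sketch — not taken from the PR, assuming only Joda-Time 2.x on the classpath — this shows the behavioural difference the `strict*` formats enforce: variable-width input such as `5-3-3` is accepted by the lenient ISO parser (minimum digit counts only affect printing) but rejected once every field is fixed-width.

```java
import org.joda.time.DateTimeFieldType;
import org.joda.time.format.DateTimeFormatter;
import org.joda.time.format.DateTimeFormatterBuilder;
import org.joda.time.format.ISODateTimeFormat;

public class StrictDateSketch {
    public static void main(String[] args) {
        // Lenient ISO date parser: "5-3-3" parses as year 5, month 3, day 3.
        DateTimeFormatter lenient = ISODateTimeFormat.dateElementParser();
        System.out.println(lenient.parseLocalDate("5-3-3"));      // 0005-03-03

        // Fixed-width parser in the spirit of strictYearMonthDay above:
        // every field must carry its leading zeros.
        DateTimeFormatter strict = new DateTimeFormatterBuilder()
                .appendFixedSignedDecimal(DateTimeFieldType.year(), 4)
                .appendLiteral('-')
                .appendFixedDecimal(DateTimeFieldType.monthOfYear(), 2)
                .appendLiteral('-')
                .appendFixedDecimal(DateTimeFieldType.dayOfMonth(), 2)
                .toFormatter();
        System.out.println(strict.parseLocalDate("0005-03-03"));  // parses

        try {
            strict.parseLocalDate("5-3-3");                       // rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```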
{
"body": "I have field on index is Long type, when I try to search, I'm getting NullPointerException. \n\nElasticsearch: Version 1.1.1\nHere's my traceback:\n\n[2014-05-17 21:09:18,922][DEBUG][action.search.type ] [Orbit] [skoob][0], node[LlsPNTswRgGWnzxnGrgBMQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@2f53db06] lastShard [true]\norg.elasticsearch.search.SearchParseException: [index3][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\n \"query\": {\n \"multi_match\": {\n \"query\": 2014,\n \"fields\": [\n \"year\"\n ]\n }\n },\n \"size\": 20\n}\n]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:634)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:507)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:480)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)\n at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)\n at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.index.search.MultiMatchQuery.forceAnalyzeQueryString(MultiMatchQuery.java:266)\n at org.elasticsearch.index.search.MatchQuery.parse(MatchQuery.java:178)\n at org.elasticsearch.index.search.MultiMatchQuery.parseAndApply(MultiMatchQuery.java:57)\n at org.elasticsearch.index.search.MultiMatchQuery.parse(MultiMatchQuery.java:71)\n at org.elasticsearch.index.query.MultiMatchQueryParser.parse(MultiMatchQueryParser.java:164)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:223)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:330)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:260)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:622)\n ... 11 more\n",
"comments": [
{
"body": "thanks for reporting this - I already referenced a commit and will fix that very soon.\n",
"created_at": "2014-05-18T06:35:13Z"
}
],
"number": 6215,
"title": "Search by integer NullPointerException"
} | {
"body": "In the single field case no query builder is selected which causes NPE\nwhen the query has only a numeric field.\n\nCloses #6215\n",
"number": 6217,
"review_comments": [],
"title": "Use default forceAnalyzeQueryString if no query builder is present"
} | {
"commits": [
{
"message": "Use default forceAnalyzeQueryString if no query builder is present\n\nIn the single field case no query builder is selected which causes NPE\nwhen the query has only a numeric field.\n\nCloses #6215"
}
],
"files": [
{
"diff": "@@ -263,6 +263,6 @@ public Term newTerm(String value) {\n }\n \n protected boolean forceAnalyzeQueryString() {\n- return this.queryBuilder.forceAnalyzeQueryString();\n+ return this.queryBuilder == null ? super.forceAnalyzeQueryString() : this.queryBuilder.forceAnalyzeQueryString();\n }\n }\n\\ No newline at end of file",
"filename": "src/main/java/org/elasticsearch/index/search/MultiMatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -210,6 +210,15 @@ public void testPhraseType() {\n assertHitCount(searchResponse, 2l);\n }\n \n+ @Test\n+ public void testSingleField() {\n+ SearchResponse searchResponse = client().prepareSearch(\"test\")\n+ .setQuery(randomizeType(multiMatchQuery(\"15\", \"skill\"))).get();\n+ assertNoFailures(searchResponse);\n+ assertFirstHit(searchResponse, hasId(\"theone\"));\n+ // TODO we need equivalence tests with match query here\n+ }\n+\n @Test\n public void testCutoffFreq() throws ExecutionException, InterruptedException {\n final long numDocs = client().prepareCount(\"test\")",
"filename": "src/test/java/org/elasticsearch/search/query/MultiMatchQueryTests.java",
"status": "modified"
}
]
} |
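For context on the record above: in the single-field case `MultiMatchQuery` never selects an inner query builder, and with a numeric-only field that `null` was dereferenced in `forceAnalyzeQueryString()`, which the one-line fix guards against. The sketch below is an illustrative reproduction only, not part of the PR — it shows roughly how the failing request from issue #6215 would be issued through the 1.x Java client; the `client` instance is assumed, and the index/field names are taken from the report.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

public class MultiMatchNumericRepro {
    // 'client' is assumed to be an already-connected Client; index and field
    // names ("skoob", "year") come from the issue report.
    public static SearchResponse search(Client client) {
        return client.prepareSearch("skoob")
                .setQuery(QueryBuilders.multiMatchQuery(2014, "year")) // single numeric field
                .setSize(20)
                .execute()
                .actionGet();
    }
}
```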
{
"body": "The DateHistogramBuilder stores and builds the DateHistogram [with a `long` value for the `pre_offset` and `post_offset`](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramBuilder.java#L44), which is neither what the [API docs specify](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html#_pre_post_offset_2) (which specify the format is the data format `1s`, `2d`, etc.) nor what the [DateHistogramParser expect](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramParser.java#L122).\n\nThis forces the improper construction of DateHistogram requests when using the Java API to construct queries. Both the `preOffset` and `postOffset` variables should be converted to `Strings`.\n",
"comments": [
{
"body": "I experience the same error so +1 on fixing this.\n",
"created_at": "2014-05-06T14:56:31Z"
}
],
"number": 5586,
"title": "Aggregations: DateHistogramBuilder uses wrong data type for pre_offset and post_offset"
} | {
"body": "This PR fixes #5586 by correcting the behavior of the current `preOffset` and `postOffset` methods in DateHistogramBuilder and adds new versions of these methods that take `DateHistogram.Interval`. It is patterned after the interval property that already exists in this class.\n\nThis PR is an alternative to #5587. That PR removes the current methods and replaces them with `String` based fields. This seems inconsistent with the other methods in the class and causes a change to the contract currently established by DateHistogramBuilder.\n",
"number": 6216,
"review_comments": [],
"title": "Fix offsets in DateHistogramBuilder"
} | {
"commits": [
{
"message": "fixed offsets in DateHistogramBuilder"
}
],
"files": [
{
"diff": "@@ -41,8 +41,8 @@ public class DateHistogramBuilder extends ValuesSourceAggregationBuilder<DateHis\n private String postZone;\n private boolean preZoneAdjustLargeInterval;\n private String format;\n- long preOffset = 0;\n- long postOffset = 0;\n+ private Object preOffset;\n+ private Object postOffset;\n float factor = 1.0f;\n \n public DateHistogramBuilder(String name) {\n@@ -94,6 +94,16 @@ public DateHistogramBuilder postOffset(long postOffset) {\n return this;\n }\n \n+ public DateHistogramBuilder preOffset( DateHistogram.Interval preOffset ) {\n+ this.preOffset = preOffset;\n+ return this;\n+ }\n+\n+ public DateHistogramBuilder postOffset( DateHistogram.Interval postOffset ) {\n+ this.postOffset = postOffset;\n+ return this;\n+ }\n+\n public DateHistogramBuilder factor(float factor) {\n this.factor = factor;\n return this;\n@@ -153,11 +163,17 @@ protected XContentBuilder doInternalXContent(XContentBuilder builder, Params par\n builder.field(\"pre_zone_adjust_large_interval\", true);\n }\n \n- if (preOffset != 0) {\n+ if (preOffset != null) {\n+ if (preOffset instanceof Number) {\n+ preOffset = TimeValue.timeValueMillis(((Number) preOffset).longValue()).toString();\n+ }\n builder.field(\"pre_offset\", preOffset);\n }\n \n- if (postOffset != 0) {\n+ if (postOffset != null) {\n+ if (postOffset instanceof Number) {\n+ postOffset = TimeValue.timeValueMillis(((Number) postOffset).longValue()).toString();\n+ }\n builder.field(\"post_offset\", postOffset);\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramBuilder.java",
"status": "modified"
},
{
"diff": "@@ -167,6 +167,117 @@ public void singleValuedField() throws Exception {\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n \n+ @Test\n+ public void singleValuedField_WithLongOffsets() throws Exception {\n+ SearchResponse response;\n+ response = client().prepareSearch(\"idx\")\n+ .addAggregation(dateHistogram(\"histo\").field(\"date\")\n+ .interval(DateHistogram.Interval.DAY)\n+ .preOffset(1000*60*60*24*2)\n+ .postOffset(-1000*60*60*24))\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+\n+ DateHistogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ assertThat(histo.getBuckets().size(), equalTo(6));\n+\n+ long key = new DateTime(2012, 1, 3, 0, 0, DateTimeZone.UTC).getMillis();\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 2, 3, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 2, 16, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 3, 3, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 3, 16, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 3, 24, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ }\n+ \n+ @Test\n+ public void singleValuedField_WithIntervalOffsets() throws Exception {\n+ SearchResponse response;\n+ response = client().prepareSearch(\"idx\")\n+ .addAggregation(dateHistogram(\"histo\").field(\"date\")\n+ .interval(DateHistogram.Interval.DAY)\n+ .preOffset(DateHistogram.Interval.days(2))\n+ .postOffset(DateHistogram.Interval.days(-1)))\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+\n+ DateHistogram histo = response.getAggregations().get(\"histo\");\n+ assertThat(histo, notNullValue());\n+ assertThat(histo.getName(), equalTo(\"histo\"));\n+ assertThat(histo.getBuckets().size(), equalTo(6));\n+\n+ long key = new DateTime(2012, 1, 3, 0, 0, DateTimeZone.UTC).getMillis();\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 2, 3, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ 
assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 2, 16, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 3, 3, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 3, 16, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+\n+ key = new DateTime(2012, 3, 24, 0, 0, DateTimeZone.UTC).getMillis();\n+ bucket = histo.getBucketByKey(key);\n+ assertThat(bucket, notNullValue());\n+ assertThat(bucket.getKeyAsNumber().longValue(), equalTo(key));\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ }\n+\n+\n @Test\n public void singleValuedField_WithPostTimeZone() throws Exception {\n SearchResponse response;",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
}
]
} |
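A minimal usage sketch of the two offset flavors discussed in the record above. It follows the builder methods shown in the diff, but the client setup and the index/field names (`idx`, `date`) are illustrative assumptions, not part of the PR.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogram;

class DateHistogramOffsetExample {
    // Assumes an already-connected Client; "idx" and "date" are placeholder names.
    static SearchResponse offsets(Client client) {
        return client.prepareSearch("idx")
                .addAggregation(AggregationBuilders.dateHistogram("histo")
                        .field("date")
                        .interval(DateHistogram.Interval.DAY)
                        // new overload from this PR: an Interval instead of raw millis
                        .preOffset(DateHistogram.Interval.days(2))
                        // pre-existing long overload: millis, now serialized as a time string ("1d")
                        .postOffset(1000L * 60 * 60 * 24))
                .get();
    }
}
```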
{
"body": "The `com.github.spullara.mustache.java:compiler` dependency is not being shaded in the Elasticsearch fat jar:\n\n```\n$ curl -s http://repo1.maven.org/maven2/org/elasticsearch/elasticsearch/1.1.1/elasticsearch-1.1.1.jar | jar tv | grep -i mustache\n 0 Wed Apr 16 14:28:38 EDT 2014 org/elasticsearch/script/mustache/\n 5705 Wed Apr 16 14:28:38 EDT 2014 org/elasticsearch/script/mustache/MustacheScriptEngineService.class\n 1598 Wed Apr 16 14:28:38 EDT 2014 org/elasticsearch/script/mustache/JsonEscapingMustacheFactory.class\n 3246 Wed Apr 16 14:28:38 EDT 2014 org/elasticsearch/script/mustache/MustacheScriptEngineService$MustacheExecutableScript.class\n 0 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/\n 175 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/Binding.class\n 639 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/Code.class\n 0 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/\n 6514 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/DefaultCode.class\n 1951 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/DefaultMustache.class\n 758 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/DepthLimitedWriter.class\n 4121 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/ExtendCode.class\n 1025 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/ExtendNameCode.class\n 1737 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/IterableCode$1.class\n 5438 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/IterableCode.class\n 2183 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/NotIterableCode.class\n 3939 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/PartialCode.class\n 1772 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/ValueCode$1.class\n 5351 Wed Apr 16 14:28:42 EDT 2014 com/github/mustachejava/codes/ValueCode.class\n[...]\n```\n\nStrange since it is being included in the list of shaded dependencies: https://github.com/elasticsearch/elasticsearch/blob/master/pom.xml#L585\n\nMaybe it's because it is declared as an optional dependency? https://github.com/elasticsearch/elasticsearch/blob/master/pom.xml#L176\n",
"comments": [
{
"body": "good catch I will provide a PR in a bit...\n",
"created_at": "2014-05-15T18:45:59Z"
}
],
"number": 6192,
"title": "Mustache dependency not shaded"
} | {
"body": "Previously we shared the jar but never rewrote the packages such\nthat the shading had no effect.\n\nCloses #6192\n",
"number": 6193,
"review_comments": [],
"title": "Shade mustache into org.elasticsearch.common package"
} | {
"commits": [
{
"message": "Shade mustache into org.elasticsearch.common package\n\nPreviously we shared the jar but never rewrote the packages such\nthat the shading had no effect.\n\nCloses #6192"
}
],
"files": [
{
"diff": "@@ -619,6 +619,10 @@\n <pattern>com.ning.compress</pattern>\n <shadedPattern>org.elasticsearch.common.compress</shadedPattern>\n </relocation>\n+ <relocation>\n+\t\t\t <pattern>com.github.mustachejava</pattern>\n+ <shadedPattern>org.elasticsearch.common.mustache</shadedPattern>\n+ </relocation>\n <relocation>\n <pattern>com.tdunning.math.stats</pattern>\n <shadedPattern>org.elasticsearch.common.stats</shadedPattern>",
"filename": "pom.xml",
"status": "modified"
}
]
} |
{
"body": "",
"comments": [
{
"body": "should we get this into `1.1.2` as well?\n",
"created_at": "2014-05-18T10:04:57Z"
},
{
"body": "It would be difficult because of all deprecated gateways that are still there.\n",
"created_at": "2014-05-19T14:22:16Z"
},
{
"body": "oh yeah I see makes sense - in that case no `1.1.2`\n",
"created_at": "2014-05-19T14:22:49Z"
}
],
"number": 6181,
"title": "Unregistering snapshot repositories causes thread leaks"
} | {
"body": " Closes #6181\n",
"number": 6182,
"review_comments": [],
"title": "Switch to shared thread pool for all snapshot repositories"
} | {
"commits": [
{
"message": "Switch to shared thread pool for all snapshot repositories\n\n Closes #6181"
}
],
"files": [
{
"diff": "@@ -67,7 +67,6 @@ on all data and master nodes. The following settings are supported:\n [horizontal]\n `location`:: Location of the snapshots. Mandatory.\n `compress`:: Turns on compression of the snapshot files. Defaults to `true`.\n-`concurrent_streams`:: Throttles the number of streams (per node) preforming snapshot operation. Defaults to `5`\n `chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified in bytes or by\n using size value notation, i.e. 1g, 10m, 5k. Defaults to `null` (unlimited chunk size).\n `max_restore_bytes_per_sec`:: Throttles per node restore rate. Defaults to `20mb` per second.\n@@ -83,8 +82,6 @@ point to the root of the shared filesystem repository. The following settings ar\n \n [horizontal]\n `url`:: Location of the snapshots. Mandatory.\n-`concurrent_streams`:: Throttles the number of streams (per node) preforming snapshot operation. Defaults to `5`\n-\n \n [float]\n ===== Repository plugins",
"filename": "docs/reference/modules/snapshots.asciidoc",
"status": "modified"
},
{
"diff": "@@ -36,13 +36,23 @@ pools, but the important ones include:\n size `# of available processors`.\n queue_size `1000`.\n \n-`warmer`:: \n+`snapshot`::\n+ For snapshot/restore operations, defaults to `scaling`\n+ keep-alive `5m`,\n+ size `(# of available processors)/2`.\n+\n+`snapshot_data`::\n+ For snapshot/restore operations on data files, defaults to `scaling`\n+ with a `5m` keep-alive,\n+ size `5`.\n+\n+`warmer`::\n For segment warm-up operations, defaults to `scaling`\n with a `5m` keep-alive. \n \n `refresh`:: \n For refresh operations, defaults to `scaling`\n- with a `5m` keep-alive. \n+ with a `5m` keep-alive.\n \n Changing a specific thread pool can be done by setting its type and\n specific type parameters, for example, changing the `index` thread pool",
"filename": "docs/reference/modules/threadpool.asciidoc",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.File;\n import java.util.concurrent.Executor;\n@@ -37,15 +38,16 @@\n */\n public class FsBlobStore extends AbstractComponent implements BlobStore {\n \n- private final Executor executor;\n+ private final ThreadPool threadPool;\n \n private final File path;\n \n private final int bufferSizeInBytes;\n \n- public FsBlobStore(Settings settings, Executor executor, File path) {\n+ public FsBlobStore(Settings settings, ThreadPool threadPool, File path) {\n super(settings);\n this.path = path;\n+ this.threadPool = threadPool;\n if (!path.exists()) {\n boolean b = FileSystemUtils.mkdirs(path);\n if (!b) {\n@@ -56,7 +58,6 @@ public FsBlobStore(Settings settings, Executor executor, File path) {\n throw new BlobStoreException(\"Path is not a directory at [\" + path + \"]\");\n }\n this.bufferSizeInBytes = (int) settings.getAsBytesSize(\"buffer_size\", new ByteSizeValue(100, ByteSizeUnit.KB)).bytes();\n- this.executor = executor;\n }\n \n @Override\n@@ -73,7 +74,7 @@ public int bufferSizeInBytes() {\n }\n \n public Executor executor() {\n- return executor;\n+ return threadPool.executor(ThreadPool.Names.SNAPSHOT_DATA);\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobStore.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.ByteSizeUnit;\n import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.elasticsearch.threadpool.ThreadPool;\n \n import java.net.MalformedURLException;\n import java.net.URL;\n@@ -37,7 +38,7 @@\n */\n public class URLBlobStore extends AbstractComponent implements BlobStore {\n \n- private final Executor executor;\n+ private final ThreadPool threadPool;\n \n private final URL path;\n \n@@ -53,14 +54,14 @@ public class URLBlobStore extends AbstractComponent implements BlobStore {\n * </dl>\n *\n * @param settings settings\n- * @param executor executor for read operations\n+ * @param threadPool thread pool for read operations\n * @param path base URL\n */\n- public URLBlobStore(Settings settings, Executor executor, URL path) {\n+ public URLBlobStore(Settings settings, ThreadPool threadPool, URL path) {\n super(settings);\n this.path = path;\n this.bufferSizeInBytes = (int) settings.getAsBytesSize(\"buffer_size\", new ByteSizeValue(100, ByteSizeUnit.KB)).bytes();\n- this.executor = executor;\n+ this.threadPool = threadPool;\n }\n \n /**\n@@ -95,7 +96,7 @@ public int bufferSizeInBytes() {\n * @return executor\n */\n public Executor executor() {\n- return executor;\n+ return threadPool.executor(ThreadPool.Names.SNAPSHOT_DATA);\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/common/blobstore/url/URLBlobStore.java",
"status": "modified"
},
{
"diff": "@@ -31,7 +31,6 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.gateway.IndexShardGateway;\n import org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException;\n-import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.shard.AbstractIndexShardComponent;\n import org.elasticsearch.index.shard.IndexShardState;\n@@ -41,6 +40,7 @@\n import org.elasticsearch.index.translog.Translog;\n import org.elasticsearch.index.translog.TranslogStreams;\n import org.elasticsearch.index.translog.fs.FsTranslog;\n+import org.elasticsearch.indices.recovery.RecoveryState;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.threadpool.ThreadPool;\n \n@@ -274,7 +274,7 @@ public void run() {\n return;\n }\n if (indexShard.state() == IndexShardState.STARTED && indexShard.translog().syncNeeded()) {\n- threadPool.executor(ThreadPool.Names.SNAPSHOT).execute(new Runnable() {\n+ threadPool.executor(ThreadPool.Names.FLUSH).execute(new Runnable() {\n @Override\n public void run() {\n try {",
"filename": "src/main/java/org/elasticsearch/index/gateway/local/LocalIndexShardGateway.java",
"status": "modified"
},
{
"diff": "@@ -89,6 +89,7 @@\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.SearchModule;\n import org.elasticsearch.search.SearchService;\n+import org.elasticsearch.snapshots.SnapshotsService;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.elasticsearch.threadpool.ThreadPoolModule;\n import org.elasticsearch.transport.TransportModule;\n@@ -223,6 +224,7 @@ public Node start() {\n injector.getInstance(IndicesClusterStateService.class).start();\n injector.getInstance(IndicesTTLService.class).start();\n injector.getInstance(RiversManager.class).start();\n+ injector.getInstance(SnapshotsService.class).start();\n injector.getInstance(ClusterService.class).start();\n injector.getInstance(RoutingService.class).start();\n injector.getInstance(SearchService.class).start();\n@@ -263,6 +265,7 @@ public Node stop() {\n \n injector.getInstance(RiversManager.class).stop();\n \n+ injector.getInstance(SnapshotsService.class).stop();\n // stop any changes happening as a result of cluster state changes\n injector.getInstance(IndicesClusterStateService.class).stop();\n // we close indices first, so operations won't be allowed on it\n@@ -317,6 +320,8 @@ public void close() {\n stopWatch.stop().start(\"rivers\");\n injector.getInstance(RiversManager.class).close();\n \n+ stopWatch.stop().start(\"snapshot_service\");\n+ injector.getInstance(SnapshotsService.class).close();\n stopWatch.stop().start(\"client\");\n injector.getInstance(Client.class).close();\n stopWatch.stop().start(\"indices_cluster\");",
"filename": "src/main/java/org/elasticsearch/node/internal/InternalNode.java",
"status": "modified"
},
{
"diff": "@@ -24,17 +24,15 @@\n import org.elasticsearch.common.blobstore.fs.FsBlobStore;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.unit.ByteSizeValue;\n-import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.index.snapshots.IndexShardRepository;\n import org.elasticsearch.repositories.RepositoryException;\n import org.elasticsearch.repositories.RepositoryName;\n import org.elasticsearch.repositories.RepositorySettings;\n import org.elasticsearch.repositories.blobstore.BlobStoreRepository;\n+import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.File;\n import java.io.IOException;\n-import java.util.concurrent.ExecutorService;\n-import java.util.concurrent.TimeUnit;\n \n /**\n * Shared file system implementation of the BlobStoreRepository\n@@ -68,7 +66,7 @@ public class FsRepository extends BlobStoreRepository {\n * @throws IOException\n */\n @Inject\n- public FsRepository(RepositoryName name, RepositorySettings repositorySettings, IndexShardRepository indexShardRepository) throws IOException {\n+ public FsRepository(RepositoryName name, RepositorySettings repositorySettings, ThreadPool threadPool, IndexShardRepository indexShardRepository) throws IOException {\n super(name.getName(), repositorySettings, indexShardRepository);\n File locationFile;\n String location = repositorySettings.settings().get(\"location\", componentSettings.get(\"location\"));\n@@ -78,9 +76,7 @@ public FsRepository(RepositoryName name, RepositorySettings repositorySettings,\n } else {\n locationFile = new File(location);\n }\n- int concurrentStreams = repositorySettings.settings().getAsInt(\"concurrent_streams\", componentSettings.getAsInt(\"concurrent_streams\", 5));\n- ExecutorService concurrentStreamPool = EsExecutors.newScaling(1, concurrentStreams, 60, TimeUnit.SECONDS, EsExecutors.daemonThreadFactory(settings, \"[fs_stream]\"));\n- blobStore = new FsBlobStore(componentSettings, concurrentStreamPool, locationFile);\n+ blobStore = new FsBlobStore(componentSettings, threadPool, locationFile);\n this.chunkSize = repositorySettings.settings().getAsBytesSize(\"chunk_size\", componentSettings.getAsBytesSize(\"chunk_size\", null));\n this.compress = repositorySettings.settings().getAsBoolean(\"compress\", componentSettings.getAsBoolean(\"compress\", false));\n this.basePath = BlobPath.cleanPath();",
"filename": "src/main/java/org/elasticsearch/repositories/fs/FsRepository.java",
"status": "modified"
},
{
"diff": "@@ -25,17 +25,15 @@\n import org.elasticsearch.common.blobstore.BlobStore;\n import org.elasticsearch.common.blobstore.url.URLBlobStore;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.index.snapshots.IndexShardRepository;\n import org.elasticsearch.repositories.RepositoryException;\n import org.elasticsearch.repositories.RepositoryName;\n import org.elasticsearch.repositories.RepositorySettings;\n import org.elasticsearch.repositories.blobstore.BlobStoreRepository;\n+import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n import java.net.URL;\n-import java.util.concurrent.ExecutorService;\n-import java.util.concurrent.TimeUnit;\n \n /**\n * Read-only URL-based implementation of the BlobStoreRepository\n@@ -65,7 +63,7 @@ public class URLRepository extends BlobStoreRepository {\n * @throws IOException\n */\n @Inject\n- public URLRepository(RepositoryName name, RepositorySettings repositorySettings, IndexShardRepository indexShardRepository) throws IOException {\n+ public URLRepository(RepositoryName name, RepositorySettings repositorySettings, ThreadPool threadPool, IndexShardRepository indexShardRepository) throws IOException {\n super(name.getName(), repositorySettings, indexShardRepository);\n URL url;\n String path = repositorySettings.settings().get(\"url\", componentSettings.get(\"url\"));\n@@ -74,10 +72,8 @@ public URLRepository(RepositoryName name, RepositorySettings repositorySettings,\n } else {\n url = new URL(path);\n }\n- int concurrentStreams = repositorySettings.settings().getAsInt(\"concurrent_streams\", componentSettings.getAsInt(\"concurrent_streams\", 5));\n- ExecutorService concurrentStreamPool = EsExecutors.newScaling(1, concurrentStreams, 60, TimeUnit.SECONDS, EsExecutors.daemonThreadFactory(settings, \"[fs_stream]\"));\n listDirectories = repositorySettings.settings().getAsBoolean(\"list_directories\", componentSettings.getAsBoolean(\"list_directories\", true));\n- blobStore = new URLBlobStore(componentSettings, concurrentStreamPool, url);\n+ blobStore = new URLBlobStore(componentSettings, threadPool, url);\n basePath = BlobPath.cleanPath();\n }\n ",
"filename": "src/main/java/org/elasticsearch/repositories/uri/URLRepository.java",
"status": "modified"
},
{
"diff": "@@ -65,6 +65,7 @@ public class RestThreadPoolAction extends AbstractCatAction {\n ThreadPool.Names.REFRESH,\n ThreadPool.Names.SEARCH,\n ThreadPool.Names.SNAPSHOT,\n+ ThreadPool.Names.SNAPSHOT_DATA,\n ThreadPool.Names.SUGGEST,\n ThreadPool.Names.WARMER\n };\n@@ -82,6 +83,7 @@ public class RestThreadPoolAction extends AbstractCatAction {\n \"r\",\n \"s\",\n \"sn\",\n+ \"sd\",\n \"su\",\n \"w\"\n };",
"filename": "src/main/java/org/elasticsearch/rest/action/cat/RestThreadPoolAction.java",
"status": "modified"
},
{
"diff": "@@ -35,7 +35,7 @@\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Strings;\n-import org.elasticsearch.common.component.AbstractComponent;\n+import org.elasticsearch.common.component.AbstractLifecycleComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n@@ -56,6 +56,10 @@\n import java.io.IOException;\n import java.util.*;\n import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.locks.Condition;\n+import java.util.concurrent.locks.Lock;\n+import java.util.concurrent.locks.ReentrantLock;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static com.google.common.collect.Maps.newHashMap;\n@@ -79,7 +83,7 @@\n * notifies all {@link #snapshotCompletionListeners} that snapshot is completed, and finally calls {@link #removeSnapshotFromClusterState(SnapshotId, SnapshotInfo, Throwable)} to remove snapshot from cluster state</li>\n * </ul>\n */\n-public class SnapshotsService extends AbstractComponent implements ClusterStateListener {\n+public class SnapshotsService extends AbstractLifecycleComponent<SnapshotsService> implements ClusterStateListener {\n \n private final ClusterService clusterService;\n \n@@ -93,6 +97,10 @@ public class SnapshotsService extends AbstractComponent implements ClusterStateL\n \n private volatile ImmutableMap<SnapshotId, SnapshotShards> shardSnapshots = ImmutableMap.of();\n \n+ private final Lock shutdownLock = new ReentrantLock();\n+\n+ private final Condition shutdownCondition = shutdownLock.newCondition();\n+\n private final CopyOnWriteArrayList<SnapshotCompletionListener> snapshotCompletionListeners = new CopyOnWriteArrayList<>();\n \n \n@@ -678,7 +686,16 @@ private void processIndexShardSnapshots(SnapshotMetaData snapshotMetaData) {\n \n // Update the list of snapshots that we saw and tried to started\n // If startup of these shards fails later, we don't want to try starting these shards again\n- shardSnapshots = ImmutableMap.copyOf(survivors);\n+ shutdownLock.lock();\n+ try {\n+ shardSnapshots = ImmutableMap.copyOf(survivors);\n+ if (shardSnapshots.isEmpty()) {\n+ // Notify all waiting threads that no more snapshots\n+ shutdownCondition.signalAll();\n+ }\n+ } finally {\n+ shutdownLock.unlock();\n+ }\n \n // We have new snapshots to process -\n if (newSnapshots != null) {\n@@ -1101,6 +1118,30 @@ public void removeListener(SnapshotCompletionListener listener) {\n this.snapshotCompletionListeners.remove(listener);\n }\n \n+ @Override\n+ protected void doStart() throws ElasticsearchException {\n+\n+ }\n+\n+ @Override\n+ protected void doStop() throws ElasticsearchException {\n+ shutdownLock.lock();\n+ try {\n+ while(!shardSnapshots.isEmpty() && shutdownCondition.await(5, TimeUnit.SECONDS)) {\n+ // Wait for at most 5 second for locally running snapshots to finish\n+ }\n+ } catch (InterruptedException ex) {\n+ Thread.currentThread().interrupt();\n+ } finally {\n+ shutdownLock.unlock();\n+ }\n+ }\n+\n+ @Override\n+ protected void doClose() throws ElasticsearchException {\n+\n+ }\n+\n /**\n * Listener for create snapshot operation\n */",
"filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -74,6 +74,7 @@ public static class Names {\n public static final String REFRESH = \"refresh\";\n public static final String WARMER = \"warmer\";\n public static final String SNAPSHOT = \"snapshot\";\n+ public static final String SNAPSHOT_DATA = \"snapshot_data\";\n public static final String OPTIMIZE = \"optimize\";\n public static final String BENCH = \"bench\";\n }\n@@ -117,6 +118,7 @@ public ThreadPool(Settings settings, @Nullable NodeSettingsService nodeSettingsS\n .put(Names.REFRESH, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", halfProcMaxAt10).build())\n .put(Names.WARMER, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", halfProcMaxAt5).build())\n .put(Names.SNAPSHOT, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", halfProcMaxAt5).build())\n+ .put(Names.SNAPSHOT_DATA, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", 5).build())\n .put(Names.OPTIMIZE, settingsBuilder().put(\"type\", \"fixed\").put(\"size\", 1).build())\n .put(Names.BENCH, settingsBuilder().put(\"type\", \"scaling\").put(\"keep_alive\", \"5m\").put(\"size\", halfProcMaxAt5).build())\n .build();",
"filename": "src/main/java/org/elasticsearch/threadpool/ThreadPool.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.repositories.RepositoryName;\n import org.elasticsearch.repositories.RepositorySettings;\n import org.elasticsearch.repositories.fs.FsRepository;\n+import org.elasticsearch.threadpool.ThreadPool;\n \n import java.io.IOException;\n import java.io.InputStream;\n@@ -71,8 +72,8 @@ public long getFailureCount() {\n private volatile boolean blocked = false;\n \n @Inject\n- public MockRepository(RepositoryName name, RepositorySettings repositorySettings, IndexShardRepository indexShardRepository) throws IOException {\n- super(name, repositorySettings, indexShardRepository);\n+ public MockRepository(RepositoryName name, ThreadPool threadPool, RepositorySettings repositorySettings, IndexShardRepository indexShardRepository) throws IOException {\n+ super(name, repositorySettings, threadPool, indexShardRepository);\n randomControlIOExceptionRate = repositorySettings.settings().getAsDouble(\"random_control_io_exception_rate\", 0.0);\n randomDataFileIOExceptionRate = repositorySettings.settings().getAsDouble(\"random_data_file_io_exception_rate\", 0.0);\n blockOnControlFiles = repositorySettings.settings().getAsBoolean(\"block_on_control\", false);",
"filename": "src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java",
"status": "modified"
},
{
"diff": "@@ -294,7 +294,7 @@ private static Settings getRandomNodeSettings(long seed) {\n for (String name : Arrays.asList(ThreadPool.Names.BULK, ThreadPool.Names.FLUSH, ThreadPool.Names.GET,\n ThreadPool.Names.INDEX, ThreadPool.Names.MANAGEMENT, ThreadPool.Names.MERGE, ThreadPool.Names.OPTIMIZE,\n ThreadPool.Names.PERCOLATE, ThreadPool.Names.REFRESH, ThreadPool.Names.SEARCH, ThreadPool.Names.SNAPSHOT,\n- ThreadPool.Names.SUGGEST, ThreadPool.Names.WARMER)) {\n+ ThreadPool.Names.SNAPSHOT_DATA, ThreadPool.Names.SUGGEST, ThreadPool.Names.WARMER)) {\n if (random.nextBoolean()) {\n final String type = RandomPicks.randomFrom(random, Arrays.asList(\"fixed\", \"cached\", \"scaling\"));\n builder.put(ThreadPool.THREADPOOL_GROUP + name + \".type\", type);",
"filename": "src/test/java/org/elasticsearch/test/TestCluster.java",
"status": "modified"
}
]
} |
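As a hypothetical illustration only: the new `snapshot_data` pool introduced by the PR above could be tuned through node settings. The keys follow the `threadpool.<name>.*` pattern used in the TestCluster diff; the concrete values here are examples, not recommendations.

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

class SnapshotDataPoolSettingsExample {
    // Overrides for the scaling snapshot_data pool (example values only).
    static Settings snapshotDataPoolSettings() {
        return ImmutableSettings.settingsBuilder()
                .put("threadpool.snapshot_data.type", "scaling")
                .put("threadpool.snapshot_data.keep_alive", "5m")
                .put("threadpool.snapshot_data.size", 5)
                .build();
    }
}
```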
{
"body": "On a small machine, the BENCH thread pool can get sized to 1, leaving no threads available to perform other tasks such as listing and aborting.\n",
"comments": [
{
"body": "+1 to use `GENERIC` good that we found this before this gets released\n",
"created_at": "2014-05-14T17:41:52Z"
}
],
"number": 6174,
"title": "Benchmark: Use GENERIC thread pool for status/abort benchmark calls"
} | {
"body": "We configure the threadpools according to the number of processors which is\ndifferent on every machine. Yet, we had some test failures related to this\nand #6174 that only happened reproducibly on a node with 1 available processor.\nThis commit does:\n- sometimes randomize the number of available processors\n- if we don't randomize we should set the actual number of available processors\n in the settings on the test node\n- always print out the num of processors when a test fails to make sure we can\n reproduce the thread pool settings with the reproduce info line\n\nCloses #6176\n",
"number": 6177,
"review_comments": [],
"title": "[TEST] Randomize number of available processors"
} | {
"commits": [
{
"message": "[TEST] Randomize number of available processors\n\nWe configure the threadpools according to the number of processors which is\ndifferent on every machine. Yet, we had some test failures related to this\nand #6174 that only happened reproducibly on a node with 1 available processor.\nThis commit does:\n * sometimes randomize the number of available processors\n * if we don't randomize we should set the actual number of available processors\n in the settings on the test node\n * always print out the num of processors when a test fails to make sure we can\n reproduce the thread pool settings with the reproduce info line\n\nCloses #6176"
}
],
"files": [
{
"diff": "@@ -458,6 +458,7 @@\n <java.io.tmpdir>.</java.io.tmpdir> <!-- we use '.' since this is different per JVM-->\n <!-- RandomizedTesting library system properties -->\n <tests.jvm.argline>${tests.jvm.argline}</tests.jvm.argline>\n+ <tests.processors>${tests.processors}</tests.processors>\n <tests.appendseed>${tests.appendseed}</tests.appendseed>\n <tests.iters>${tests.iters}</tests.iters>\n <tests.maxfailures>${tests.maxfailures}</tests.maxfailures>",
"filename": "pom.xml",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,12 @@\n */\n public class EsExecutors {\n \n+ /**\n+ * Settings key to manually set the number of available processors.\n+ * This is used to adjust thread pools sizes etc. per node.\n+ */\n+ public static final String PROCESSORS = \"processors\";\n+\n /**\n * Returns the number of processors available but at most <tt>32</tt>.\n */\n@@ -37,7 +43,7 @@ public static int boundedNumberOfProcessors(Settings settings) {\n * ie. >= 48 create too many threads and run into OOM see #3478\n * We just use an 32 core upper-bound here to not stress the system\n * too much with too many created threads */\n- return settings.getAsInt(\"processors\", Math.min(32, Runtime.getRuntime().availableProcessors()));\n+ return settings.getAsInt(PROCESSORS, Math.min(32, Runtime.getRuntime().availableProcessors()));\n }\n \n public static PrioritizedEsThreadPoolExecutor newSinglePrioritizing(ThreadFactory threadFactory) {",
"filename": "src/main/java/org/elasticsearch/common/util/concurrent/EsExecutors.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,8 @@\n import com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule;\n import org.apache.lucene.util.LuceneTestCase.SuppressCodecs;\n import org.elasticsearch.common.lucene.Lucene;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.elasticsearch.test.junit.listeners.ReproduceInfoPrinter;\n@@ -90,6 +92,8 @@ public abstract class AbstractRandomizedTest extends RandomizedTest {\n \n public static final String SYSPROP_INTEGRATION = \"tests.integration\";\n \n+ public static final String SYSPROP_PROCESSORS = \"tests.processors\";\n+\n // -----------------------------------------------------------------\n // Truly immutable fields and constants, initialized once and valid \n // for all suites ever since.\n@@ -128,12 +132,20 @@ public abstract class AbstractRandomizedTest extends RandomizedTest {\n */\n public static final File TEMP_DIR;\n \n+ public static final int TESTS_PROCESSORS;\n+\n static {\n String s = System.getProperty(\"tempDir\", System.getProperty(\"java.io.tmpdir\"));\n if (s == null)\n throw new RuntimeException(\"To run tests, you need to define system property 'tempDir' or 'java.io.tmpdir'.\");\n TEMP_DIR = new File(s);\n TEMP_DIR.mkdirs();\n+\n+ String processors = System.getProperty(SYSPROP_PROCESSORS, \"\"); // mvn sets \"\" as default\n+ if (processors == null || processors.isEmpty()) {\n+ processors = Integer.toString(EsExecutors.boundedNumberOfProcessors(ImmutableSettings.EMPTY));\n+ }\n+ TESTS_PROCESSORS = Integer.parseInt(processors);\n }\n \n /**",
"filename": "src/test/java/org/apache/lucene/util/AbstractRandomizedTest.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import com.google.common.util.concurrent.Futures;\n import com.google.common.util.concurrent.ListenableFuture;\n import com.google.common.util.concurrent.SettableFuture;\n+import org.apache.lucene.util.AbstractRandomizedTest;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n@@ -302,6 +303,11 @@ private static Settings getRandomNodeSettings(long seed) {\n }\n builder.put(\"plugins.isolation\", random.nextBoolean());\n builder.put(InternalGlobalOrdinalsBuilder.ORDINAL_MAPPING_THRESHOLD_INDEX_SETTING_KEY, 1 + random.nextInt(InternalGlobalOrdinalsBuilder.ORDINAL_MAPPING_THRESHOLD_DEFAULT));\n+ if (random.nextInt(10) == 0) {\n+ builder.put(EsExecutors.PROCESSORS, 1 + random.nextInt(AbstractRandomizedTest.TESTS_PROCESSORS));\n+ } else {\n+ builder.put(EsExecutors.PROCESSORS, AbstractRandomizedTest.TESTS_PROCESSORS);\n+ }\n return builder.build();\n }\n ",
"filename": "src/test/java/org/elasticsearch/test/TestCluster.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n import com.carrotsearch.randomizedtesting.RandomizedContext;\n import com.carrotsearch.randomizedtesting.ReproduceErrorMessageBuilder;\n import com.carrotsearch.randomizedtesting.TraceFormatting;\n+import org.apache.lucene.util.AbstractRandomizedTest;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.Loggers;\n@@ -139,6 +140,7 @@ public ReproduceErrorMessageBuilder appendESProperties() {\n if (System.getProperty(\"tests.jvm.argline\") != null && !System.getProperty(\"tests.jvm.argline\").isEmpty()) {\n appendOpt(\"tests.jvm.argline\", \"\\\"\" + System.getProperty(\"tests.jvm.argline\") + \"\\\"\");\n }\n+ appendOpt(AbstractRandomizedTest.SYSPROP_PROCESSORS, Integer.toString(AbstractRandomizedTest.TESTS_PROCESSORS));\n return this;\n }\n ",
"filename": "src/test/java/org/elasticsearch/test/junit/listeners/ReproduceInfoPrinter.java",
"status": "modified"
}
]
} |
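To make the mechanism in the record above concrete, here is a minimal sketch of pinning the bounded processor count via the `processors` setting (`EsExecutors.PROCESSORS`) that this change introduces; the value 4 is an arbitrary example.

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.util.concurrent.EsExecutors;

class ProcessorsSettingExample {
    static int pinnedProcessors() {
        // Makes thread pool sizing behave as if the node had exactly 4 processors.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put(EsExecutors.PROCESSORS, 4)
                .build();
        return EsExecutors.boundedNumberOfProcessors(settings); // returns 4
    }
}
```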
{
"body": "On a small machine, the BENCH thread pool can get sized to 1, leaving no threads available to perform other tasks such as listing and aborting.\n",
"comments": [
{
"body": "+1 to use `GENERIC` good that we found this before this gets released\n",
"created_at": "2014-05-14T17:41:52Z"
}
],
"number": 6174,
"title": "Benchmark: Use GENERIC thread pool for status/abort benchmark calls"
} | {
"body": "On small hardware, the BENCH thread pool can be set to size 1. This is\nproblematic as it means that while a benchmark is active, there are no\nthreads available to service administrative tasks such as listing and\naborting. This change fixes that by executing list and abort operations\non the GENERIC thread pool.\n\nCloses #6174\n",
"number": 6175,
"review_comments": [],
"title": "Fix bug for BENCH thread pool size == 1"
} | {
"commits": [
{
"message": "Fix bug for BENCH thread pool size == 1\n\nOn small hardware, the BENCH thread pool can be set to size 1. This is\nproblematic as it means that while a benchmark is active, there are no\nthreads available to service administrative tasks such as listing and\naborting. This change fixes that by executing list and abort operations\non the GENERIC thread pool.\n\nCloses #6174"
}
],
"files": [
{
"diff": "@@ -286,7 +286,8 @@ public void messageReceived(NodeStatusRequest request, TransportChannel channel)\n \n @Override\n public String executor() {\n- return ThreadPool.Names.BENCH;\n+ // Perform management tasks on GENERIC so as not to block pending acquisition of a thread from BENCH.\n+ return ThreadPool.Names.GENERIC;\n }\n }\n \n@@ -308,7 +309,8 @@ public void messageReceived(NodeAbortRequest request, TransportChannel channel)\n \n @Override\n public String executor() {\n- return ThreadPool.Names.BENCH;\n+ // Perform management tasks on GENERIC so as not to block pending acquisition of a thread from BENCH.\n+ return ThreadPool.Names.GENERIC;\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/bench/BenchmarkService.java",
"status": "modified"
}
]
} |
{
"body": "I see tons of these logs when I run some tests:\n\n```\n[2014-05-13 12:17:52,428][WARN ][org.elasticsearch.transport.netty] [node_1] Message not fully read (response) for [91] handler org.elasticsearch.transport.EmptyTransportResponseHandler@7bfccc86, error [false], resetting\n[2014-05-13 12:17:52,432][WARN ][org.elasticsearch.transport.netty] [node_2] Message not fully read (response) for [50] handler org.elasticsearch.transport.EmptyTransportResponseHandler@7bfccc86, error [false], resetting\n...\n```\n\nI think this is related to #5730 since we now actually send a non-empty response back on freeContext.\n",
"comments": [],
"number": 6147,
"title": "Lot's of warn logs related to FreeContext"
} | {
"body": "Since #5730 we write a boolean in the FreeContextResponse which should be deserialized\n\nCloses #6147\n",
"number": 6148,
"review_comments": [],
"title": "Read full message on free context"
} | {
"commits": [
{
"message": "Read full message on free context\n\nSince #5730 we write a boolean in the FreeContextResponse which should be deserialized\n\nCloses #6147"
}
],
"files": [
{
"diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.search.SearchService;\n import org.elasticsearch.search.dfs.DfsSearchResult;\n@@ -54,26 +53,48 @@\n */\n public class SearchServiceTransportAction extends AbstractComponent {\n \n- static final class FreeContextResponseHandler extends EmptyTransportResponseHandler {\n+ static final class FreeContextResponseHandler implements TransportResponseHandler<SearchFreeContextResponse> {\n \n- private final ESLogger logger;\n+ private final ActionListener<Boolean> listener;\n \n- FreeContextResponseHandler(ESLogger logger) {\n- super(ThreadPool.Names.SAME);\n- this.logger = logger;\n+ FreeContextResponseHandler(final ActionListener<Boolean> listener) {\n+ this.listener = listener;\n+ }\n+\n+ @Override\n+ public SearchFreeContextResponse newInstance() {\n+ return new SearchFreeContextResponse();\n+ }\n+\n+ @Override\n+ public void handleResponse(SearchFreeContextResponse response) {\n+ listener.onResponse(response.freed);\n }\n \n @Override\n public void handleException(TransportException exp) {\n- logger.warn(\"Failed to send release search context\", exp);\n+ listener.onFailure(exp);\n }\n- }\n \n+ @Override\n+ public String executor() {\n+ return ThreadPool.Names.SAME;\n+ }\n+ }\n+ //\n private final ThreadPool threadPool;\n private final TransportService transportService;\n private final ClusterService clusterService;\n private final SearchService searchService;\n- private final FreeContextResponseHandler freeContextResponseHandler = new FreeContextResponseHandler(logger);\n+ private final FreeContextResponseHandler freeContextResponseHandler = new FreeContextResponseHandler(new ActionListener<Boolean>() {\n+ @Override\n+ public void onResponse(Boolean aBoolean) {}\n+\n+ @Override\n+ public void onFailure(Throwable exp) {\n+ logger.warn(\"Failed to send release search context\", exp);\n+ }\n+ });\n \n @Inject\n public SearchServiceTransportAction(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterService clusterService, SearchService searchService) {\n@@ -110,27 +131,7 @@ public void sendFreeContext(DiscoveryNode node, long contextId, ClearScrollReque\n boolean freed = searchService.freeContext(contextId);\n actionListener.onResponse(freed);\n } else {\n- transportService.sendRequest(node, SearchFreeContextTransportHandler.ACTION, new SearchFreeContextRequest(request, contextId), new TransportResponseHandler<SearchFreeContextResponse>() {\n- @Override\n- public SearchFreeContextResponse newInstance() {\n- return new SearchFreeContextResponse();\n- }\n-\n- @Override\n- public void handleResponse(SearchFreeContextResponse response) {\n- actionListener.onResponse(response.isFreed());\n- }\n-\n- @Override\n- public void handleException(TransportException exp) {\n- actionListener.onFailure(exp);\n- }\n-\n- @Override\n- public String executor() {\n- return ThreadPool.Names.SAME;\n- }\n- });\n+ transportService.sendRequest(node, SearchFreeContextTransportHandler.ACTION, new SearchFreeContextRequest(request, contextId), new FreeContextResponseHandler(actionListener));\n }\n }\n \n@@ -532,7 +533,7 @@ public void run() {\n }\n }\n \n- class SearchFreeContextRequest extends TransportRequest {\n+ static class SearchFreeContextRequest extends TransportRequest 
{\n \n private long id;\n \n@@ -561,7 +562,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n }\n \n- class SearchFreeContextResponse extends TransportResponse {\n+ static class SearchFreeContextResponse extends TransportResponse {\n \n private boolean freed;\n \n@@ -618,7 +619,7 @@ public String executor() {\n }\n }\n \n- class ClearScrollContextsRequest extends TransportRequest {\n+ static class ClearScrollContextsRequest extends TransportRequest {\n \n ClearScrollContextsRequest() {\n }",
"filename": "src/main/java/org/elasticsearch/search/action/SearchServiceTransportAction.java",
"status": "modified"
}
]
} |
{
"body": "```\nDELETE http://127.0.0.1:9200/_search/scroll/asdasdadasdasd HTTP/1.1\n```\n\nReturns the following response\n\n```\nHTTP/1.1 500 Internal Server Error\nContent-Type: application/json; charset=UTF-8\nContent-Length: 73\n\n{\n \"error\" : \"ArrayIndexOutOfBoundsException[1]\",\n \"status\" : 500\n}\n```\n\nWhile a 404 is expected. \n\nIt would also be nice if we can allow the scroll id to be posted. I've had people hit problems with scroll ids that are too big in the past:\n\nhttps://github.com/elasticsearch/elasticsearch-net/issues/318\n",
"comments": [
{
"body": "@Mpdreamz Actually that specific scroll id is malformed and that is where the ArrayIndexOutOfBoundsException comes from, so I think a 400 should be returned?\n\nIf a non existent scroll_id is used, it will just return and act like everything is fine. I agree a 404 would be nice.\n",
"created_at": "2014-04-08T14:31:56Z"
},
{
"body": "++404\n",
"created_at": "2014-04-08T16:25:19Z"
},
{
"body": "++404 and +1 on implementing #5726 @martijnvg ! \n",
"created_at": "2014-04-08T17:31:49Z"
},
{
"body": "PR #5738 only addresses invalid scroll ids. Returning a 404 for a valid, but non existing scroll id requires more work than just validation. The clear scoll api uses an internal free search context api, which for example the search api relies on. This internal api just always returns an empty response. I can change that, so that it includes whether it actually has removed a search context, but that requires a change in the transport layer, so I like to do separate that in a different PR.\n",
"created_at": "2014-04-09T11:29:03Z"
},
{
"body": "LGTM\n",
"created_at": "2014-04-14T20:47:59Z"
},
{
"body": "@martijnvg can you assign the fix version here please\n",
"created_at": "2014-04-14T20:48:38Z"
},
{
"body": "@s1monw PR #5738 only handles invalid scroll ids, but this issue is also about returning a 404 when a valid scroll id doesn't exist. I will assign the proper versions in PR and leave this issue open, once the missing scroll id has been addressed this issue can be closed.\n",
"created_at": "2014-04-16T02:34:38Z"
}
],
"number": 5730,
"title": "clear scroll throws 500 on array out of bounds exception"
} | {
"body": "Since #5730 we write a boolean in the FreeContextResponse which should be deserialized\n\nCloses #6147\n",
"number": 6148,
"review_comments": [],
"title": "Read full message on free context"
} | {
"commits": [
{
"message": "Read full message on free context\n\nSince #5730 we write a boolean in the FreeContextResponse which should be deserialized\n\nCloses #6147"
}
],
"files": [
{
"diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n-import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.search.SearchService;\n import org.elasticsearch.search.dfs.DfsSearchResult;\n@@ -54,26 +53,48 @@\n */\n public class SearchServiceTransportAction extends AbstractComponent {\n \n- static final class FreeContextResponseHandler extends EmptyTransportResponseHandler {\n+ static final class FreeContextResponseHandler implements TransportResponseHandler<SearchFreeContextResponse> {\n \n- private final ESLogger logger;\n+ private final ActionListener<Boolean> listener;\n \n- FreeContextResponseHandler(ESLogger logger) {\n- super(ThreadPool.Names.SAME);\n- this.logger = logger;\n+ FreeContextResponseHandler(final ActionListener<Boolean> listener) {\n+ this.listener = listener;\n+ }\n+\n+ @Override\n+ public SearchFreeContextResponse newInstance() {\n+ return new SearchFreeContextResponse();\n+ }\n+\n+ @Override\n+ public void handleResponse(SearchFreeContextResponse response) {\n+ listener.onResponse(response.freed);\n }\n \n @Override\n public void handleException(TransportException exp) {\n- logger.warn(\"Failed to send release search context\", exp);\n+ listener.onFailure(exp);\n }\n- }\n \n+ @Override\n+ public String executor() {\n+ return ThreadPool.Names.SAME;\n+ }\n+ }\n+ //\n private final ThreadPool threadPool;\n private final TransportService transportService;\n private final ClusterService clusterService;\n private final SearchService searchService;\n- private final FreeContextResponseHandler freeContextResponseHandler = new FreeContextResponseHandler(logger);\n+ private final FreeContextResponseHandler freeContextResponseHandler = new FreeContextResponseHandler(new ActionListener<Boolean>() {\n+ @Override\n+ public void onResponse(Boolean aBoolean) {}\n+\n+ @Override\n+ public void onFailure(Throwable exp) {\n+ logger.warn(\"Failed to send release search context\", exp);\n+ }\n+ });\n \n @Inject\n public SearchServiceTransportAction(Settings settings, ThreadPool threadPool, TransportService transportService, ClusterService clusterService, SearchService searchService) {\n@@ -110,27 +131,7 @@ public void sendFreeContext(DiscoveryNode node, long contextId, ClearScrollReque\n boolean freed = searchService.freeContext(contextId);\n actionListener.onResponse(freed);\n } else {\n- transportService.sendRequest(node, SearchFreeContextTransportHandler.ACTION, new SearchFreeContextRequest(request, contextId), new TransportResponseHandler<SearchFreeContextResponse>() {\n- @Override\n- public SearchFreeContextResponse newInstance() {\n- return new SearchFreeContextResponse();\n- }\n-\n- @Override\n- public void handleResponse(SearchFreeContextResponse response) {\n- actionListener.onResponse(response.isFreed());\n- }\n-\n- @Override\n- public void handleException(TransportException exp) {\n- actionListener.onFailure(exp);\n- }\n-\n- @Override\n- public String executor() {\n- return ThreadPool.Names.SAME;\n- }\n- });\n+ transportService.sendRequest(node, SearchFreeContextTransportHandler.ACTION, new SearchFreeContextRequest(request, contextId), new FreeContextResponseHandler(actionListener));\n }\n }\n \n@@ -532,7 +533,7 @@ public void run() {\n }\n }\n \n- class SearchFreeContextRequest extends TransportRequest {\n+ static class SearchFreeContextRequest extends TransportRequest 
{\n \n private long id;\n \n@@ -561,7 +562,7 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n }\n \n- class SearchFreeContextResponse extends TransportResponse {\n+ static class SearchFreeContextResponse extends TransportResponse {\n \n private boolean freed;\n \n@@ -618,7 +619,7 @@ public String executor() {\n }\n }\n \n- class ClearScrollContextsRequest extends TransportRequest {\n+ static class ClearScrollContextsRequest extends TransportRequest {\n \n ClearScrollContextsRequest() {\n }",
"filename": "src/main/java/org/elasticsearch/search/action/SearchServiceTransportAction.java",
"status": "modified"
}
]
} |
{
"body": "Currently the snapshot process fails immediately with an error if one of the indices involved in the snapshot process has a relocating primary shard.\n",
"comments": [],
"number": 5531,
"title": "Add an ability to snapshot relocating primary shards"
} | {
"body": "This change adds a new cluster state that waits for the replication of a shard to finish before starting snapshotting process. Because this change adds a new snapshot state, it will not be possible to do rolling upgrade from 1.1 branch to 1.2 with snapshot is being executed in the cluster.\n\nCloses #5531\n",
"number": 6139,
"review_comments": [
{
"body": "can we name this differently it's super hard to differentiate between the member and the local var. I also would love to not use `null` as the marker but make it final and check if it is empty? The extra object creation here is not worth the complexity IMO. We might even think of putting this into a separate method for readability ie `Map<X,Y> findWaitingIndices()`?\n",
"created_at": "2014-05-14T07:43:16Z"
},
{
"body": "can we have a call that does that I really want to prevent `Thread.sleep` in our tests... await busy might be ok as well but we can also do a cluster health with wait for relocation?\n",
"created_at": "2014-05-14T07:45:04Z"
},
{
"body": "This pattern adds a boat load of complexity all over our code can we prevent using `null` as an invariant?\n",
"created_at": "2014-05-14T07:47:21Z"
},
{
"body": "can we invert this to have only one return statement?\n",
"created_at": "2014-05-14T07:49:03Z"
},
{
"body": "should we log the source as well as the exception?\n",
"created_at": "2014-05-14T07:50:17Z"
},
{
"body": "dump question what if it is unassigned? do we fail earlier?\n",
"created_at": "2014-05-14T07:51:00Z"
},
{
"body": "should this be `Map` vs. `HashMap`?\n",
"created_at": "2014-05-14T07:55:40Z"
},
{
"body": "If shard goes unassigned or its routing table disappears completely we fail 6 lines below, right after the comment (// Shard that we were waiting for went into unassigned state or disappeared - giving up)\n",
"created_at": "2014-05-16T00:08:24Z"
},
{
"body": "could we make this test somehow operations based rather than time based. The Problem with time-based tests is that you have a very very hard time to get anywhere near reproducability. I know this is multithreaded anyways so it won't really reproduces but we can nicely randomize the num ops per thread which make it a bit nicer :)\n",
"created_at": "2014-05-19T08:07:03Z"
},
{
"body": "can you put a latch in here to make sure we are not half way done once the last thread starts?\n",
"created_at": "2014-05-19T08:09:03Z"
},
{
"body": "instead of sleeping can you fire up an operation and wait until it comes back like a list snapshots or so?\n",
"created_at": "2014-05-19T08:09:52Z"
},
{
"body": "you should use the `awaitBusy` method here which doesn't have a fixed sleep but increase the sleep interval in quadratic steps...\n",
"created_at": "2014-05-19T08:11:04Z"
},
{
"body": "I wonder if it would make sense to add a method that allows us to filter the snapshots here and only return the ones relevant to this method. This method is crazy :)\n",
"created_at": "2014-05-19T08:13:58Z"
},
{
"body": "like a method that returns all started shards ie `public void Map<ShardId, ShardSnapshotStatus> getStarted(SnapshotMetaData meta)` this would make this method soo much simpler\n",
"created_at": "2014-05-19T08:17:07Z"
}
],
"title": "Add ability to snapshot replicating primary shards"
} | {
"commits": [
{
"message": "Add ability to snapshot replicating primary shards\n\nThis change adds a new cluster state that waits for the replication of a shard to finish before starting snapshotting process. Because this change adds a new snapshot state, an pre-1.2.0 nodes will not be able to join the 1.2.0 cluster that is currently running snapshot/restore operation.\n\nCloses #5531"
}
],
"files": [
{
"diff": "@@ -131,7 +131,10 @@ changed since the last snapshot. That allows multiple snapshots to be preserved\n Snapshotting process is executed in non-blocking fashion. All indexing and searching operation can continue to be\n executed against the index that is being snapshotted. However, a snapshot represents the point-in-time view of the index\n at the moment when snapshot was created, so no records that were added to the index after snapshot process had started\n-will be present in the snapshot.\n+will be present in the snapshot. The snapshot process starts immediately for the primary shards that has been started\n+and are not relocating at the moment. Before version 1.2.0 the snapshot operation fails if cluster has any relocating or\n+initializing primaries of indices participating in the snapshot. Starting with version 1.2.0, Elasticsearch waits for\n+are relocating or initializing shards to start before snapshotting them.\n \n Besides creating a copy of each index the snapshot process can also store global cluster metadata, which includes persistent\n cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of",
"filename": "docs/reference/modules/snapshots.asciidoc",
"status": "modified"
},
{
"diff": "@@ -49,6 +49,7 @@ private SnapshotIndexShardStatus() {\n SnapshotIndexShardStatus(String index, int shardId, SnapshotIndexShardStage stage) {\n super(index, shardId);\n this.stage = stage;\n+ this.stats = new SnapshotStats();\n }\n \n SnapshotIndexShardStatus(ShardId shardId, IndexShardSnapshotStatus indexShardStatus) {",
"filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStatus.java",
"status": "modified"
},
{
"diff": "@@ -111,9 +111,13 @@ protected void masterOperation(final SnapshotsStatusRequest request,\n snapshotIds, request.masterNodeTimeout(), new ActionListener<TransportNodesSnapshotsStatus.NodesSnapshotStatus>() {\n @Override\n public void onResponse(TransportNodesSnapshotsStatus.NodesSnapshotStatus nodeSnapshotStatuses) {\n- ImmutableList<SnapshotMetaData.Entry> currentSnapshots =\n- snapshotsService.currentSnapshots(request.repository(), request.snapshots());\n- listener.onResponse(buildResponse(request, currentSnapshots, nodeSnapshotStatuses));\n+ try {\n+ ImmutableList<SnapshotMetaData.Entry> currentSnapshots =\n+ snapshotsService.currentSnapshots(request.repository(), request.snapshots());\n+ listener.onResponse(buildResponse(request, currentSnapshots, nodeSnapshotStatuses));\n+ } catch (Throwable e) {\n+ listener.onFailure(e);\n+ }\n }\n \n @Override\n@@ -169,6 +173,7 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Im\n stage = SnapshotIndexShardStage.FAILURE;\n break;\n case INIT:\n+ case WAITING:\n case STARTED:\n stage = SnapshotIndexShardStage.STARTED;\n break;",
"filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,8 @@\n import java.io.IOException;\n import java.util.Map;\n \n+import static com.google.common.collect.Maps.newHashMap;\n+\n /**\n * Meta data about snapshots that are currently executing\n */\n@@ -63,6 +65,7 @@ public static class Entry {\n private final boolean includeGlobalState;\n private final ImmutableMap<ShardId, ShardSnapshotStatus> shards;\n private final ImmutableList<String> indices;\n+ private final ImmutableMap<String, ImmutableList<ShardId>> waitingIndices;\n \n public Entry(SnapshotId snapshotId, boolean includeGlobalState, State state, ImmutableList<String> indices, ImmutableMap<ShardId, ShardSnapshotStatus> shards) {\n this.state = state;\n@@ -71,8 +74,10 @@ public Entry(SnapshotId snapshotId, boolean includeGlobalState, State state, Imm\n this.indices = indices;\n if (shards == null) {\n this.shards = ImmutableMap.of();\n+ this.waitingIndices = ImmutableMap.of();\n } else {\n this.shards = shards;\n+ this.waitingIndices = findWaitingIndices(shards);\n }\n }\n \n@@ -92,6 +97,10 @@ public ImmutableList<String> indices() {\n return indices;\n }\n \n+ public ImmutableMap<String, ImmutableList<ShardId>> waitingIndices() {\n+ return waitingIndices;\n+ }\n+\n public boolean includeGlobalState() {\n return includeGlobalState;\n }\n@@ -121,6 +130,31 @@ public int hashCode() {\n result = 31 * result + indices.hashCode();\n return result;\n }\n+\n+ private ImmutableMap<String, ImmutableList<ShardId>> findWaitingIndices(ImmutableMap<ShardId, ShardSnapshotStatus> shards) {\n+ Map<String, ImmutableList.Builder<ShardId>> waitingIndicesMap = newHashMap();\n+ for (ImmutableMap.Entry<ShardId, ShardSnapshotStatus> entry : shards.entrySet()) {\n+ if (entry.getValue().state() == State.WAITING) {\n+ ImmutableList.Builder<ShardId> waitingShards = waitingIndicesMap.get(entry.getKey().getIndex());\n+ if (waitingShards == null) {\n+ waitingShards = ImmutableList.builder();\n+ waitingIndicesMap.put(entry.getKey().getIndex(), waitingShards);\n+ }\n+ waitingShards.add(entry.getKey());\n+ }\n+ }\n+ if (!waitingIndicesMap.isEmpty()) {\n+ ImmutableMap.Builder<String, ImmutableList<ShardId>> waitingIndicesBuilder = ImmutableMap.builder();\n+ for (Map.Entry<String, ImmutableList.Builder<ShardId>> entry : waitingIndicesMap.entrySet()) {\n+ waitingIndicesBuilder.put(entry.getKey(), entry.getValue().build());\n+ }\n+ return waitingIndicesBuilder.build();\n+ } else {\n+ return ImmutableMap.of();\n+ }\n+\n+ }\n+\n }\n \n public static class ShardSnapshotStatus {\n@@ -199,61 +233,36 @@ public int hashCode() {\n }\n \n public static enum State {\n- INIT((byte) 0),\n- STARTED((byte) 1),\n- SUCCESS((byte) 2),\n- FAILED((byte) 3),\n- ABORTED((byte) 4),\n- MISSING((byte) 5);\n+ INIT((byte) 0, false, false),\n+ STARTED((byte) 1, false, false),\n+ SUCCESS((byte) 2, true, false),\n+ FAILED((byte) 3, true, true),\n+ ABORTED((byte) 4, false, true),\n+ MISSING((byte) 5, true, true),\n+ WAITING((byte) 6, false, false);\n \n private byte value;\n \n- State(byte value) {\n+ private boolean completed;\n+\n+ private boolean failed;\n+\n+ State(byte value, boolean completed, boolean failed) {\n this.value = value;\n+ this.completed = completed;\n+ this.failed = failed;\n }\n \n public byte value() {\n return value;\n }\n \n public boolean completed() {\n- switch (this) {\n- case INIT:\n- return false;\n- case STARTED:\n- return false;\n- case SUCCESS:\n- return true;\n- case FAILED:\n- return true;\n- case ABORTED:\n- return false;\n- case MISSING:\n- return true;\n- default:\n- assert false;\n- return 
true;\n- }\n+ return completed;\n }\n \n public boolean failed() {\n- switch (this) {\n- case INIT:\n- return false;\n- case STARTED:\n- return false;\n- case SUCCESS:\n- return false;\n- case FAILED:\n- return true;\n- case ABORTED:\n- return true;\n- case MISSING:\n- return true;\n- default:\n- assert false;\n- return false;\n- }\n+ return failed;\n }\n \n public static State fromValue(byte value) {\n@@ -270,6 +279,8 @@ public static State fromValue(byte value) {\n return ABORTED;\n case 5:\n return MISSING;\n+ case 6:\n+ return WAITING;\n default:\n throw new ElasticsearchIllegalArgumentException(\"No snapshot state for value [\" + value + \"]\");\n }",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/SnapshotMetaData.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.UnmodifiableIterator;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.common.Booleans;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n@@ -282,6 +283,20 @@ public boolean isAllNodes(String... nodesIds) {\n return nodesIds == null || nodesIds.length == 0 || (nodesIds.length == 1 && nodesIds[0].equals(\"_all\"));\n }\n \n+\n+ /**\n+ * Returns the version of the node with the oldest version in the cluster\n+ *\n+ * @return the oldest version in the cluster\n+ */\n+ public Version smallestVersion() {\n+ Version version = Version.CURRENT;\n+ for (ObjectCursor<DiscoveryNode> cursor : nodes.values()) {\n+ version = Version.smallest(version, cursor.value.version());\n+ }\n+ return version;\n+ }\n+\n /**\n * Resolve a node with a given id\n *",
"filename": "src/main/java/org/elasticsearch/cluster/node/DiscoveryNodes.java",
"status": "modified"
},
{
"diff": "@@ -254,10 +254,13 @@ public void clusterChanged(ClusterChangedEvent event) {\n return;\n }\n \n+ logger.trace(\"processing new index repositories for state version [{}]\", event.state().version());\n+\n Map<String, RepositoryHolder> survivors = newHashMap();\n // First, remove repositories that are no longer there\n for (Map.Entry<String, RepositoryHolder> entry : repositories.entrySet()) {\n if (newMetaData == null || newMetaData.repository(entry.getKey()) == null) {\n+ logger.debug(\"unregistering repository [{}]\", entry.getKey());\n closeRepository(entry.getKey(), entry.getValue());\n } else {\n survivors.put(entry.getKey(), entry.getValue());\n@@ -273,13 +276,15 @@ public void clusterChanged(ClusterChangedEvent event) {\n // Found previous version of this repository\n if (!holder.type.equals(repositoryMetaData.type()) || !holder.settings.equals(repositoryMetaData.settings())) {\n // Previous version is different from the version in settings\n+ logger.debug(\"updating repository [{}]\", repositoryMetaData.name());\n closeRepository(repositoryMetaData.name(), holder);\n holder = createRepositoryHolder(repositoryMetaData);\n }\n } else {\n holder = createRepositoryHolder(repositoryMetaData);\n }\n if (holder != null) {\n+ logger.debug(\"registering repository [{}]\", repositoryMetaData.name());\n builder.put(repositoryMetaData.name(), holder);\n }\n }",
"filename": "src/main/java/org/elasticsearch/repositories/RepositoriesService.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.apache.lucene.util.CollectionUtil;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.Version;\n import org.elasticsearch.action.search.ShardSearchFailure;\n import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.cluster.*;\n@@ -33,6 +34,8 @@\n import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.IndexRoutingTable;\n+import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.cluster.routing.RoutingTable;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n@@ -63,7 +66,6 @@\n \n import static com.google.common.collect.Lists.newArrayList;\n import static com.google.common.collect.Maps.newHashMap;\n-import static com.google.common.collect.Maps.newHashMapWithExpectedSize;\n import static com.google.common.collect.Sets.newHashSet;\n \n /**\n@@ -503,6 +505,9 @@ public void clusterChanged(ClusterChangedEvent event) {\n if (event.nodesRemoved()) {\n processSnapshotsOnRemovedNodes(event);\n }\n+ if (event.routingTableChanged()) {\n+ processStartedShards(event);\n+ }\n }\n SnapshotMetaData prev = event.previousState().metaData().custom(SnapshotMetaData.TYPE);\n SnapshotMetaData curr = event.state().metaData().custom(SnapshotMetaData.TYPE);\n@@ -605,6 +610,112 @@ public void onFailure(String source, Throwable t) {\n }\n }\n \n+ private void processStartedShards(ClusterChangedEvent event) {\n+ if (waitingShardsStartedOrUnassigned(event)) {\n+ clusterService.submitStateUpdateTask(\"update snapshot state after shards started\", new ClusterStateUpdateTask() {\n+ @Override\n+ public ClusterState execute(ClusterState currentState) throws Exception {\n+ MetaData metaData = currentState.metaData();\n+ RoutingTable routingTable = currentState.routingTable();\n+ MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());\n+ SnapshotMetaData snapshots = metaData.custom(SnapshotMetaData.TYPE);\n+ if (snapshots != null) {\n+ boolean changed = false;\n+ ArrayList<SnapshotMetaData.Entry> entries = newArrayList();\n+ for (final SnapshotMetaData.Entry snapshot : snapshots.entries()) {\n+ SnapshotMetaData.Entry updatedSnapshot = snapshot;\n+ if (snapshot.state() == State.STARTED) {\n+ ImmutableMap<ShardId, ShardSnapshotStatus> shards = processWaitingShards(snapshot.shards(), routingTable);\n+ if (shards != null) {\n+ changed = true;\n+ if (!snapshot.state().completed() && completed(shards.values())) {\n+ updatedSnapshot = new SnapshotMetaData.Entry(snapshot.snapshotId(), snapshot.includeGlobalState(), State.SUCCESS, snapshot.indices(), shards);\n+ endSnapshot(updatedSnapshot);\n+ } else {\n+ updatedSnapshot = new SnapshotMetaData.Entry(snapshot.snapshotId(), snapshot.includeGlobalState(), snapshot.state(), snapshot.indices(), shards);\n+ }\n+ }\n+ entries.add(updatedSnapshot);\n+ }\n+ }\n+ if (changed) {\n+ snapshots = new SnapshotMetaData(entries.toArray(new SnapshotMetaData.Entry[entries.size()]));\n+ mdBuilder.putCustom(SnapshotMetaData.TYPE, snapshots);\n+ return ClusterState.builder(currentState).metaData(mdBuilder).build();\n+ }\n+ }\n+ return currentState;\n+ }\n+\n+ @Override\n+ public void onFailure(String source, Throwable t) {\n+ logger.warn(\"failed to update snapshot state after shards started from [{}] \", 
t, source);\n+ }\n+ });\n+ }\n+ }\n+\n+ private ImmutableMap<ShardId, ShardSnapshotStatus> processWaitingShards(ImmutableMap<ShardId, ShardSnapshotStatus> snapshotShards, RoutingTable routingTable) {\n+ boolean snapshotChanged = false;\n+ ImmutableMap.Builder<ShardId, ShardSnapshotStatus> shards = ImmutableMap.builder();\n+ for (ImmutableMap.Entry<ShardId, ShardSnapshotStatus> shardEntry : snapshotShards.entrySet()) {\n+ ShardSnapshotStatus shardStatus = shardEntry.getValue();\n+ if (shardStatus.state() == State.WAITING) {\n+ ShardId shardId = shardEntry.getKey();\n+ IndexRoutingTable indexShardRoutingTable = routingTable.index(shardId.getIndex());\n+ if (indexShardRoutingTable != null) {\n+ IndexShardRoutingTable shardRouting = indexShardRoutingTable.shard(shardId.id());\n+ if (shardRouting != null && shardRouting.primaryShard() != null) {\n+ if (shardRouting.primaryShard().started()) {\n+ // Shard that we were waiting for has started on a node, let's process it\n+ snapshotChanged = true;\n+ logger.trace(\"starting shard that we were waiting for [{}] on node [{}]\", shardEntry.getKey(), shardStatus.nodeId());\n+ shards.put(shardEntry.getKey(), new ShardSnapshotStatus(shardRouting.primaryShard().currentNodeId()));\n+ continue;\n+ } else if (shardRouting.primaryShard().initializing() || shardRouting.primaryShard().relocating()) {\n+ // Shard that we were waiting for hasn't started yet or still relocating - will continue to wait\n+ shards.put(shardEntry);\n+ continue;\n+ }\n+ }\n+ }\n+ // Shard that we were waiting for went into unassigned state or disappeared - giving up\n+ snapshotChanged = true;\n+ logger.warn(\"failing snapshot of shard [{}] on unassigned shard [{}]\", shardEntry.getKey(), shardStatus.nodeId());\n+ shards.put(shardEntry.getKey(), new ShardSnapshotStatus(shardStatus.nodeId(), State.FAILED, \"shard is unassigned\"));\n+ } else {\n+ shards.put(shardEntry);\n+ }\n+ }\n+ if (snapshotChanged) {\n+ return shards.build();\n+ } else {\n+ return null;\n+ }\n+ }\n+\n+ private boolean waitingShardsStartedOrUnassigned(ClusterChangedEvent event) {\n+ SnapshotMetaData curr = event.state().metaData().custom(SnapshotMetaData.TYPE);\n+ if (curr != null) {\n+ for (SnapshotMetaData.Entry entry : curr.entries()) {\n+ if (entry.state() == State.STARTED && !entry.waitingIndices().isEmpty()) {\n+ for (String index : entry.waitingIndices().keySet()) {\n+ if (event.indexRoutingTableChanged(index)) {\n+ IndexRoutingTable indexShardRoutingTable = event.state().getRoutingTable().index(index);\n+ for (ShardId shardId : entry.waitingIndices().get(index)) {\n+ ShardRouting shardRouting = indexShardRoutingTable.shard(shardId.id()).primaryShard();\n+ if (shardRouting != null && (shardRouting.started() || shardRouting.unassigned())) {\n+ return true;\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ }\n+ return false;\n+ }\n+\n private boolean removedNodesCleanupNeeded(ClusterChangedEvent event) {\n // Check if we just became the master\n boolean newMaster = !event.previousState().nodes().localNodeMaster();\n@@ -646,44 +757,51 @@ private void processIndexShardSnapshots(SnapshotMetaData snapshotMetaData) {\n \n // For now we will be mostly dealing with a single snapshot at a time but might have multiple simultaneously running\n // snapshots in the future\n- HashMap<SnapshotId, SnapshotShards> newSnapshots = null;\n+ Map<SnapshotId, Map<ShardId, IndexShardSnapshotStatus>> newSnapshots = newHashMap();\n // Now go through all snapshots and update existing or create missing\n final String localNodeId = 
clusterService.localNode().id();\n for (SnapshotMetaData.Entry entry : snapshotMetaData.entries()) {\n- HashMap<ShardId, IndexShardSnapshotStatus> startedShards = null;\n- for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n- // Check if we have new shards to start processing on\n- if (localNodeId.equals(shard.getValue().nodeId())) {\n- if (entry.state() == State.STARTED) {\n- if (startedShards == null) {\n- startedShards = newHashMap();\n- }\n- startedShards.put(shard.getKey(), new IndexShardSnapshotStatus());\n- } else if (entry.state() == State.ABORTED) {\n- SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n- if (snapshotShards != null) {\n- IndexShardSnapshotStatus snapshotStatus = snapshotShards.shards.get(shard.getKey());\n- if (snapshotStatus != null) {\n- snapshotStatus.abort();\n- }\n+ if (entry.state() == State.STARTED) {\n+ Map<ShardId, IndexShardSnapshotStatus> startedShards = newHashMap();\n+ SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n+ for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n+ // Add all new shards to start processing on\n+ if (localNodeId.equals(shard.getValue().nodeId())) {\n+ if (shard.getValue().state() == State.INIT && (snapshotShards == null || !snapshotShards.shards.containsKey(shard.getKey()))) {\n+ logger.trace(\"[{}] - Adding shard to the queue\", shard.getKey());\n+ startedShards.put(shard.getKey(), new IndexShardSnapshotStatus());\n }\n }\n }\n- }\n- if (startedShards != null) {\n- if (!survivors.containsKey(entry.snapshotId())) {\n- if (newSnapshots == null) {\n- newSnapshots = newHashMapWithExpectedSize(2);\n+ if (!startedShards.isEmpty()) {\n+ newSnapshots.put(entry.snapshotId(), startedShards);\n+ if (snapshotShards != null) {\n+ // We already saw this snapshot but we need to add more started shards\n+ ImmutableMap.Builder<ShardId, IndexShardSnapshotStatus> shards = ImmutableMap.builder();\n+ // Put all shards that were already running on this node\n+ shards.putAll(snapshotShards.shards);\n+ // Put all newly started shards\n+ shards.putAll(startedShards);\n+ survivors.put(entry.snapshotId(), new SnapshotShards(shards.build()));\n+ } else {\n+ // Brand new snapshot that we haven't seen before\n+ survivors.put(entry.snapshotId(), new SnapshotShards(ImmutableMap.copyOf(startedShards)));\n+ }\n+ }\n+ } else if (entry.state() == State.ABORTED) {\n+ // Abort all running shards for this snapshot\n+ SnapshotShards snapshotShards = shardSnapshots.get(entry.snapshotId());\n+ if (snapshotShards != null) {\n+ for (Map.Entry<ShardId, SnapshotMetaData.ShardSnapshotStatus> shard : entry.shards().entrySet()) {\n+ IndexShardSnapshotStatus snapshotStatus = snapshotShards.shards.get(shard.getKey());\n+ if (snapshotStatus != null) {\n+ snapshotStatus.abort();\n+ }\n }\n- newSnapshots.put(entry.snapshotId(), new SnapshotShards(ImmutableMap.copyOf(startedShards)));\n }\n }\n }\n \n- if (newSnapshots != null) {\n- survivors.putAll(newSnapshots);\n- }\n-\n // Update the list of snapshots that we saw and tried to started\n // If startup of these shards fails later, we don't want to try starting these shards again\n shutdownLock.lock();\n@@ -697,10 +815,10 @@ private void processIndexShardSnapshots(SnapshotMetaData snapshotMetaData) {\n shutdownLock.unlock();\n }\n \n- // We have new snapshots to process -\n- if (newSnapshots != null) {\n- for (final Map.Entry<SnapshotId, SnapshotShards> entry : newSnapshots.entrySet()) {\n- 
for (final Map.Entry<ShardId, IndexShardSnapshotStatus> shardEntry : entry.getValue().shards.entrySet()) {\n+ // We have new shards to starts\n+ if (!newSnapshots.isEmpty()) {\n+ for (final Map.Entry<SnapshotId, Map<ShardId, IndexShardSnapshotStatus>> entry : newSnapshots.entrySet()) {\n+ for (final Map.Entry<ShardId, IndexShardSnapshotStatus> shardEntry : entry.getValue().entrySet()) {\n try {\n final IndexShardSnapshotAndRestoreService shardSnapshotService = indicesService.indexServiceSafe(shardEntry.getKey().getIndex()).shardInjectorSafe(shardEntry.getKey().id())\n .getInstance(IndexShardSnapshotAndRestoreService.class);\n@@ -1089,6 +1207,9 @@ private ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards(Snaps\n ShardRouting primary = indexRoutingTable.shard(i).primaryShard();\n if (primary == null || !primary.assignedToNode()) {\n builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"primary shard is not allocated\"));\n+ } else if (clusterState.getNodes().smallestVersion().onOrAfter(Version.V_1_2_0) && (primary.relocating() || primary.initializing())) {\n+ // The WAITING state was introduced in V1.2.0 - don't use it if there are nodes with older version in the cluster\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.WAITING));\n } else if (!primary.started()) {\n builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.MISSING, \"primary shard hasn't been started yet\"));\n } else {",
"filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -171,7 +171,6 @@ public void testSpecifiedIndexUnavailable() throws Exception {\n @Test\n public void testSpecifiedIndexUnavailable_snapshotRestore() throws Exception {\n createIndex(\"test1\");\n- ensureYellow();\n \n PutRepositoryResponse putRepositoryResponse = client().admin().cluster().preparePutRepository(\"dummy-repo\")\n .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder().put(\"location\", newTempDir())).get();\n@@ -327,7 +326,6 @@ public void testWildcardBehaviour() throws Exception {\n @Test\n public void testWildcardBehaviour_snapshotRestore() throws Exception {\n createIndex(\"foobar\");\n- ensureYellow();\n \n PutRepositoryResponse putRepositoryResponse = client().admin().cluster().preparePutRepository(\"dummy-repo\")\n .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder().put(\"location\", newTempDir())).get();",
"filename": "src/test/java/org/elasticsearch/indices/IndicesOptionsIntegrationTests.java",
"status": "modified"
},
{
"diff": "@@ -20,22 +20,30 @@\n package org.elasticsearch.snapshots;\n \n import com.carrotsearch.randomizedtesting.LifecycleScope;\n+import com.google.common.util.concurrent.ListenableFuture;\n import org.elasticsearch.action.ListenableActionFuture;\n import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotStatus;\n+import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.store.support.AbstractIndexStore;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.test.store.MockDirectoryHelper;\n import org.elasticsearch.threadpool.ThreadPool;\n+import org.junit.Ignore;\n import org.junit.Test;\n \n import java.util.ArrayList;\n+import java.util.List;\n+import java.util.concurrent.CopyOnWriteArrayList;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n@@ -261,6 +269,145 @@ public void restoreIndexWithMissingShards() throws Exception {\n ensureGreen(\"test-idx-2\");\n \n assertThat(client().prepareCount(\"test-idx-2\").get().getCount(), equalTo(100L));\n+ }\n+\n+ @Test\n+ @TestLogging(\"snapshots:TRACE,repositories:TRACE\")\n+ @Ignore\n+ public void chaosSnapshotTest() throws Exception {\n+ final List<String> indices = new CopyOnWriteArrayList<>();\n+ Settings settings = settingsBuilder().put(\"action.write_consistency\", \"one\").build();\n+ int initialNodes = between(1, 3);\n+ logger.info(\"--> start {} nodes\", initialNodes);\n+ for (int i = 0; i < initialNodes; i++) {\n+ cluster().startNode(settings);\n+ }\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client().admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE))\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ int initialIndices = between(1, 3);\n+ logger.info(\"--> create {} indices\", initialIndices);\n+ for (int i = 0; i < initialIndices; i++) {\n+ createTestIndex(\"test-\" + i);\n+ indices.add(\"test-\" + i);\n+ }\n+\n+ int asyncNodes = between(0, 5);\n+ logger.info(\"--> start {} additional nodes asynchronously\", asyncNodes);\n+ ListenableFuture<List<String>> asyncNodesFuture = cluster().startNodesAsync(asyncNodes, settings);\n+\n+ int asyncIndices = between(0, 10);\n+ logger.info(\"--> create {} additional indices asynchronously\", asyncIndices);\n+ Thread[] asyncIndexThreads = new Thread[asyncIndices];\n+ for (int i = 0; i < asyncIndices; i++) {\n+ final int cur = i;\n+ asyncIndexThreads[i] = new Thread(new Runnable() {\n+ @Override\n+ public void run() {\n+ createTestIndex(\"test-async-\" + cur);\n+ indices.add(\"test-async-\" + cur);\n+\n+ }\n+ });\n+ 
asyncIndexThreads[i].start();\n+ }\n+\n+ logger.info(\"--> snapshot\");\n+\n+ ListenableActionFuture<CreateSnapshotResponse> snapshotResponseFuture = client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).setIndices(\"test-*\").setPartial(true).execute();\n+\n+ long start = System.currentTimeMillis();\n+ // Produce chaos for 30 sec or until snapshot is done whatever comes first\n+ int randomIndices = 0;\n+ while (System.currentTimeMillis() - start < 30000 && !snapshotIsDone(\"test-repo\", \"test-snap\")) {\n+ Thread.sleep(100);\n+ int chaosType = randomInt(10);\n+ if (chaosType < 4) {\n+ // Randomly delete an index\n+ if (indices.size() > 0) {\n+ String index = indices.remove(randomInt(indices.size() - 1));\n+ logger.info(\"--> deleting random index [{}]\", index);\n+ cluster().wipeIndices(index);\n+ }\n+ } else if (chaosType < 6) {\n+ // Randomly shutdown a node\n+ if (cluster().size() > 1) {\n+ logger.info(\"--> shutting down random node\");\n+ cluster().stopRandomDataNode();\n+ }\n+ } else if (chaosType < 8) {\n+ // Randomly create an index\n+ String index = \"test-rand-\" + randomIndices;\n+ logger.info(\"--> creating random index [{}]\", index);\n+ createTestIndex(index);\n+ randomIndices++;\n+ } else {\n+ // Take a break\n+ logger.info(\"--> noop\");\n+ }\n+ }\n+\n+ logger.info(\"--> waiting for async indices creation to finish\");\n+ for (int i = 0; i < asyncIndices; i++) {\n+ asyncIndexThreads[i].join();\n+ }\n+\n+ logger.info(\"--> update index settings to back to normal\");\n+ assertAcked(client().admin().indices().prepareUpdateSettings(\"test-*\").setSettings(ImmutableSettings.builder()\n+ .put(AbstractIndexStore.INDEX_STORE_THROTTLE_TYPE, \"node\")\n+ ));\n+\n+ // Make sure that snapshot finished - doesn't matter if it failed or succeeded\n+ try {\n+ CreateSnapshotResponse snapshotResponse = snapshotResponseFuture.get();\n+ SnapshotInfo snapshotInfo = snapshotResponse.getSnapshotInfo();\n+ assertNotNull(snapshotInfo);\n+ logger.info(\"--> snapshot is done with state [{}], total shards [{}], successful shards [{}]\", snapshotInfo.state(), snapshotInfo.totalShards(), snapshotInfo.successfulShards());\n+ } catch (Exception ex) {\n+ logger.info(\"--> snapshot didn't start properly\", ex);\n+ }\n+\n+ asyncNodesFuture.get();\n+ logger.info(\"--> done\");\n+ }\n+\n+ private boolean snapshotIsDone(String repository, String snapshot) {\n+ try {\n+ SnapshotsStatusResponse snapshotsStatusResponse = client().admin().cluster().prepareSnapshotStatus(repository).setSnapshots(snapshot).get();\n+ if (snapshotsStatusResponse.getSnapshots().isEmpty()) {\n+ return false;\n+ }\n+ for (SnapshotStatus snapshotStatus : snapshotsStatusResponse.getSnapshots()) {\n+ if (snapshotStatus.getState().completed()) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ } catch (SnapshotMissingException ex) {\n+ return false;\n+ }\n+ }\n+\n+ private void createTestIndex(String name) {\n+ assertAcked(prepareCreate(name, 0, settingsBuilder().put(\"number_of_shards\", between(1, 6))\n+ .put(\"number_of_replicas\", between(1, 6))\n+ .put(MockDirectoryHelper.RANDOM_NO_DELETE_OPEN_FILE, false)));\n+\n+ ensureYellow(name);\n+\n+ logger.info(\"--> indexing some data into {}\", name);\n+ for (int i = 0; i < between(10, 500); i++) {\n+ index(name, \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ }\n \n+ assertAcked(client().admin().indices().prepareUpdateSettings(name).setSettings(ImmutableSettings.builder()\n+ .put(AbstractIndexStore.INDEX_STORE_THROTTLE_TYPE, 
\"all\")\n+ .put(AbstractIndexStore.INDEX_STORE_THROTTLE_MAX_BYTES_PER_SEC, between(100, 50000))\n+ ));\n }\n }",
"filename": "src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java",
"status": "modified"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.snapshots;\n \n import com.carrotsearch.randomizedtesting.LifecycleScope;\n+import com.google.common.base.Predicate;\n import com.google.common.collect.ImmutableList;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n import org.elasticsearch.ExceptionsHelper;\n@@ -43,6 +44,7 @@\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.index.store.support.AbstractIndexStore;\n import org.elasticsearch.indices.InvalidIndexNameException;\n import org.elasticsearch.repositories.RepositoriesService;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n@@ -51,6 +53,7 @@\n import org.junit.Test;\n \n import java.io.File;\n+import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.cluster.metadata.IndexMetaData.*;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n@@ -1045,7 +1048,7 @@ public void snapshotStatusTest() throws Exception {\n assertThat(snapshotStatus.getState(), equalTo(SnapshotMetaData.State.STARTED));\n // We blocked the node during data write operation, so at least one shard snapshot should be in STARTED stage\n assertThat(snapshotStatus.getShardsStats().getStartedShards(), greaterThan(0));\n- for( SnapshotIndexShardStatus shardStatus : snapshotStatus.getIndices().get(\"test-idx\")) {\n+ for (SnapshotIndexShardStatus shardStatus : snapshotStatus.getIndices().get(\"test-idx\")) {\n if (shardStatus.getStage() == SnapshotIndexShardStage.STARTED) {\n assertThat(shardStatus.getNodeId(), notNullValue());\n }\n@@ -1058,7 +1061,7 @@ public void snapshotStatusTest() throws Exception {\n assertThat(snapshotStatus.getState(), equalTo(SnapshotMetaData.State.STARTED));\n // We blocked the node during data write operation, so at least one shard snapshot should be in STARTED stage\n assertThat(snapshotStatus.getShardsStats().getStartedShards(), greaterThan(0));\n- for( SnapshotIndexShardStatus shardStatus : snapshotStatus.getIndices().get(\"test-idx\")) {\n+ for (SnapshotIndexShardStatus shardStatus : snapshotStatus.getIndices().get(\"test-idx\")) {\n if (shardStatus.getStage() == SnapshotIndexShardStage.STARTED) {\n assertThat(shardStatus.getNodeId(), notNullValue());\n }\n@@ -1093,17 +1096,72 @@ public void snapshotStatusTest() throws Exception {\n } catch (SnapshotMissingException ex) {\n // Expected\n }\n+ }\n+\n \n+ @Test\n+ public void snapshotRelocatingPrimary() throws Exception {\n+ Client client = client();\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE))\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))));\n+\n+ // Create index on 1 nodes and make sure each node has a primary by setting no replicas\n+ assertAcked(prepareCreate(\"test-idx\", 1, ImmutableSettings.builder().put(\"number_of_replicas\", 0)));\n+\n+ logger.info(\"--> indexing some data\");\n+ for (int i = 0; i < 100; i++) {\n+ index(\"test-idx\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ }\n+ refresh();\n+ assertThat(client.prepareCount(\"test-idx\").get().getCount(), equalTo(100L));\n+\n+ // Update settings to make sure that relocation is slow so we can start snapshot before relocation is finished\n+ 
assertAcked(client.admin().indices().prepareUpdateSettings(\"test-idx\").setSettings(ImmutableSettings.builder()\n+ .put(AbstractIndexStore.INDEX_STORE_THROTTLE_TYPE, \"all\")\n+ .put(AbstractIndexStore.INDEX_STORE_THROTTLE_MAX_BYTES_PER_SEC, 100)\n+ ));\n+\n+ logger.info(\"--> start relocations\");\n+ allowNodes(\"test-idx\", cluster().numDataNodes());\n+\n+ logger.info(\"--> wait for relocations to start\");\n+\n+ waitForRelocationsToStart(\"test-idx\", TimeValue.timeValueMillis(300));\n+\n+ logger.info(\"--> snapshot\");\n+ client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(false).setIndices(\"test-idx\").get();\n+\n+ // Update settings to back to normal\n+ assertAcked(client.admin().indices().prepareUpdateSettings(\"test-idx\").setSettings(ImmutableSettings.builder()\n+ .put(AbstractIndexStore.INDEX_STORE_THROTTLE_TYPE, \"node\")\n+ ));\n+\n+ logger.info(\"--> wait for snapshot to complete\");\n+ SnapshotInfo snapshotInfo = waitForCompletion(\"test-repo\", \"test-snap\", TimeValue.timeValueSeconds(600));\n+ assertThat(snapshotInfo.state(), equalTo(SnapshotState.SUCCESS));\n+ assertThat(snapshotInfo.shardFailures().size(), equalTo(0));\n+ logger.info(\"--> done\");\n }\n \n- private boolean waitForIndex(String index, TimeValue timeout) throws InterruptedException {\n- long start = System.currentTimeMillis();\n- while (System.currentTimeMillis() - start < timeout.millis()) {\n- if (client().admin().indices().prepareExists(index).execute().actionGet().isExists()) {\n- return true;\n+ private boolean waitForIndex(final String index, TimeValue timeout) throws InterruptedException {\n+ return awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object o) {\n+ return client().admin().indices().prepareExists(index).execute().actionGet().isExists();\n }\n- Thread.sleep(100);\n- }\n- return false;\n+ }, timeout.millis(), TimeUnit.MILLISECONDS);\n+ }\n+\n+ private boolean waitForRelocationsToStart(final String index, TimeValue timeout) throws InterruptedException {\n+ return awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object o) {\n+ return client().admin().cluster().prepareHealth(index).execute().actionGet().getRelocatingShards() > 0;\n+ }\n+ }, timeout.millis(), TimeUnit.MILLISECONDS);\n }\n }",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
{
"body": "The recovery API sometimes reports percentages > 100 for bytes recovered. This is confusing and makes no sense. \n",
"comments": [
{
"body": "http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cat-recovery.html#big-percent\n",
"created_at": "2014-05-09T21:34:21Z"
},
{
"body": "Hi @nik9000. Indeed the docs explain this behavior, so technically perhaps it is not a bug. That being said, it has been confusing a lot of people. I think reporting non-intuitive numbers like this detracts from confidence in the correctness of all the other measurements. \n\nDo you have a use-case that relies on the current behavior?\n",
"created_at": "2014-05-09T21:41:39Z"
}
],
"number": 6113,
"title": "Percent bytes recovered greater than 100%"
} | {
"body": "The recovery API was sometimes misreporting the recovered byte\npercentages of index files. This was caused by summing up total file\nlengths on each file chunk transfer. It should have been summing the\nlengths of each transfer request.\n\nCloses #6113\n",
"number": 6138,
"review_comments": [],
"title": "Fix recovery percentage > 100%"
} | {
"commits": [
{
"message": "Fix recovery percentage > 100%\n\nThe recovery API was sometimes misreporting the recovered byte\npercentages of index files. This was caused by summing up total file\nlengths on each file chunk transfer. It should have been summing the\nlengths of each transfer request.\n\nCloses #6113"
}
],
"files": [
{
"diff": "@@ -658,6 +658,10 @@ public float percentFilesRecovered(int numberRecovered) {\n }\n }\n \n+ public float percentFilesRecovered() {\n+ return percentFilesRecovered(recoveredFileCount.get());\n+ }\n+\n public int numberOfRecoveredFiles() {\n return totalFileCount - reusedFileCount;\n }\n@@ -699,6 +703,10 @@ public float percentBytesRecovered(long numberRecovered) {\n }\n }\n \n+ public float percentBytesRecovered() {\n+ return percentBytesRecovered(recoveredByteCount.get());\n+ }\n+\n public int reusedFileCount() {\n return reusedFileCount;\n }",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryState.java",
"status": "modified"
},
{
"diff": "@@ -635,7 +635,7 @@ public void messageReceived(final RecoveryFileChunkRequest request, TransportCha\n content = content.toBytesArray();\n }\n indexOutput.writeBytes(content.array(), content.arrayOffset(), content.length());\n- onGoingRecovery.recoveryState.getIndex().addRecoveredByteCount(request.length());\n+ onGoingRecovery.recoveryState.getIndex().addRecoveredByteCount(content.length());\n RecoveryState.File file = onGoingRecovery.recoveryState.getIndex().file(request.name());\n if (file != null) {\n file.updateRecovered(request.length());",
"filename": "src/main/java/org/elasticsearch/indices/recovery/RecoveryTarget.java",
"status": "modified"
},
{
"diff": "@@ -25,6 +25,7 @@\n import org.elasticsearch.action.admin.indices.recovery.RecoveryResponse;\n import org.elasticsearch.action.admin.indices.recovery.ShardRecoveryResponse;\n import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.index.shard.ShardId;\n@@ -36,12 +37,12 @@\n import java.util.ArrayList;\n import java.util.List;\n import java.util.Map;\n+import java.util.concurrent.ExecutionException;\n \n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.elasticsearch.test.ElasticsearchIntegrationTest.*;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.hamcrest.Matchers.equalTo;\n-import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.*;\n \n /**\n *\n@@ -50,10 +51,12 @@\n public class IndexRecoveryTests extends ElasticsearchIntegrationTest {\n \n private static final String INDEX_NAME = \"test-idx-1\";\n+ private static final String INDEX_TYPE = \"test-type-1\";\n private static final String REPO_NAME = \"test-repo-1\";\n private static final String SNAP_NAME = \"test-snap-1\";\n \n- private static final int DOC_COUNT = 100;\n+ private static final int MIN_DOC_COUNT = 500;\n+ private static final int MAX_DOC_COUNT = 1000;\n private static final int SHARD_COUNT = 1;\n private static final int REPLICA_COUNT = 0;\n \n@@ -84,6 +87,8 @@ public void gatewayRecoveryTest() throws Exception {\n assertThat(node, equalTo(state.getSourceNode().getName()));\n assertThat(node, equalTo(state.getTargetNode().getName()));\n assertNull(state.getRestoreSource());\n+\n+ validateIndexRecoveryState(state.getIndex());\n }\n \n @Test\n@@ -141,6 +146,7 @@ public void replicaRecoveryTest() throws Exception {\n assertThat(nodeAShardResponse.recoveryState().getTargetNode().getName(), equalTo(nodeA));\n assertThat(nodeAShardResponse.recoveryState().getType(), equalTo(RecoveryState.Type.GATEWAY));\n assertThat(nodeAShardResponse.recoveryState().getStage(), equalTo(RecoveryState.Stage.DONE));\n+ validateIndexRecoveryState(nodeAShardResponse.recoveryState().getIndex());\n \n // validate node B recovery\n ShardRecoveryResponse nodeBShardResponse = nodeBResponses.get(0);\n@@ -149,6 +155,7 @@ public void replicaRecoveryTest() throws Exception {\n assertThat(nodeBShardResponse.recoveryState().getTargetNode().getName(), equalTo(nodeB));\n assertThat(nodeBShardResponse.recoveryState().getType(), equalTo(RecoveryState.Type.REPLICA));\n assertThat(nodeBShardResponse.recoveryState().getStage(), equalTo(RecoveryState.Stage.DONE));\n+ validateIndexRecoveryState(nodeBShardResponse.recoveryState().getIndex());\n }\n \n @Test\n@@ -184,6 +191,7 @@ public void rerouteRecoveryTest() throws Exception {\n assertThat(nodeA, equalTo(state.getSourceNode().getName()));\n assertThat(nodeB, equalTo(state.getTargetNode().getName()));\n assertNull(state.getRestoreSource());\n+ validateIndexRecoveryState(state.getIndex());\n }\n \n @Test\n@@ -194,8 +202,8 @@ public void snapshotRecoveryTest() throws Exception {\n logger.info(\"--> create repository\");\n assertAcked(client().admin().cluster().preparePutRepository(REPO_NAME)\n .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n- .put(\"location\", newTempDir(LifecycleScope.SUITE))\n- 
.put(\"compress\", false)\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE))\n+ .put(\"compress\", false)\n ).get());\n \n ensureGreen();\n@@ -237,6 +245,7 @@ public void snapshotRecoveryTest() throws Exception {\n assertThat(shardResponse.recoveryState().getStage(), equalTo(RecoveryState.Stage.DONE));\n assertNotNull(shardResponse.recoveryState().getRestoreSource());\n assertThat(shardResponse.recoveryState().getTargetNode().getName(), equalTo(nodeA));\n+ validateIndexRecoveryState(shardResponse.recoveryState().getIndex());\n }\n }\n }\n@@ -251,18 +260,36 @@ private List<ShardRecoveryResponse> findRecoveriesForTargetNode(String nodeName,\n return nodeResponses;\n }\n \n- private IndicesStatsResponse createAndPopulateIndex(String name, int nodeCount, int shardCount, int replicaCount) {\n+ private IndicesStatsResponse createAndPopulateIndex(String name, int nodeCount, int shardCount, int replicaCount)\n+ throws ExecutionException, InterruptedException {\n+\n logger.info(\"--> creating test index: {}\", name);\n assertAcked(prepareCreate(name, nodeCount, settingsBuilder().put(\"number_of_shards\", shardCount)\n .put(\"number_of_replicas\", replicaCount)));\n ensureGreen();\n \n logger.info(\"--> indexing sample data\");\n- for (int i = 0; i < DOC_COUNT; i++) {\n- index(INDEX_NAME, \"x\", Integer.toString(i), \"foo-\" + i, \"bar-\" + i);\n+ final int numDocs = between(MIN_DOC_COUNT, MAX_DOC_COUNT);\n+ final IndexRequestBuilder[] docs = new IndexRequestBuilder[numDocs];\n+\n+ for (int i = 0; i < numDocs; i++) {\n+ docs[i] = client().prepareIndex(INDEX_NAME, INDEX_TYPE).\n+ setSource(\"foo-int-\" + i, randomInt(),\n+ \"foo-string-\" + i, randomAsciiOfLength(32),\n+ \"foo-float-\" + i, randomFloat());\n }\n- refresh();\n- assertThat(client().prepareCount(INDEX_NAME).get().getCount(), equalTo((long) DOC_COUNT));\n+\n+ indexRandom(true, docs);\n+ flush();\n+ assertThat(client().prepareCount(INDEX_NAME).get().getCount(), equalTo((long) numDocs));\n return client().admin().indices().prepareStats(INDEX_NAME).execute().actionGet();\n }\n+\n+ private void validateIndexRecoveryState(RecoveryState.Index indexState) {\n+ assertThat(indexState.time(), greaterThanOrEqualTo(0L));\n+ assertThat(indexState.percentFilesRecovered(), greaterThanOrEqualTo(0.0f));\n+ assertThat(indexState.percentFilesRecovered(), lessThanOrEqualTo(100.0f));\n+ assertThat(indexState.percentBytesRecovered(), greaterThanOrEqualTo(0.0f));\n+ assertThat(indexState.percentBytesRecovered(), lessThanOrEqualTo(100.0f));\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/indices/recovery/IndexRecoveryTests.java",
"status": "modified"
}
]
} |
{
"body": "The problem is that these two aggregations use an heuristic to compute the default value of `shard_size` that depends on the number of shards that the request targets, yet `PercolateContext.numberOfShards` throws an `UnsupportedOperationException`.\n",
"comments": [],
"number": 6037,
"title": "Percolator: Allow significant terms and geo hash grid aggregations in the percolator"
} | {
"body": "PR for #6037\n",
"number": 6123,
"review_comments": [
{
"body": "Should it just pass `shardIt.size()` instead of `shardIt`? I'mworried about letting implementations of this method reset or consume the iterator?\n",
"created_at": "2014-05-13T09:54:12Z"
},
{
"body": "Right, a subclass should change the `shardIt`, so I'll change it `int numShards` instead.\n",
"created_at": "2014-05-15T21:57:34Z"
}
],
"title": "Add num_of_shards statistic to percolate context"
} | {
"commits": [
{
"message": "Add number of shards statistic to PercolateContext instead of throwing exception.\n\nCertain features like significant_terms aggregation rely on this statistic for sizing heuristics.\n\nCloses #6037"
},
{
"message": "Pass shardIt.size() instead of shardIt"
}
],
"files": [
{
"diff": "@@ -107,7 +107,7 @@ protected ShardClearIndicesCacheRequest newShardRequest() {\n }\n \n @Override\n- protected ShardClearIndicesCacheRequest newShardRequest(ShardRouting shard, ClearIndicesCacheRequest request) {\n+ protected ShardClearIndicesCacheRequest newShardRequest(int numShards, ShardRouting shard, ClearIndicesCacheRequest request) {\n return new ShardClearIndicesCacheRequest(shard.index(), shard.id(), request);\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/cache/clear/TransportClearIndicesCacheAction.java",
"status": "modified"
},
{
"diff": "@@ -99,7 +99,7 @@ protected ShardFlushRequest newShardRequest() {\n }\n \n @Override\n- protected ShardFlushRequest newShardRequest(ShardRouting shard, FlushRequest request) {\n+ protected ShardFlushRequest newShardRequest(int numShards, ShardRouting shard, FlushRequest request) {\n return new ShardFlushRequest(shard.index(), shard.id(), request);\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java",
"status": "modified"
},
{
"diff": "@@ -100,7 +100,7 @@ protected ShardOptimizeRequest newShardRequest() {\n }\n \n @Override\n- protected ShardOptimizeRequest newShardRequest(ShardRouting shard, OptimizeRequest request) {\n+ protected ShardOptimizeRequest newShardRequest(int numShards, ShardRouting shard, OptimizeRequest request) {\n return new ShardOptimizeRequest(shard.index(), shard.id(), request);\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/optimize/TransportOptimizeAction.java",
"status": "modified"
},
{
"diff": "@@ -136,7 +136,7 @@ protected ShardRecoveryRequest newShardRequest() {\n }\n \n @Override\n- protected ShardRecoveryRequest newShardRequest(ShardRouting shard, RecoveryRequest request) {\n+ protected ShardRecoveryRequest newShardRequest(int numShards, ShardRouting shard, RecoveryRequest request) {\n return new ShardRecoveryRequest(shard.index(), shard.id(), request);\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/recovery/TransportRecoveryAction.java",
"status": "modified"
},
{
"diff": "@@ -100,7 +100,7 @@ protected ShardRefreshRequest newShardRequest() {\n }\n \n @Override\n- protected ShardRefreshRequest newShardRequest(ShardRouting shard, RefreshRequest request) {\n+ protected ShardRefreshRequest newShardRequest(int numShards, ShardRouting shard, RefreshRequest request) {\n return new ShardRefreshRequest(shard.index(), shard.id(), request);\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportRefreshAction.java",
"status": "modified"
},
{
"diff": "@@ -124,7 +124,7 @@ protected IndexShardSegmentRequest newShardRequest() {\n }\n \n @Override\n- protected IndexShardSegmentRequest newShardRequest(ShardRouting shard, IndicesSegmentsRequest request) {\n+ protected IndexShardSegmentRequest newShardRequest(int numShards, ShardRouting shard, IndicesSegmentsRequest request) {\n return new IndexShardSegmentRequest(shard.index(), shard.id(), request);\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/segments/TransportIndicesSegmentsAction.java",
"status": "modified"
},
{
"diff": "@@ -125,7 +125,7 @@ protected IndexShardStatsRequest newShardRequest() {\n }\n \n @Override\n- protected IndexShardStatsRequest newShardRequest(ShardRouting shard, IndicesStatsRequest request) {\n+ protected IndexShardStatsRequest newShardRequest(int numShards, ShardRouting shard, IndicesStatsRequest request) {\n return new IndexShardStatsRequest(shard.index(), shard.id(), request);\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/stats/TransportIndicesStatsAction.java",
"status": "modified"
},
{
"diff": "@@ -108,7 +108,7 @@ protected ShardValidateQueryRequest newShardRequest() {\n }\n \n @Override\n- protected ShardValidateQueryRequest newShardRequest(ShardRouting shard, ValidateQueryRequest request) {\n+ protected ShardValidateQueryRequest newShardRequest(int numShards, ShardRouting shard, ValidateQueryRequest request) {\n String[] filteringAliases = clusterService.state().metaData().filteringAliases(shard.index(), request.indices());\n return new ShardValidateQueryRequest(shard.index(), shard.id(), filteringAliases, request);\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java",
"status": "modified"
},
{
"diff": "@@ -112,7 +112,7 @@ protected ShardCountRequest newShardRequest() {\n }\n \n @Override\n- protected ShardCountRequest newShardRequest(ShardRouting shard, CountRequest request) {\n+ protected ShardCountRequest newShardRequest(int numShards, ShardRouting shard, CountRequest request) {\n String[] filteringAliases = clusterService.state().metaData().filteringAliases(shard.index(), request.indices());\n return new ShardCountRequest(shard.index(), shard.id(), filteringAliases, request);\n }",
"filename": "src/main/java/org/elasticsearch/action/count/TransportCountAction.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.action.percolate;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.support.broadcast.BroadcastShardOperationRequest;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.stream.StreamInput;\n@@ -35,6 +36,7 @@ public class PercolateShardRequest extends BroadcastShardOperationRequest {\n private BytesReference source;\n private BytesReference docSource;\n private boolean onlyCount;\n+ private int numberOfShards;\n \n public PercolateShardRequest() {\n }\n@@ -43,12 +45,13 @@ public PercolateShardRequest(String index, int shardId) {\n super(index, shardId);\n }\n \n- public PercolateShardRequest(String index, int shardId, PercolateRequest request) {\n+ public PercolateShardRequest(String index, int shardId, int numberOfShards, PercolateRequest request) {\n super(index, shardId, request);\n this.documentType = request.documentType();\n this.source = request.source();\n this.docSource = request.docSource();\n this.onlyCount = request.onlyCount();\n+ this.numberOfShards = numberOfShards;\n }\n \n public PercolateShardRequest(ShardId shardId, PercolateRequest request) {\n@@ -91,13 +94,20 @@ void onlyCount(boolean onlyCount) {\n this.onlyCount = onlyCount;\n }\n \n+ public int getNumberOfShards() {\n+ return numberOfShards;\n+ }\n+\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n documentType = in.readString();\n source = in.readBytesReference();\n docSource = in.readBytesReference();\n onlyCount = in.readBoolean();\n+ if (in.getVersion().onOrAfter(Version.V_1_2_0)) {\n+ numberOfShards = in.readVInt();\n+ }\n }\n \n @Override\n@@ -107,6 +117,9 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBytesReference(source);\n out.writeBytesReference(docSource);\n out.writeBoolean(onlyCount);\n+ if (out.getVersion().onOrAfter(Version.V_1_2_0)) {\n+ out.writeVInt(numberOfShards);\n+ }\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/action/percolate/PercolateShardRequest.java",
"status": "modified"
},
{
"diff": "@@ -173,8 +173,8 @@ protected PercolateShardRequest newShardRequest() {\n }\n \n @Override\n- protected PercolateShardRequest newShardRequest(ShardRouting shard, PercolateRequest request) {\n- return new PercolateShardRequest(shard.index(), shard.id(), request);\n+ protected PercolateShardRequest newShardRequest(int numShards, ShardRouting shard, PercolateRequest request) {\n+ return new PercolateShardRequest(shard.index(), shard.id(), numShards, request);\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/action/percolate/TransportPercolateAction.java",
"status": "modified"
},
{
"diff": "@@ -93,7 +93,7 @@ protected ShardSuggestRequest newShardRequest() {\n }\n \n @Override\n- protected ShardSuggestRequest newShardRequest(ShardRouting shard, SuggestRequest request) {\n+ protected ShardSuggestRequest newShardRequest(int numShards, ShardRouting shard, SuggestRequest request) {\n return new ShardSuggestRequest(shard.index(), shard.id(), request);\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/suggest/TransportSuggestAction.java",
"status": "modified"
},
{
"diff": "@@ -83,7 +83,7 @@ protected void doExecute(Request request, ActionListener<Response> listener) {\n \n protected abstract ShardRequest newShardRequest();\n \n- protected abstract ShardRequest newShardRequest(ShardRouting shard, Request request);\n+ protected abstract ShardRequest newShardRequest(int numShards, ShardRouting shard, Request request);\n \n protected abstract ShardResponse newShardResponse();\n \n@@ -161,7 +161,7 @@ void performOperation(final ShardIterator shardIt, final ShardRouting shard, fin\n onOperation(null, shardIt, shardIndex, new NoShardAvailableActionException(shardIt.shardId()));\n } else {\n try {\n- final ShardRequest shardRequest = newShardRequest(shard, request);\n+ final ShardRequest shardRequest = newShardRequest(shardIt.size(), shard, request);\n if (shard.currentNodeId().equals(nodes.localNodeId())) {\n threadPool.executor(executor).execute(new Runnable() {\n @Override",
"filename": "src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastOperationAction.java",
"status": "modified"
},
{
"diff": "@@ -93,6 +93,7 @@ public class PercolateContext extends SearchContext {\n private final BigArrays bigArrays;\n private final ScriptService scriptService;\n private final ConcurrentMap<HashedBytesRef, Query> percolateQueries;\n+ private final int numberOfShards;\n private String[] types;\n \n private Engine.Searcher docSearcher;\n@@ -127,6 +128,7 @@ public PercolateContext(PercolateShardRequest request, SearchShardTarget searchS\n this.engineSearcher = indexShard.acquireSearcher(\"percolate\");\n this.searcher = new ContextIndexSearcher(this, engineSearcher);\n this.scriptService = scriptService;\n+ this.numberOfShards = request.getNumberOfShards();\n }\n \n public IndexSearcher docSearcher() {\n@@ -327,7 +329,7 @@ public SearchContext searchType(SearchType searchType) {\n \n @Override\n public int numberOfShards() {\n- throw new UnsupportedOperationException();\n+ return numberOfShards;\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/percolator/PercolateContext.java",
"status": "modified"
},
{
"diff": "@@ -37,6 +37,7 @@\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertMatchCount;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.hamcrest.Matchers.arrayWithSize;\n import static org.hamcrest.Matchers.equalTo;\n \n@@ -122,4 +123,16 @@ public void testFacetsAndAggregations() throws Exception {\n }\n }\n \n+ @Test\n+ public void testSignificantAggs() throws Exception {\n+ client().admin().indices().prepareCreate(\"test\").execute().actionGet();\n+ ensureGreen();\n+ PercolateRequestBuilder percolateRequestBuilder = client().preparePercolate()\n+ .setIndices(\"test\").setDocumentType(\"type\")\n+ .setPercolateDoc(docBuilder().setDoc(jsonBuilder().startObject().field(\"field1\", \"value\").endObject()))\n+ .addAggregation(AggregationBuilders.significantTerms(\"a\").field(\"field2\"));\n+ PercolateResponse response = percolateRequestBuilder.get();\n+ assertNoFailures(response);\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/percolator/PercolatorFacetsAndAggregationsTests.java",
"status": "modified"
}
]
} |
{
"body": "Steps to reproduce:\n\n```\nPUT twitter\n\nGET twitter/tweet/_validate/query?explain\n{\n \"query\" : {\n \"term\" : {\"field\":\"value\"} \n }\n}\n\n{\n \"valid\": true,\n \"_shards\": {\n \"total\": 1,\n \"successful\": 1,\n \"failed\": 0\n },\n \"explanations\": [\n {\n \"index\": \"twitter\",\n \"valid\": true,\n \"explanation\": \"field:value\"\n }\n ]\n}\n```\n\nThe resulting above explanation should be wrapped into a filtered query `_type:tweet` but it isn't.\n",
"comments": [],
"number": 6116,
"title": "Validate query ignores type filter"
} | {
"body": "Made sure that a match_all query is used when no query is specified and ensure no NPE is thrown either.\nAlso used the same code path as the search api to ensure that alias filters are taken into account, same for type filters.\n\nCloses #6111 Closes #6112 Closes #6116\n",
"number": 6114,
"review_comments": [],
"title": "Fixed validate query parsing issues"
} | {
"commits": [
{
"message": "Fixed validate query parsing issues\n\nMade sure that a match_all query is used when no query is specified and ensure no NPE is thrown either.\nAlso used the same code path as the search api to ensure that alias filters are taken into account, same for type filters.\n\nCloses #6111 Closes #6112 Closes #6116"
}
],
"files": [
{
"diff": "@@ -25,3 +25,12 @@\n \n - is_false: valid\n \n+ - do:\n+ indices.validate_query:\n+ explain: true\n+\n+ - is_true: valid\n+ - match: {_shards.failed: 0}\n+ - match: {explanations.0.index: 'testing'}\n+ - match: {explanations.0.explanation: 'ConstantScore(*:*)'}\n+",
"filename": "rest-api-spec/test/indices.validate_query/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -37,7 +37,6 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.query.IndexQueryParserService;\n-import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.query.QueryParsingException;\n import org.elasticsearch.index.service.IndexService;\n import org.elasticsearch.index.shard.service.IndexShard;\n@@ -182,30 +181,35 @@ protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest re\n boolean valid;\n String explanation = null;\n String error = null;\n- if (request.source().length() == 0) {\n+\n+ DefaultSearchContext searchContext = new DefaultSearchContext(0,\n+ new ShardSearchRequest().types(request.types()).nowInMillis(request.nowInMillis())\n+ .filteringAliases(request.filteringAliases()),\n+ null, indexShard.acquireSearcher(\"validate_query\"), indexService, indexShard,\n+ scriptService, cacheRecycler, pageCacheRecycler, bigArrays\n+ );\n+ SearchContext.setCurrent(searchContext);\n+ try {\n+ if (request.source() != null && request.source().length() > 0) {\n+ searchContext.parsedQuery(queryParserService.parseQuery(request.source()));\n+ }\n+ searchContext.preProcess();\n+\n valid = true;\n- } else {\n- SearchContext.setCurrent(new DefaultSearchContext(0,\n- new ShardSearchRequest().types(request.types()).nowInMillis(request.nowInMillis()),\n- null, indexShard.acquireSearcher(\"validate_query\"), indexService, indexShard,\n- scriptService, cacheRecycler, pageCacheRecycler, bigArrays));\n- try {\n- ParsedQuery parsedQuery = queryParserService.parseQuery(request.source());\n- valid = true;\n- if (request.explain()) {\n- explanation = parsedQuery.query().toString();\n- }\n- } catch (QueryParsingException e) {\n- valid = false;\n- error = e.getDetailedMessage();\n- } catch (AssertionError e) {\n- valid = false;\n- error = e.getMessage();\n- } finally {\n- SearchContext.current().close();\n- SearchContext.removeCurrent();\n+ if (request.explain()) {\n+ explanation = searchContext.query().toString();\n }\n+ } catch (QueryParsingException e) {\n+ valid = false;\n+ error = e.getDetailedMessage();\n+ } catch (AssertionError e) {\n+ valid = false;\n+ error = e.getMessage();\n+ } finally {\n+ SearchContext.current().close();\n+ SearchContext.removeCurrent();\n }\n+\n return new ShardValidateQueryResponse(request.index(), request.shardId(), valid, explanation, error);\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.validate;\n \n import com.google.common.base.Charsets;\n+import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.geo.GeoDistance;\n@@ -28,6 +29,7 @@\n import org.elasticsearch.index.query.FilterBuilders;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n import org.hamcrest.Matcher;\n@@ -107,27 +109,27 @@ public void explainValidateQuery() throws Exception {\n assertThat(response.getQueryExplanation().get(0).getError(), containsString(\"Failed to parse\"));\n assertThat(response.getQueryExplanation().get(0).getExplanation(), nullValue());\n \n- assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"ConstantScore(_uid:type1#1)\"));\n+ assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"filtered(ConstantScore(_uid:type1#1))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.idsQuery(\"type1\").addIds(\"1\").addIds(\"2\"),\n- equalTo(\"ConstantScore(_uid:type1#1 _uid:type1#2)\"));\n+ equalTo(\"filtered(ConstantScore(_uid:type1#1 _uid:type1#2))->cache(_type:type1)\"));\n \n- assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"_all:foo\"));\n+ assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"filtered(_all:foo)->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(foo:1)->cache(bar:[2 TO 2]) cache(baz:3)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]) cache(baz:3))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\")\n )\n- ), equalTo(\"filtered(foo:1)->cache(bar:[2 TO 2])\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.matchAllQuery(),\n@@ -136,55 +138,55 @@ public void explainValidateQuery() throws Exception {\n .addPoint(30, -80)\n .addPoint(20, -90)\n .addPoint(40, -70) // closing polygon\n- ), equalTo(\"ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]]))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]])))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoBoundingBoxFilter(\"pin.location\")\n .topLeft(40, -80)\n .bottomRight(20, -70)\n- ), equalTo(\"ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0]))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0])))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), 
equalTo(\"ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15m\").to(\"25m\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15miles\").to(\"25miles\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.andFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(foo:1)->+cache(bar:[2 TO 2]) +cache(baz:3)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->+cache(bar:[2 TO 2]) +cache(baz:3))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.termsFilter(\"foo\", \"1\", \"2\", \"3\")),\n- equalTo(\"ConstantScore(cache(foo:1 foo:2 foo:3))\"));\n+ equalTo(\"filtered(ConstantScore(cache(foo:1 foo:2 foo:3)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.notFilter(FilterBuilders.termFilter(\"foo\", \"bar\"))),\n- equalTo(\"ConstantScore(NotFilter(cache(foo:bar)))\"));\n+ equalTo(\"filtered(ConstantScore(NotFilter(cache(foo:bar))))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.hasChildFilter(\n \"child-type\",\n QueryBuilders.matchQuery(\"foo\", \"1\")\n )\n- ), equalTo(\"filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->cache(_type:child-type)))\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->cache(_type:child-type))))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.scriptFilter(\"true\")\n- ), equalTo(\"filtered(foo:1)->ScriptFilter(true)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->ScriptFilter(true))->cache(_type:type1)\"));\n \n }\n \n@@ -253,6 +255,38 @@ public void explainDateRangeInQueryString() {\n assertThat(response.isValid(), 
equalTo(true));\n }\n \n+ @Test(expected = IndexMissingException.class)\n+ public void validateEmptyCluster() {\n+ client().admin().indices().prepareValidateQuery().get();\n+ }\n+\n+ @Test\n+ public void explainNoQuery() {\n+ createIndex(\"test\");\n+ ensureGreen();\n+\n+ ValidateQueryResponse validateQueryResponse = client().admin().indices().prepareValidateQuery().setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getIndex(), equalTo(\"test\"));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), equalTo(\"ConstantScore(*:*)\"));\n+ }\n+\n+ @Test\n+ public void explainFilteredAlias() {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"test\", \"field\", \"type=string\")\n+ .addAlias(new Alias(\"alias\").filter(FilterBuilders.termFilter(\"field\", \"value1\"))));\n+ ensureGreen();\n+\n+ ValidateQueryResponse validateQueryResponse = client().admin().indices().prepareValidateQuery(\"alias\")\n+ .setQuery(QueryBuilders.matchAllQuery()).setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getIndex(), equalTo(\"test\"));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), containsString(\"field:value1\"));\n+ }\n+\n private void assertExplanation(QueryBuilder queryBuilder, Matcher<String> matcher) {\n ValidateQueryResponse response = client().admin().indices().prepareValidateQuery(\"test\")\n .setTypes(\"type1\")",
"filename": "src/test/java/org/elasticsearch/validate/SimpleValidateQueryTests.java",
"status": "modified"
}
]
} |
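For the type-filter issue above, the updated expectations in `SimpleValidateQueryTests` show the intended behaviour: once the request carries a type, the explanation is wrapped in the type filter (for example `...->cache(_type:type1)`). The sketch below illustrates the same call through the Java admin API; it is a minimal sketch, assuming the ES 1.x integration-test harness, with "test", "type1", and "foo" as illustrative names taken from that test class rather than anything required by the API.

```java
// Minimal sketch of the type-filter case checked in SimpleValidateQueryTests.
// Assumes the ES 1.x integration-test harness (client(), prepareCreate, ensureGreen);
// index/type/field names are illustrative only.
import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryResponse;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.test.ElasticsearchIntegrationTest;
import org.junit.Test;

import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;

public class TypeFilterExplainSketch extends ElasticsearchIntegrationTest {

    @Test
    public void explanationIncludesTypeFilter() {
        assertAcked(prepareCreate("test").addMapping("type1", "foo", "type=string"));
        ensureGreen();

        ValidateQueryResponse response = client().admin().indices()
                .prepareValidateQuery("test")
                .setTypes("type1")                  // the type given on the request/URI
                .setQuery(QueryBuilders.termQuery("foo", "value"))
                .setExplain(true)
                .get();

        // With the fix the explanation is wrapped in the type filter, e.g.
        // "filtered(foo:value)->cache(_type:type1)" rather than plain "foo:value".
        System.out.println(response.getQueryExplanation().get(0).getExplanation());
    }
}
```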
{
"body": "```\nPUT /foo/t/1\n{\"foo\": \"bar\"}\n\nPUT /foo/_alias/bar\n{\n \"filter\": {\"term\": { \"foo\": \"bar\"}}\n}\n\nGET /foo/_validate/query?explain\n{\"query\": { \"match_all\": {}}}\n```\n\nReturns:\n\n```\n {\n \"index\": \"foo\",\n \"valid\": true,\n \"explanation\": \"ConstantScore(*:*)\"\n }\n```\n\nIt should include the filter from the alias.\n",
"comments": [
{
"body": "should we get this into `1.1.2` as well?\n",
"created_at": "2014-05-18T10:05:23Z"
},
{
"body": "Makes sense to me, will do\n",
"created_at": "2014-05-19T07:55:05Z"
}
],
"number": 6112,
"title": "Validate query ignores alias filters"
} | {
"body": "Made sure that a match_all query is used when no query is specified and ensure no NPE is thrown either.\nAlso used the same code path as the search api to ensure that alias filters are taken into account, same for type filters.\n\nCloses #6111 Closes #6112 Closes #6116\n",
"number": 6114,
"review_comments": [],
"title": "Fixed validate query parsing issues"
} | {
"commits": [
{
"message": "Fixed validate query parsing issues\n\nMade sure that a match_all query is used when no query is specified and ensure no NPE is thrown either.\nAlso used the same code path as the search api to ensure that alias filters are taken into account, same for type filters.\n\nCloses #6111 Closes #6112 Closes #6116"
}
],
"files": [
{
"diff": "@@ -25,3 +25,12 @@\n \n - is_false: valid\n \n+ - do:\n+ indices.validate_query:\n+ explain: true\n+\n+ - is_true: valid\n+ - match: {_shards.failed: 0}\n+ - match: {explanations.0.index: 'testing'}\n+ - match: {explanations.0.explanation: 'ConstantScore(*:*)'}\n+",
"filename": "rest-api-spec/test/indices.validate_query/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -37,7 +37,6 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.query.IndexQueryParserService;\n-import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.query.QueryParsingException;\n import org.elasticsearch.index.service.IndexService;\n import org.elasticsearch.index.shard.service.IndexShard;\n@@ -182,30 +181,35 @@ protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest re\n boolean valid;\n String explanation = null;\n String error = null;\n- if (request.source().length() == 0) {\n+\n+ DefaultSearchContext searchContext = new DefaultSearchContext(0,\n+ new ShardSearchRequest().types(request.types()).nowInMillis(request.nowInMillis())\n+ .filteringAliases(request.filteringAliases()),\n+ null, indexShard.acquireSearcher(\"validate_query\"), indexService, indexShard,\n+ scriptService, cacheRecycler, pageCacheRecycler, bigArrays\n+ );\n+ SearchContext.setCurrent(searchContext);\n+ try {\n+ if (request.source() != null && request.source().length() > 0) {\n+ searchContext.parsedQuery(queryParserService.parseQuery(request.source()));\n+ }\n+ searchContext.preProcess();\n+\n valid = true;\n- } else {\n- SearchContext.setCurrent(new DefaultSearchContext(0,\n- new ShardSearchRequest().types(request.types()).nowInMillis(request.nowInMillis()),\n- null, indexShard.acquireSearcher(\"validate_query\"), indexService, indexShard,\n- scriptService, cacheRecycler, pageCacheRecycler, bigArrays));\n- try {\n- ParsedQuery parsedQuery = queryParserService.parseQuery(request.source());\n- valid = true;\n- if (request.explain()) {\n- explanation = parsedQuery.query().toString();\n- }\n- } catch (QueryParsingException e) {\n- valid = false;\n- error = e.getDetailedMessage();\n- } catch (AssertionError e) {\n- valid = false;\n- error = e.getMessage();\n- } finally {\n- SearchContext.current().close();\n- SearchContext.removeCurrent();\n+ if (request.explain()) {\n+ explanation = searchContext.query().toString();\n }\n+ } catch (QueryParsingException e) {\n+ valid = false;\n+ error = e.getDetailedMessage();\n+ } catch (AssertionError e) {\n+ valid = false;\n+ error = e.getMessage();\n+ } finally {\n+ SearchContext.current().close();\n+ SearchContext.removeCurrent();\n }\n+\n return new ShardValidateQueryResponse(request.index(), request.shardId(), valid, explanation, error);\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.validate;\n \n import com.google.common.base.Charsets;\n+import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.geo.GeoDistance;\n@@ -28,6 +29,7 @@\n import org.elasticsearch.index.query.FilterBuilders;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n import org.hamcrest.Matcher;\n@@ -107,27 +109,27 @@ public void explainValidateQuery() throws Exception {\n assertThat(response.getQueryExplanation().get(0).getError(), containsString(\"Failed to parse\"));\n assertThat(response.getQueryExplanation().get(0).getExplanation(), nullValue());\n \n- assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"ConstantScore(_uid:type1#1)\"));\n+ assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"filtered(ConstantScore(_uid:type1#1))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.idsQuery(\"type1\").addIds(\"1\").addIds(\"2\"),\n- equalTo(\"ConstantScore(_uid:type1#1 _uid:type1#2)\"));\n+ equalTo(\"filtered(ConstantScore(_uid:type1#1 _uid:type1#2))->cache(_type:type1)\"));\n \n- assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"_all:foo\"));\n+ assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"filtered(_all:foo)->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(foo:1)->cache(bar:[2 TO 2]) cache(baz:3)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]) cache(baz:3))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\")\n )\n- ), equalTo(\"filtered(foo:1)->cache(bar:[2 TO 2])\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.matchAllQuery(),\n@@ -136,55 +138,55 @@ public void explainValidateQuery() throws Exception {\n .addPoint(30, -80)\n .addPoint(20, -90)\n .addPoint(40, -70) // closing polygon\n- ), equalTo(\"ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]]))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]])))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoBoundingBoxFilter(\"pin.location\")\n .topLeft(40, -80)\n .bottomRight(20, -70)\n- ), equalTo(\"ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0]))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0])))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), 
equalTo(\"ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15m\").to(\"25m\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15miles\").to(\"25miles\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.andFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(foo:1)->+cache(bar:[2 TO 2]) +cache(baz:3)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->+cache(bar:[2 TO 2]) +cache(baz:3))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.termsFilter(\"foo\", \"1\", \"2\", \"3\")),\n- equalTo(\"ConstantScore(cache(foo:1 foo:2 foo:3))\"));\n+ equalTo(\"filtered(ConstantScore(cache(foo:1 foo:2 foo:3)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.notFilter(FilterBuilders.termFilter(\"foo\", \"bar\"))),\n- equalTo(\"ConstantScore(NotFilter(cache(foo:bar)))\"));\n+ equalTo(\"filtered(ConstantScore(NotFilter(cache(foo:bar))))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.hasChildFilter(\n \"child-type\",\n QueryBuilders.matchQuery(\"foo\", \"1\")\n )\n- ), equalTo(\"filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->cache(_type:child-type)))\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->cache(_type:child-type))))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.scriptFilter(\"true\")\n- ), equalTo(\"filtered(foo:1)->ScriptFilter(true)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->ScriptFilter(true))->cache(_type:type1)\"));\n \n }\n \n@@ -253,6 +255,38 @@ public void explainDateRangeInQueryString() {\n assertThat(response.isValid(), 
equalTo(true));\n }\n \n+ @Test(expected = IndexMissingException.class)\n+ public void validateEmptyCluster() {\n+ client().admin().indices().prepareValidateQuery().get();\n+ }\n+\n+ @Test\n+ public void explainNoQuery() {\n+ createIndex(\"test\");\n+ ensureGreen();\n+\n+ ValidateQueryResponse validateQueryResponse = client().admin().indices().prepareValidateQuery().setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getIndex(), equalTo(\"test\"));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), equalTo(\"ConstantScore(*:*)\"));\n+ }\n+\n+ @Test\n+ public void explainFilteredAlias() {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"test\", \"field\", \"type=string\")\n+ .addAlias(new Alias(\"alias\").filter(FilterBuilders.termFilter(\"field\", \"value1\"))));\n+ ensureGreen();\n+\n+ ValidateQueryResponse validateQueryResponse = client().admin().indices().prepareValidateQuery(\"alias\")\n+ .setQuery(QueryBuilders.matchAllQuery()).setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getIndex(), equalTo(\"test\"));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), containsString(\"field:value1\"));\n+ }\n+\n private void assertExplanation(QueryBuilder queryBuilder, Matcher<String> matcher) {\n ValidateQueryResponse response = client().admin().indices().prepareValidateQuery(\"test\")\n .setTypes(\"type1\")",
"filename": "src/test/java/org/elasticsearch/validate/SimpleValidateQueryTests.java",
"status": "modified"
}
]
} |
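The alias-filter row is covered by the new `explainFilteredAlias` test in the diff. As a companion to the REST reproduction in the issue body, the sketch below shows the equivalent Java calls; it is adapted from that test under the same harness assumption as the previous sketch, with "alias", "field", and "value1" taken from the test rather than being required names.

```java
// Sketch of the filtered-alias case, adapted from the explainFilteredAlias test
// in the diff above. Assumes the ES 1.x integration-test harness; names are
// illustrative only.
import org.elasticsearch.action.admin.indices.alias.Alias;
import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryResponse;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.test.ElasticsearchIntegrationTest;
import org.junit.Test;

import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;

public class AliasFilterExplainSketch extends ElasticsearchIntegrationTest {

    @Test
    public void explanationIncludesAliasFilter() {
        assertAcked(prepareCreate("test")
                .addMapping("test", "field", "type=string")
                .addAlias(new Alias("alias").filter(FilterBuilders.termFilter("field", "value1"))));
        ensureGreen();

        // Validate through the alias: with the fix the explanation contains the
        // alias filter (field:value1) in addition to the match_all query.
        ValidateQueryResponse response = client().admin().indices()
                .prepareValidateQuery("alias")
                .setQuery(QueryBuilders.matchAllQuery())
                .setExplain(true)
                .get();

        System.out.println(response.getQueryExplanation().get(0).getExplanation());
    }
}
```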
{
"body": "```\nGET /_validate/query\n\nfailed to executed [[[]][], source[_na_], explain:false]\njava.lang.NullPointerException\n at org.elasticsearch.action.admin.indices.validate.query.TransportValidateQueryAction.shardOperation(TransportValidateQueryAction.java:185)\n at org.elasticsearch.action.admin.indices.validate.query.TransportValidateQueryAction.shardOperation(TransportValidateQueryAction.java:63)\n at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:170)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:722)\n```\n\nShould just default to a `match_all` query\n",
"comments": [],
"number": 6111,
"title": "Validate query without a body throws an NPE"
} | {
"body": "Made sure that a match_all query is used when no query is specified and ensure no NPE is thrown either.\nAlso used the same code path as the search api to ensure that alias filters are taken into account, same for type filters.\n\nCloses #6111 Closes #6112 Closes #6116\n",
"number": 6114,
"review_comments": [],
"title": "Fixed validate query parsing issues"
} | {
"commits": [
{
"message": "Fixed validate query parsing issues\n\nMade sure that a match_all query is used when no query is specified and ensure no NPE is thrown either.\nAlso used the same code path as the search api to ensure that alias filters are taken into account, same for type filters.\n\nCloses #6111 Closes #6112 Closes #6116"
}
],
"files": [
{
"diff": "@@ -25,3 +25,12 @@\n \n - is_false: valid\n \n+ - do:\n+ indices.validate_query:\n+ explain: true\n+\n+ - is_true: valid\n+ - match: {_shards.failed: 0}\n+ - match: {explanations.0.index: 'testing'}\n+ - match: {explanations.0.explanation: 'ConstantScore(*:*)'}\n+",
"filename": "rest-api-spec/test/indices.validate_query/10_basic.yaml",
"status": "modified"
},
{
"diff": "@@ -37,7 +37,6 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.query.IndexQueryParserService;\n-import org.elasticsearch.index.query.ParsedQuery;\n import org.elasticsearch.index.query.QueryParsingException;\n import org.elasticsearch.index.service.IndexService;\n import org.elasticsearch.index.shard.service.IndexShard;\n@@ -182,30 +181,35 @@ protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest re\n boolean valid;\n String explanation = null;\n String error = null;\n- if (request.source().length() == 0) {\n+\n+ DefaultSearchContext searchContext = new DefaultSearchContext(0,\n+ new ShardSearchRequest().types(request.types()).nowInMillis(request.nowInMillis())\n+ .filteringAliases(request.filteringAliases()),\n+ null, indexShard.acquireSearcher(\"validate_query\"), indexService, indexShard,\n+ scriptService, cacheRecycler, pageCacheRecycler, bigArrays\n+ );\n+ SearchContext.setCurrent(searchContext);\n+ try {\n+ if (request.source() != null && request.source().length() > 0) {\n+ searchContext.parsedQuery(queryParserService.parseQuery(request.source()));\n+ }\n+ searchContext.preProcess();\n+\n valid = true;\n- } else {\n- SearchContext.setCurrent(new DefaultSearchContext(0,\n- new ShardSearchRequest().types(request.types()).nowInMillis(request.nowInMillis()),\n- null, indexShard.acquireSearcher(\"validate_query\"), indexService, indexShard,\n- scriptService, cacheRecycler, pageCacheRecycler, bigArrays));\n- try {\n- ParsedQuery parsedQuery = queryParserService.parseQuery(request.source());\n- valid = true;\n- if (request.explain()) {\n- explanation = parsedQuery.query().toString();\n- }\n- } catch (QueryParsingException e) {\n- valid = false;\n- error = e.getDetailedMessage();\n- } catch (AssertionError e) {\n- valid = false;\n- error = e.getMessage();\n- } finally {\n- SearchContext.current().close();\n- SearchContext.removeCurrent();\n+ if (request.explain()) {\n+ explanation = searchContext.query().toString();\n }\n+ } catch (QueryParsingException e) {\n+ valid = false;\n+ error = e.getDetailedMessage();\n+ } catch (AssertionError e) {\n+ valid = false;\n+ error = e.getMessage();\n+ } finally {\n+ SearchContext.current().close();\n+ SearchContext.removeCurrent();\n }\n+\n return new ShardValidateQueryResponse(request.index(), request.shardId(), valid, explanation, error);\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.validate;\n \n import com.google.common.base.Charsets;\n+import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.geo.GeoDistance;\n@@ -28,6 +29,7 @@\n import org.elasticsearch.index.query.FilterBuilders;\n import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n import org.hamcrest.Matcher;\n@@ -107,27 +109,27 @@ public void explainValidateQuery() throws Exception {\n assertThat(response.getQueryExplanation().get(0).getError(), containsString(\"Failed to parse\"));\n assertThat(response.getQueryExplanation().get(0).getExplanation(), nullValue());\n \n- assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"ConstantScore(_uid:type1#1)\"));\n+ assertExplanation(QueryBuilders.queryString(\"_id:1\"), equalTo(\"filtered(ConstantScore(_uid:type1#1))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.idsQuery(\"type1\").addIds(\"1\").addIds(\"2\"),\n- equalTo(\"ConstantScore(_uid:type1#1 _uid:type1#2)\"));\n+ equalTo(\"filtered(ConstantScore(_uid:type1#1 _uid:type1#2))->cache(_type:type1)\"));\n \n- assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"_all:foo\"));\n+ assertExplanation(QueryBuilders.queryString(\"foo\"), equalTo(\"filtered(_all:foo)->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(foo:1)->cache(bar:[2 TO 2]) cache(baz:3)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]) cache(baz:3))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.orFilter(\n FilterBuilders.termFilter(\"bar\", \"2\")\n )\n- ), equalTo(\"filtered(foo:1)->cache(bar:[2 TO 2])\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->cache(bar:[2 TO 2]))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.matchAllQuery(),\n@@ -136,55 +138,55 @@ public void explainValidateQuery() throws Exception {\n .addPoint(30, -80)\n .addPoint(20, -90)\n .addPoint(40, -70) // closing polygon\n- ), equalTo(\"ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]]))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoPolygonFilter(pin.location, [[40.0, -70.0], [30.0, -80.0], [20.0, -90.0], [40.0, -70.0]])))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoBoundingBoxFilter(\"pin.location\")\n .topLeft(40, -80)\n .bottomRight(20, -70)\n- ), equalTo(\"ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0]))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoBoundingBoxFilter(pin.location, [40.0, -80.0], [20.0, -70.0])))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), 
equalTo(\"ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceFilter(\"pin.location\")\n .lat(10).lon(20).distance(15, DistanceUnit.DEFAULT).geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceFilter(pin.location, PLANE, 15.0, 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15m\").to(\"25m\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [15.0 - 25.0], 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.geoDistanceRangeFilter(\"pin.location\")\n .lat(10).lon(20).from(\"15miles\").to(\"25miles\").geoDistance(GeoDistance.PLANE)\n- ), equalTo(\"ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0))\"));\n+ ), equalTo(\"filtered(ConstantScore(GeoDistanceRangeFilter(pin.location, PLANE, [\" + DistanceUnit.DEFAULT.convert(15.0, DistanceUnit.MILES) + \" - \" + DistanceUnit.DEFAULT.convert(25.0, DistanceUnit.MILES) + \"], 10.0, 20.0)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.andFilter(\n FilterBuilders.termFilter(\"bar\", \"2\"),\n FilterBuilders.termFilter(\"baz\", \"3\")\n )\n- ), equalTo(\"filtered(foo:1)->+cache(bar:[2 TO 2]) +cache(baz:3)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->+cache(bar:[2 TO 2]) +cache(baz:3))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.termsFilter(\"foo\", \"1\", \"2\", \"3\")),\n- equalTo(\"ConstantScore(cache(foo:1 foo:2 foo:3))\"));\n+ equalTo(\"filtered(ConstantScore(cache(foo:1 foo:2 foo:3)))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.constantScoreQuery(FilterBuilders.notFilter(FilterBuilders.termFilter(\"foo\", \"bar\"))),\n- equalTo(\"ConstantScore(NotFilter(cache(foo:bar)))\"));\n+ equalTo(\"filtered(ConstantScore(NotFilter(cache(foo:bar))))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.hasChildFilter(\n \"child-type\",\n QueryBuilders.matchQuery(\"foo\", \"1\")\n )\n- ), equalTo(\"filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->cache(_type:child-type)))\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->CustomQueryWrappingFilter(child_filter[child-type/type1](filtered(foo:1)->cache(_type:child-type))))->cache(_type:type1)\"));\n \n assertExplanation(QueryBuilders.filteredQuery(\n QueryBuilders.termQuery(\"foo\", \"1\"),\n FilterBuilders.scriptFilter(\"true\")\n- ), equalTo(\"filtered(foo:1)->ScriptFilter(true)\"));\n+ ), equalTo(\"filtered(filtered(foo:1)->ScriptFilter(true))->cache(_type:type1)\"));\n \n }\n \n@@ -253,6 +255,38 @@ public void explainDateRangeInQueryString() {\n assertThat(response.isValid(), 
equalTo(true));\n }\n \n+ @Test(expected = IndexMissingException.class)\n+ public void validateEmptyCluster() {\n+ client().admin().indices().prepareValidateQuery().get();\n+ }\n+\n+ @Test\n+ public void explainNoQuery() {\n+ createIndex(\"test\");\n+ ensureGreen();\n+\n+ ValidateQueryResponse validateQueryResponse = client().admin().indices().prepareValidateQuery().setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getIndex(), equalTo(\"test\"));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), equalTo(\"ConstantScore(*:*)\"));\n+ }\n+\n+ @Test\n+ public void explainFilteredAlias() {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"test\", \"field\", \"type=string\")\n+ .addAlias(new Alias(\"alias\").filter(FilterBuilders.termFilter(\"field\", \"value1\"))));\n+ ensureGreen();\n+\n+ ValidateQueryResponse validateQueryResponse = client().admin().indices().prepareValidateQuery(\"alias\")\n+ .setQuery(QueryBuilders.matchAllQuery()).setExplain(true).get();\n+ assertThat(validateQueryResponse.isValid(), equalTo(true));\n+ assertThat(validateQueryResponse.getQueryExplanation().size(), equalTo(1));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getIndex(), equalTo(\"test\"));\n+ assertThat(validateQueryResponse.getQueryExplanation().get(0).getExplanation(), containsString(\"field:value1\"));\n+ }\n+\n private void assertExplanation(QueryBuilder queryBuilder, Matcher<String> matcher) {\n ValidateQueryResponse response = client().admin().indices().prepareValidateQuery(\"test\")\n .setTypes(\"type1\")",
"filename": "src/test/java/org/elasticsearch/validate/SimpleValidateQueryTests.java",
"status": "modified"
}
]
} |
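The NPE row is exercised by the new `explainNoQuery` test in the diff: a validate request with no query body should now default to match_all instead of dereferencing a null source. The sketch below mirrors that test under the same harness assumption as the sketches above; the expected explanation string is the one asserted in the diff.

```java
// Sketch of the "no query body" case, adapted from the explainNoQuery test in
// the diff above. Assumes the ES 1.x integration-test harness.
import org.elasticsearch.action.admin.indices.validate.query.ValidateQueryResponse;
import org.elasticsearch.test.ElasticsearchIntegrationTest;
import org.junit.Test;

public class NoQueryValidateSketch extends ElasticsearchIntegrationTest {

    @Test
    public void emptyBodyDefaultsToMatchAll() {
        createIndex("test");
        ensureGreen();

        // No setQuery(...) at all: before the fix this hit a NullPointerException
        // on the shard; now the request validates as a match_all query.
        ValidateQueryResponse response = client().admin().indices()
                .prepareValidateQuery()
                .setExplain(true)
                .get();

        // Expected explanation per the test: "ConstantScore(*:*)".
        System.out.println(response.getQueryExplanation().get(0).getExplanation());
    }
}
```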
{
"body": "``` json\nDELETE /test1\nPUT /test1\nPUT /test1/test/_mapping\n{\n \"mappings\":\n {\n \"test\":{\n \"properties\":{\n \"prop1\":{\n \"type\": \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n }\n }\n}\nGET /test1/test/_mapping\n```\n\nThe 2nd PUT request above is invalid but it is still accepted/acknowledged by the engine. Would be nice to add some validation to the above use case to return an error.\n",
"comments": [],
"number": 5864,
"title": "Error handling on invalid mapping data"
} | {
"body": "When a mapping is declared and the type is known from the uri\nthen the type can be skipped in the body (see #4483). However,\nthere was no check if the given keys actually make a valid mapping.\n\ncloses #5864\n",
"number": 6093,
"review_comments": [
{
"body": "Maybe enclose key with brackets to make clearer where it starts and where it ends?\n\n``` java\nthrow new MapperParsingException(\"Got unrecognized key [\"+ key + \"] in root of mapping for type\");\n```\n",
"created_at": "2014-05-09T06:19:33Z"
},
{
"body": "I think there can be other properties than the root mappers and `properties` on the root level? (eg. `dynamic_date_formats`, `dynamic_templates`, etc.) Would they still work?\n",
"created_at": "2014-05-09T06:25:21Z"
},
{
"body": "Indeed not! Great that you saw that.\n",
"created_at": "2014-05-09T10:20:21Z"
},
{
"body": "you can replace the call to `get` with a call to `remove` to not need to remove the `_meta` key on the next line\n",
"created_at": "2014-05-12T09:21:22Z"
},
{
"body": "Can you use a StringBuilder?\n",
"created_at": "2014-05-12T09:21:56Z"
},
{
"body": "s/parseObjectOrDocumentTyeProperties/parseObjectOrDocumentTypeProperties/ (missing 'p')\n",
"created_at": "2014-05-12T09:23:03Z"
},
{
"body": "do you mean \"related\" instead of \"unrelated\"?\n",
"created_at": "2014-05-12T09:36:01Z"
},
{
"body": "nice that it found such issues!\n",
"created_at": "2014-05-12T09:37:09Z"
},
{
"body": "Since you never call `iterator.remove()` in this loop, let's switch back to the for-each syntax?\n",
"created_at": "2014-05-12T09:40:43Z"
},
{
"body": "Or maybe directly:\n\n``` java\nif (parseObjectOrDocumentTyeProperties( fieldName, fieldNode, parserContext, builder)\n || processField(builder,fieldName, fieldNode)) {\n iterator.remove();\n}\n```\n",
"created_at": "2014-05-12T09:44:06Z"
}
],
"title": "Check if root mapping is actually valid"
} | {
"commits": [
{
"message": "Check if root mapping is actually valid\n\nWhen a mapping is declared and the type is known from the uri\nthen the type can be skipped in the body (see #4483). However,\nthere was no check if the given keys actually make a valid mapping.\n\ncloses #5864"
},
{
"message": "remove and add same time"
},
{
"message": "use StringBuilder"
},
{
"message": "switch back to iterating of erntry set"
},
{
"message": "typo"
},
{
"message": "shorter if"
},
{
"message": "typo"
}
],
"files": [
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Maps;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.Version;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.Nullable;\n@@ -48,7 +49,9 @@\n import org.elasticsearch.index.settings.IndexSettings;\n import org.elasticsearch.index.similarity.SimilarityLookupService;\n \n+import java.util.Iterator;\n import java.util.Map;\n+import java.util.Set;\n \n import static org.elasticsearch.index.mapper.MapperBuilders.doc;\n \n@@ -201,31 +204,38 @@ private DocumentMapper parse(String type, Map<String, Object> mapping, String de\n \n \n Mapper.TypeParser.ParserContext parserContext = parserContext();\n+ // parse RootObjectMapper\n DocumentMapper.Builder docBuilder = doc(index.name(), indexSettings, (RootObjectMapper.Builder) rootObjectTypeParser.parse(type, mapping, parserContext));\n-\n- for (Map.Entry<String, Object> entry : mapping.entrySet()) {\n+ Iterator<Map.Entry<String, Object>> iterator = mapping.entrySet().iterator();\n+ // parse DocumentMapper\n+ while(iterator.hasNext()) {\n+ Map.Entry<String, Object> entry = iterator.next();\n String fieldName = Strings.toUnderscoreCase(entry.getKey());\n Object fieldNode = entry.getValue();\n \n if (\"index_analyzer\".equals(fieldName)) {\n+ iterator.remove();\n NamedAnalyzer analyzer = analysisService.analyzer(fieldNode.toString());\n if (analyzer == null) {\n throw new MapperParsingException(\"Analyzer [\" + fieldNode.toString() + \"] not found for index_analyzer setting on root type [\" + type + \"]\");\n }\n docBuilder.indexAnalyzer(analyzer);\n } else if (\"search_analyzer\".equals(fieldName)) {\n+ iterator.remove();\n NamedAnalyzer analyzer = analysisService.analyzer(fieldNode.toString());\n if (analyzer == null) {\n throw new MapperParsingException(\"Analyzer [\" + fieldNode.toString() + \"] not found for search_analyzer setting on root type [\" + type + \"]\");\n }\n docBuilder.searchAnalyzer(analyzer);\n } else if (\"search_quote_analyzer\".equals(fieldName)) {\n+ iterator.remove();\n NamedAnalyzer analyzer = analysisService.analyzer(fieldNode.toString());\n if (analyzer == null) {\n throw new MapperParsingException(\"Analyzer [\" + fieldNode.toString() + \"] not found for search_analyzer setting on root type [\" + type + \"]\");\n }\n docBuilder.searchQuoteAnalyzer(analyzer);\n } else if (\"analyzer\".equals(fieldName)) {\n+ iterator.remove();\n NamedAnalyzer analyzer = analysisService.analyzer(fieldNode.toString());\n if (analyzer == null) {\n throw new MapperParsingException(\"Analyzer [\" + fieldNode.toString() + \"] not found for analyzer setting on root type [\" + type + \"]\");\n@@ -235,11 +245,25 @@ private DocumentMapper parse(String type, Map<String, Object> mapping, String de\n } else {\n Mapper.TypeParser typeParser = rootTypeParsers.get(fieldName);\n if (typeParser != null) {\n+ iterator.remove();\n docBuilder.put(typeParser.parse(fieldName, (Map<String, Object>) fieldNode, parserContext));\n }\n }\n }\n \n+ ImmutableMap<String, Object> attributes = ImmutableMap.of();\n+ if (mapping.containsKey(\"_meta\")) {\n+ attributes = ImmutableMap.copyOf((Map<String, Object>) mapping.remove(\"_meta\"));\n+ }\n+ docBuilder.meta(attributes);\n+\n+ if (!mapping.isEmpty()) {\n+ StringBuilder remainingFields = new StringBuilder();\n+ for (String key : mapping.keySet()) {\n+ remainingFields.append(\" [\").append(key).append(\" : 
\").append(mapping.get(key).toString()).append(\"]\");\n+ }\n+ throw new MapperParsingException(\"Root type mapping not empty after parsing! Remaining fields:\" + remainingFields.toString());\n+ }\n if (!docBuilder.hasIndexAnalyzer()) {\n docBuilder.indexAnalyzer(analysisService.defaultIndexAnalyzer());\n }\n@@ -250,12 +274,6 @@ private DocumentMapper parse(String type, Map<String, Object> mapping, String de\n docBuilder.searchAnalyzer(analysisService.defaultSearchQuoteAnalyzer());\n }\n \n- ImmutableMap<String, Object> attributes = ImmutableMap.of();\n- if (mapping.containsKey(\"_meta\")) {\n- attributes = ImmutableMap.copyOf((Map<String, Object>) mapping.get(\"_meta\"));\n- }\n- docBuilder.meta(attributes);\n-\n DocumentMapper documentMapper = docBuilder.build(this);\n // update the source with the generated one\n documentMapper.refreshSource();\n@@ -279,15 +297,13 @@ private Tuple<String, Map<String, Object>> extractMapping(String type, Map<Strin\n // if we don't have any keys throw an exception\n throw new MapperParsingException(\"malformed mapping no root object found\");\n }\n-\n String rootName = root.keySet().iterator().next();\n Tuple<String, Map<String, Object>> mapping;\n if (type == null || type.equals(rootName)) {\n mapping = new Tuple<>(rootName, (Map<String, Object>) root.get(rootName));\n } else {\n mapping = new Tuple<>(type, root);\n }\n-\n return mapping;\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/DocumentMapperParser.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.index.mapper.object;\n \n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n-import org.apache.lucene.document.Field;\n import org.apache.lucene.document.XStringField;\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.index.Term;\n@@ -180,63 +179,80 @@ protected ObjectMapper createMapper(String name, String fullPath, boolean enable\n public static class TypeParser implements Mapper.TypeParser {\n @Override\n public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n- Map<String, Object> objectNode = node;\n ObjectMapper.Builder builder = createBuilder(name);\n-\n- boolean nested = false;\n- boolean nestedIncludeInParent = false;\n- boolean nestedIncludeInRoot = false;\n- for (Map.Entry<String, Object> entry : objectNode.entrySet()) {\n+ for (Map.Entry<String, Object> entry : node.entrySet()) {\n String fieldName = Strings.toUnderscoreCase(entry.getKey());\n Object fieldNode = entry.getValue();\n+ parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder);\n+ parseObjectProperties(name, fieldName, fieldNode, builder);\n+ }\n+ parseNested(name, node, builder);\n+ return builder;\n+ }\n \n- if (fieldName.equals(\"dynamic\")) {\n- String value = fieldNode.toString();\n- if (value.equalsIgnoreCase(\"strict\")) {\n- builder.dynamic(Dynamic.STRICT);\n- } else {\n- builder.dynamic(nodeBooleanValue(fieldNode) ? Dynamic.TRUE : Dynamic.FALSE);\n- }\n- } else if (fieldName.equals(\"type\")) {\n- String type = fieldNode.toString();\n- if (type.equals(CONTENT_TYPE)) {\n- builder.nested = Nested.NO;\n- } else if (type.equals(NESTED_CONTENT_TYPE)) {\n- nested = true;\n- } else {\n- throw new MapperParsingException(\"Trying to parse an object but has a different type [\" + type + \"] for [\" + name + \"]\");\n- }\n- } else if (fieldName.equals(\"include_in_parent\")) {\n- nestedIncludeInParent = nodeBooleanValue(fieldNode);\n- } else if (fieldName.equals(\"include_in_root\")) {\n- nestedIncludeInRoot = nodeBooleanValue(fieldNode);\n- } else if (fieldName.equals(\"enabled\")) {\n- builder.enabled(nodeBooleanValue(fieldNode));\n- } else if (fieldName.equals(\"path\")) {\n- builder.pathType(parsePathType(name, fieldNode.toString()));\n- } else if (fieldName.equals(\"properties\")) {\n- if (fieldNode instanceof Collection && ((Collection) fieldNode).isEmpty()) {\n- // nothing to do here, empty (to support \"properties: []\" case)\n- } else if (!(fieldNode instanceof Map)) {\n- throw new ElasticsearchParseException(\"properties must be a map type\");\n- } else {\n- parseProperties(builder, (Map<String, Object>) fieldNode, parserContext);\n- }\n- } else if (fieldName.equals(\"include_in_all\")) {\n- builder.includeInAll(nodeBooleanValue(fieldNode));\n+ protected static boolean parseObjectOrDocumentTypeProperties(String fieldName, Object fieldNode, ParserContext parserContext, ObjectMapper.Builder builder) {\n+ if (fieldName.equals(\"dynamic\")) {\n+ String value = fieldNode.toString();\n+ if (value.equalsIgnoreCase(\"strict\")) {\n+ builder.dynamic(Dynamic.STRICT);\n+ } else {\n+ builder.dynamic(nodeBooleanValue(fieldNode) ? 
Dynamic.TRUE : Dynamic.FALSE);\n+ }\n+ return true;\n+ } else if (fieldName.equals(\"enabled\")) {\n+ builder.enabled(nodeBooleanValue(fieldNode));\n+ return true;\n+ } else if (fieldName.equals(\"properties\")) {\n+ if (fieldNode instanceof Collection && ((Collection) fieldNode).isEmpty()) {\n+ // nothing to do here, empty (to support \"properties: []\" case)\n+ } else if (!(fieldNode instanceof Map)) {\n+ throw new ElasticsearchParseException(\"properties must be a map type\");\n } else {\n- processField(builder, fieldName, fieldNode);\n+ parseProperties(builder, (Map<String, Object>) fieldNode, parserContext);\n }\n+ return true;\n }\n+ return false;\n+ }\n \n+ protected static void parseObjectProperties(String name, String fieldName, Object fieldNode, ObjectMapper.Builder builder) {\n+ if (fieldName.equals(\"path\")) {\n+ builder.pathType(parsePathType(name, fieldNode.toString()));\n+ } else if (fieldName.equals(\"include_in_all\")) {\n+ builder.includeInAll(nodeBooleanValue(fieldNode));\n+ }\n+ }\n+\n+ protected static void parseNested(String name, Map<String, Object> node, ObjectMapper.Builder builder) {\n+ boolean nested = false;\n+ boolean nestedIncludeInParent = false;\n+ boolean nestedIncludeInRoot = false;\n+ Object fieldNode = node.get(\"type\");\n+ if (fieldNode!=null) {\n+ String type = fieldNode.toString();\n+ if (type.equals(CONTENT_TYPE)) {\n+ builder.nested = Nested.NO;\n+ } else if (type.equals(NESTED_CONTENT_TYPE)) {\n+ nested = true;\n+ } else {\n+ throw new MapperParsingException(\"Trying to parse an object but has a different type [\" + type + \"] for [\" + name + \"]\");\n+ }\n+ }\n+ fieldNode = node.get(\"include_in_parent\");\n+ if (fieldNode != null) {\n+ nestedIncludeInParent = nodeBooleanValue(fieldNode);\n+ }\n+ fieldNode = node.get(\"include_in_root\");\n+ if (fieldNode != null) {\n+ nestedIncludeInRoot = nodeBooleanValue(fieldNode);\n+ }\n if (nested) {\n builder.nested = Nested.newNested(nestedIncludeInParent, nestedIncludeInRoot);\n }\n \n- return builder;\n }\n \n- private void parseProperties(ObjectMapper.Builder objBuilder, Map<String, Object> propsNode, ParserContext parserContext) {\n+ protected static void parseProperties(ObjectMapper.Builder objBuilder, Map<String, Object> propsNode, ParserContext parserContext) {\n for (Map.Entry<String, Object> entry : propsNode.entrySet()) {\n String propName = entry.getKey();\n Map<String, Object> propNode = (Map<String, Object>) entry.getValue();\n@@ -270,10 +286,6 @@ private void parseProperties(ObjectMapper.Builder objBuilder, Map<String, Object\n protected Builder createBuilder(String name) {\n return object(name);\n }\n-\n- protected void processField(Builder builder, String fieldName, Object fieldNode) {\n-\n- }\n }\n \n private final String name;",
"filename": "src/main/java/org/elasticsearch/index/mapper/object/ObjectMapper.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.Lists;\n import com.google.common.collect.Sets;\n+import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.joda.FormatDateTimeFormatter;\n import org.elasticsearch.common.joda.Joda;\n import org.elasticsearch.common.xcontent.ToXContent;\n@@ -29,10 +30,7 @@\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n \n import java.io.IOException;\n-import java.util.Arrays;\n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue;\n@@ -124,7 +122,23 @@ protected ObjectMapper.Builder createBuilder(String name) {\n }\n \n @Override\n- protected void processField(ObjectMapper.Builder builder, String fieldName, Object fieldNode) {\n+ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext parserContext) throws MapperParsingException {\n+\n+ ObjectMapper.Builder builder = createBuilder(name);\n+ Iterator<Map.Entry<String, Object>> iterator = node.entrySet().iterator();\n+ while (iterator.hasNext()) {\n+ Map.Entry<String, Object> entry = iterator.next();\n+ String fieldName = Strings.toUnderscoreCase(entry.getKey());\n+ Object fieldNode = entry.getValue();\n+ if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder)\n+ || processField(builder, fieldName, fieldNode)) {\n+ iterator.remove();\n+ }\n+ }\n+ return builder;\n+ }\n+\n+ protected boolean processField(ObjectMapper.Builder builder, String fieldName, Object fieldNode) {\n if (fieldName.equals(\"date_formats\") || fieldName.equals(\"dynamic_date_formats\")) {\n List<FormatDateTimeFormatter> dateTimeFormatters = newArrayList();\n if (fieldNode instanceof List) {\n@@ -141,6 +155,7 @@ protected void processField(ObjectMapper.Builder builder, String fieldName, Obje\n } else {\n ((Builder) builder).dynamicDateTimeFormatter(dateTimeFormatters);\n }\n+ return true;\n } else if (fieldName.equals(\"dynamic_templates\")) {\n // \"dynamic_templates\" : [\n // {\n@@ -160,11 +175,15 @@ protected void processField(ObjectMapper.Builder builder, String fieldName, Obje\n Map.Entry<String, Object> entry = tmpl.entrySet().iterator().next();\n ((Builder) builder).add(DynamicTemplate.parse(entry.getKey(), (Map<String, Object>) entry.getValue()));\n }\n+ return true;\n } else if (fieldName.equals(\"date_detection\")) {\n ((Builder) builder).dateDetection = nodeBooleanValue(fieldNode);\n+ return true;\n } else if (fieldName.equals(\"numeric_detection\")) {\n ((Builder) builder).numericDetection = nodeBooleanValue(fieldNode);\n+ return true;\n }\n+ return false;\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/mapper/object/RootObjectMapper.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,9 @@\n import org.apache.lucene.index.Term;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.TermQuery;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.lucene.all.AllEntries;\n@@ -33,18 +35,15 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.mapper.MapperTestUtils;\n+import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n+import org.elasticsearch.index.mapper.internal.*;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.List;\n+import java.util.*;\n \n import static org.elasticsearch.common.io.Streams.copyToBytesFromClasspath;\n import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n@@ -323,4 +322,55 @@ public void testMultiField_defaults() throws IOException {\n assertThat(allEntries.fields(), hasSize(1));\n assertThat(allEntries.fields(), hasItem(\"foo.bar\"));\n }\n+\n+ @Test(expected = MapperParsingException.class)\n+ public void testMisplacedTypeInRoot() throws IOException {\n+ String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/all/misplaced_type_in_root.json\");\n+ DocumentMapper docMapper = MapperTestUtils.newParser().parse(\"test\", mapping);\n+ }\n+\n+ // related to https://github.com/elasticsearch/elasticsearch/issues/5864\n+ @Test(expected = MapperParsingException.class)\n+ public void testMistypedTypeInRoot() throws IOException {\n+ String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/all/mistyped_type_in_root.json\");\n+ DocumentMapper docMapper = MapperTestUtils.newParser().parse(\"test\", mapping);\n+ }\n+\n+ // issue https://github.com/elasticsearch/elasticsearch/issues/5864\n+ @Test(expected = MapperParsingException.class)\n+ public void testMisplacedMappingAsRoot() throws IOException {\n+ String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/all/misplaced_mapping_key_in_root.json\");\n+ DocumentMapper docMapper = MapperTestUtils.newParser().parse(\"test\", mapping);\n+ }\n+\n+ // issue https://github.com/elasticsearch/elasticsearch/issues/5864\n+ // test that RootObjectMapping still works\n+ @Test\n+ public void testRootObjectMapperPropertiesDoNotCauseException() throws IOException {\n+ String mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/all/type_dynamic_template_mapping.json\");\n+ MapperTestUtils.newParser().parse(\"test\", mapping);\n+ mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/all/type_dynamic_date_formats_mapping.json\");\n+ MapperTestUtils.newParser().parse(\"test\", mapping);\n+ mapping = copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/all/type_date_detection_mapping.json\");\n+ MapperTestUtils.newParser().parse(\"test\", mapping);\n+ mapping = 
copyToStringFromClasspath(\"/org/elasticsearch/index/mapper/all/type_numeric_detection_mapping.json\");\n+ MapperTestUtils.newParser().parse(\"test\", mapping);\n+ }\n+\n+ // issue https://github.com/elasticsearch/elasticsearch/issues/5864\n+ @Test\n+ public void testRootMappersStillWorking() {\n+ String mapping = \"{\";\n+ Map<String, String> rootTypes = new HashMap<>();\n+ //just pick some example from DocumentMapperParser.rootTypeParsers\n+ rootTypes.put(SizeFieldMapper.NAME, \"{\\\"enabled\\\" : true}\");\n+ rootTypes.put(IndexFieldMapper.NAME, \"{\\\"enabled\\\" : true}\");\n+ rootTypes.put(SourceFieldMapper.NAME, \"{\\\"enabled\\\" : true}\");\n+ rootTypes.put(TypeFieldMapper.NAME, \"{\\\"store\\\" : true}\");\n+ for (String key : rootTypes.keySet()) {\n+ mapping += \"\\\"\" + key+ \"\\\"\" + \":\" + rootTypes.get(key) + \",\\n\";\n+ }\n+ mapping += \"\\\"properties\\\":{}}\" ;\n+ MapperTestUtils.newParser().parse(\"test\", mapping);\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/SimpleAllMapperTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,11 @@\n+{\n+ \"mapping\": {\n+ \"test\": {\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"string\"\n+ }\n+ }\n+ }\n+ }\n+}\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/misplaced_mapping_key_in_root.json",
"status": "added"
},
{
"diff": "@@ -0,0 +1,8 @@\n+{\n+ \"type\": \"string\",\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"string\"\n+ }\n+ }\n+}\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/misplaced_type_in_root.json",
"status": "added"
},
{
"diff": "@@ -0,0 +1,9 @@\n+{\n+ \"testX\": {\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"string\"\n+ }\n+ }\n+ }\n+}\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/mistyped_type_in_root.json",
"status": "added"
},
{
"diff": "@@ -0,0 +1,8 @@\n+{\n+ \"date_detection\" : false,\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"string\"\n+ }\n+ }\n+}\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/type_date_detection_mapping.json",
"status": "added"
},
{
"diff": "@@ -0,0 +1,8 @@\n+{\n+ \"dynamic_date_formats\" : [\"yyyy-MM-dd\", \"dd-MM-yyyy\"],\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"string\"\n+ }\n+ }\n+}\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/type_dynamic_date_formats_mapping.json",
"status": "added"
},
{
"diff": "@@ -0,0 +1,17 @@\n+{\n+ \"dynamic_templates\" : [\n+ {\n+ \"dynamic_template_name\" : {\n+ \"match\" : \"*\",\n+ \"mapping\" : {\n+ \"store\" : true\n+ }\n+ }\n+ }\n+ ],\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"string\"\n+ }\n+ }\n+}\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/type_dynamic_template_mapping.json",
"status": "added"
},
{
"diff": "@@ -0,0 +1,8 @@\n+{\n+ \"numeric_detection\" : false,\n+ \"properties\": {\n+ \"foo\": {\n+ \"type\": \"string\"\n+ }\n+ }\n+}\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/index/mapper/all/type_numeric_detection_mapping.json",
"status": "added"
},
{
"diff": "@@ -1684,7 +1684,7 @@ public void testPercolationWithDynamicTemplates() throws Exception {\n .field(\"incude_in_all\", false)\n .endObject()\n .endObject()\n- .startArray(\"dynamic_template\")\n+ .startArray(\"dynamic_templates\")\n .startObject()\n .startObject(\"custom_fields\")\n .field(\"path_match\", \"custom.*\")",
"filename": "src/test/java/org/elasticsearch/percolator/PercolatorTests.java",
"status": "modified"
},
{
"diff": "@@ -234,8 +234,8 @@ public void testRequiredRoutingMapping() throws Exception {\n public void testRequiredRoutingWithPathMapping() throws Exception {\n client().admin().indices().prepareCreate(\"test\")\n .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n- .startObject(\"_routing\").field(\"required\", true).field(\"path\", \"routing_field\").endObject()\n- .startObject(\"routing_field\").field(\"type\", \"string\").field(\"index\", randomBoolean() ? \"no\" : \"not_analyzed\").field(\"doc_values\", randomBoolean() ? \"yes\" : \"no\").endObject()\n+ .startObject(\"_routing\").field(\"required\", true).field(\"path\", \"routing_field\").endObject().startObject(\"properties\")\n+ .startObject(\"routing_field\").field(\"type\", \"string\").field(\"index\", randomBoolean() ? \"no\" : \"not_analyzed\").field(\"doc_values\", randomBoolean() ? \"yes\" : \"no\").endObject().endObject()\n .endObject().endObject())\n .execute().actionGet();\n ensureGreen();",
"filename": "src/test/java/org/elasticsearch/routing/SimpleRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -848,9 +848,6 @@ public void testSortMissingNumbers() throws Exception {\n .startObject(\"fielddata\").field(\"format\", maybeDocValues() ? \"doc_values\" : null).endObject()\n .endObject()\n .endObject()\n- .startObject(\"d_value\")\n- .field(\"type\", \"float\")\n- .endObject()\n .endObject()\n .endObject()));\n ensureGreen();",
"filename": "src/test/java/org/elasticsearch/search/sort/SimpleSortTests.java",
"status": "modified"
}
]
} |
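The fixtures and tests above hinge on one distinction: wrapper keys such as "mapping" or a mistyped type name at the root of a type mapping must be rejected, while genuine root-object settings (properties, dynamic_templates, dynamic_date_formats, date/numeric detection, and the built-in root mappers such as _size or _source) must keep parsing. Below is a minimal standalone sketch of that kind of root-key check; the class and method names are illustrative, the allowed-key list is partial, and this is not the actual DocumentMapperParser logic.

```java
import java.util.*;

/**
 * Illustrative sketch only: reject wrapper keys at the root of a type mapping
 * while accepting the legitimate root-level settings exercised by the tests above.
 */
public class RootMappingKeyCheck {

    // Root-level settings that must keep working (see the *_mapping.json fixtures).
    private static final Set<String> ALLOWED_ROOT_KEYS = new HashSet<>(Arrays.asList(
            "properties", "dynamic_templates", "dynamic_date_formats",
            "date_detection", "numeric_detection",
            "_size", "_index", "_source", "_type", "_all"));

    public static void validate(String typeName, Map<String, Object> root) {
        for (String key : root.keySet()) {
            if (key.equals(typeName) || ALLOWED_ROOT_KEYS.contains(key)) {
                continue;
            }
            // A "mapping" wrapper or a mistyped type name ends up here,
            // mirroring the MapperParsingException expected by the tests.
            throw new IllegalArgumentException("unexpected root mapping key: " + key);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> ok = new HashMap<>();
        ok.put("properties", Collections.emptyMap());
        ok.put("date_detection", false);
        validate("test", ok); // accepted

        Map<String, Object> bad = new HashMap<>();
        bad.put("mapping", Collections.emptyMap()); // misplaced wrapper key
        try {
            validate("test", bad);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected as expected: " + e.getMessage());
        }
    }
}
```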
{
"body": "When using the aggregations module I am unable to get aggregated doc counts per index.\n\nThe following query **DO NOT** return the aggregated result per index:\n\n```\n{\n \"query\": {\n \"match_all\": {}\n },\n \"size\": 0,\n \"aggs\": {\n \"type\": {\n \"terms\": {\n \"field\": \"_index\"\n }\n }\n }\n}\n```\n\nWhen I use **Facets** instead of aggs, I get the desired result:\n\n```\n{\n \"query\": {\n \"match_all\": {}\n },\n \"size\": 0,\n \"facets\": {\n \"type\": {\n \"terms\": {\n \"field\": \"_index\"\n }\n }\n }\n}\n```\n\nIs this a known issue?\n",
"comments": [
{
"body": "/cc @jpountz \n\nLooks like the `_index` field is not indexed. Probably facets has a workaround for it?\n",
"created_at": "2014-04-17T12:48:52Z"
},
{
"body": "@clintongormley Facets indeed have special handling for this field.\n\nI think we can have a cleaner fix for this, eg. by making field mappers responsible for producing field data, so that a field which is neither indexed nor doc-valued but knows what its field data looks like could return a non-empty instance. I need to think more about this...\n",
"created_at": "2014-04-18T09:04:38Z"
}
],
"number": 5848,
"title": "Unable to aggregate on _index"
} | {
"body": "This makes aggregations work on the _index field, and also allows to remove the\nspecial facet aggregator for the _index field.\n\nClose #5848\n",
"number": 6073,
"review_comments": [],
"title": "Add a dedicated field data type for the _index field mapper."
} | {
"commits": [
{
"message": "Add a dedicated field data type for the _index field mapper.\n\nThis makes aggregations work on the _index field, and also allows to remove the\nspecial facet aggregator for the _index field.\n\nClose #5848"
}
],
"files": [
{
"diff": "@@ -35,6 +35,7 @@\n import org.elasticsearch.index.fielddata.ordinals.InternalGlobalOrdinalsBuilder;\n import org.elasticsearch.index.fielddata.plain.*;\n import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.internal.IndexFieldMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n import org.elasticsearch.index.service.IndexService;\n import org.elasticsearch.index.settings.IndexSettings;\n@@ -73,6 +74,7 @@ public class IndexFieldDataService extends AbstractIndexComponent {\n .put(\"long\", new PackedArrayIndexFieldData.Builder().setNumericType(IndexNumericFieldData.NumericType.LONG))\n .put(\"geo_point\", new GeoPointDoubleArrayIndexFieldData.Builder())\n .put(ParentFieldMapper.NAME, new ParentChildIndexFieldData.Builder())\n+ .put(IndexFieldMapper.NAME, new IndexIndexFieldData.Builder())\n .put(\"binary\", new DisabledIndexFieldData.Builder())\n .immutableMap();\n ",
"filename": "src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataService.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,211 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.fielddata.plain;\n+\n+import org.apache.lucene.index.AtomicReaderContext;\n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.TermsEnum;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.fielddata.*;\n+import org.elasticsearch.index.fielddata.fieldcomparator.BytesRefFieldComparatorSource;\n+import org.elasticsearch.index.fielddata.ordinals.GlobalOrdinalsBuilder;\n+import org.elasticsearch.index.fielddata.ordinals.Ordinals;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.mapper.MapperService;\n+import org.elasticsearch.indices.fielddata.breaker.CircuitBreakerService;\n+import org.elasticsearch.search.MultiValueMode;\n+\n+public class IndexIndexFieldData implements IndexFieldData.WithOrdinals<AtomicFieldData.WithOrdinals<ScriptDocValues>> {\n+\n+ public static class Builder implements IndexFieldData.Builder {\n+\n+ @Override\n+ public IndexFieldData<?> build(Index index, Settings indexSettings, FieldMapper<?> mapper, IndexFieldDataCache cache,\n+ CircuitBreakerService breakerService, MapperService mapperService, GlobalOrdinalsBuilder globalOrdinalBuilder) {\n+ return new IndexIndexFieldData(index, mapper.names());\n+ }\n+\n+ }\n+\n+ private static final Ordinals.Docs INDEX_ORDINALS = new Ordinals.Docs() {\n+\n+ @Override\n+ public int setDocument(int docId) {\n+ return 1;\n+ }\n+\n+ @Override\n+ public long nextOrd() {\n+ return Ordinals.MIN_ORDINAL;\n+ }\n+\n+ @Override\n+ public boolean isMultiValued() {\n+ return false;\n+ }\n+\n+ @Override\n+ public long getOrd(int docId) {\n+ return Ordinals.MIN_ORDINAL;\n+ }\n+\n+ @Override\n+ public long getMaxOrd() {\n+ return 1;\n+ }\n+\n+ @Override\n+ public long currentOrd() {\n+ return Ordinals.MIN_ORDINAL;\n+ }\n+ };\n+\n+ private static class IndexBytesValues extends BytesValues.WithOrdinals {\n+\n+ final int hash;\n+\n+ protected IndexBytesValues(String index) {\n+ super(INDEX_ORDINALS);\n+ scratch.copyChars(index);\n+ hash = scratch.hashCode();\n+ }\n+\n+ @Override\n+ public int currentValueHash() {\n+ return hash;\n+ }\n+\n+ @Override\n+ public BytesRef getValueByOrd(long ord) {\n+ return scratch;\n+ }\n+\n+ }\n+\n+ private static class IndexAtomicFieldData implements AtomicFieldData.WithOrdinals<ScriptDocValues> {\n+\n+ private final String index;\n+\n+ IndexAtomicFieldData(String index) {\n+ this.index = index;\n+ }\n+\n+ @Override\n+ public long getMemorySizeInBytes() {\n+ return 0;\n+ }\n+\n+ @Override\n+ public boolean isMultiValued() {\n+ return false;\n+ }\n+\n+ @Override\n+ public 
long getNumberUniqueValues() {\n+ return 1;\n+ }\n+\n+ @Override\n+ public BytesValues.WithOrdinals getBytesValues(boolean needsHashes) {\n+ return new IndexBytesValues(index);\n+ }\n+\n+ @Override\n+ public ScriptDocValues getScriptValues() {\n+ return new ScriptDocValues.Strings(getBytesValues(false));\n+ }\n+\n+ @Override\n+ public void close() {\n+ }\n+\n+ @Override\n+ public TermsEnum getTermsEnum() {\n+ return new AtomicFieldDataWithOrdinalsTermsEnum(this);\n+ }\n+\n+ }\n+\n+ private final FieldMapper.Names names;\n+ private final Index index;\n+\n+ private IndexIndexFieldData(Index index, FieldMapper.Names names) {\n+ this.index = index;\n+ this.names = names;\n+ }\n+\n+ @Override\n+ public Index index() {\n+ return index;\n+ }\n+\n+ @Override\n+ public org.elasticsearch.index.mapper.FieldMapper.Names getFieldNames() {\n+ return names;\n+ }\n+\n+ @Override\n+ public FieldDataType getFieldDataType() {\n+ return new FieldDataType(\"string\");\n+ }\n+\n+ @Override\n+ public boolean valuesOrdered() {\n+ return true;\n+ }\n+\n+ @Override\n+ public org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource comparatorSource(Object missingValue,\n+ MultiValueMode sortMode) {\n+ return new BytesRefFieldComparatorSource(this, missingValue, sortMode);\n+ }\n+\n+ @Override\n+ public void clear() {\n+ }\n+\n+ @Override\n+ public void clear(IndexReader reader) {\n+ }\n+\n+ @Override\n+ public AtomicFieldData.WithOrdinals<ScriptDocValues> load(AtomicReaderContext context) {\n+ return new IndexAtomicFieldData(index().name());\n+ }\n+\n+ @Override\n+ public AtomicFieldData.WithOrdinals<ScriptDocValues> loadDirect(AtomicReaderContext context)\n+ throws Exception {\n+ return load(context);\n+ }\n+\n+ @Override\n+ public IndexFieldData.WithOrdinals<?> loadGlobal(IndexReader indexReader) {\n+ return this;\n+ }\n+\n+ @Override\n+ public IndexFieldData.WithOrdinals<?> localGlobalDirect(IndexReader indexReader) throws Exception {\n+ return loadGlobal(indexReader);\n+ }\n+\n+}",
"filename": "src/main/java/org/elasticsearch/index/fielddata/plain/IndexIndexFieldData.java",
"status": "added"
},
{
"diff": "@@ -290,4 +290,5 @@ public static Loading parse(String loading, Loading defaultValue) {\n boolean hasDocValues();\n \n Loading normsLoading(Loading defaultLoading);\n+\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/FieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -136,7 +136,7 @@ public FieldType defaultFieldType() {\n \n @Override\n public FieldDataType defaultFieldDataType() {\n- return new FieldDataType(\"string\");\n+ return new FieldDataType(IndexFieldMapper.NAME);\n }\n \n @Override\n@@ -225,4 +225,5 @@ public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappi\n }\n }\n }\n+\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/IndexFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -34,7 +34,6 @@\n import org.elasticsearch.search.facet.FacetExecutor;\n import org.elasticsearch.search.facet.FacetParser;\n import org.elasticsearch.search.facet.terms.doubles.TermsDoubleFacetExecutor;\n-import org.elasticsearch.search.facet.terms.index.IndexNameFacetExecutor;\n import org.elasticsearch.search.facet.terms.longs.TermsLongFacetExecutor;\n import org.elasticsearch.search.facet.terms.strings.FieldsTermsStringFacetExecutor;\n import org.elasticsearch.search.facet.terms.strings.ScriptTermsStringFieldFacetExecutor;\n@@ -151,10 +150,6 @@ public FacetExecutor parse(String facetName, XContentParser parser, SearchContex\n }\n }\n \n- if (\"_index\".equals(field)) {\n- return new IndexNameFacetExecutor(context.shardTarget().index(), comparatorType, size);\n- }\n-\n if (fieldsNames != null && fieldsNames.length == 1) {\n field = fieldsNames[0];\n fieldsNames = null;",
"filename": "src/main/java/org/elasticsearch/search/facet/terms/TermsFacetParser.java",
"status": "modified"
},
{
"diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.index.mapper.internal.IndexFieldMapper;\n import org.elasticsearch.index.query.FilterBuilders;\n import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n@@ -1154,4 +1155,27 @@ public void singleValuedField_OrderedByMultiValueExtendedStatsAsc() throws Excep\n }\n \n }\n+\n+ @Test\n+ public void indexMetaField() throws Exception {\n+ SearchResponse response = client().prepareSearch(\"idx\", \"empty_bucket_idx\").setTypes(\"type\")\n+ .addAggregation(terms(\"terms\")\n+ .executionHint(randomExecutionHint())\n+ .field(IndexFieldMapper.NAME)\n+ ).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+ Terms terms = response.getAggregations().get(\"terms\");\n+ assertThat(terms, notNullValue());\n+ assertThat(terms.getName(), equalTo(\"terms\"));\n+ assertThat(terms.getBuckets().size(), equalTo(2));\n+\n+ int i = 0;\n+ for (Terms.Bucket bucket : terms.getBuckets()) {\n+ assertThat(bucket, notNullValue());\n+ assertThat(key(bucket), equalTo(i == 0 ? \"idx\" : \"empty_bucket_idx\"));\n+ assertThat(bucket.getDocCount(), equalTo(i == 0 ? 5L : 2L));\n+ i++;\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/StringTermsTests.java",
"status": "modified"
}
]
} |
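The new IndexIndexFieldData above works because every document in a shard shares a single value, the index name, so the field data can be a constant with effectively zero memory cost. The standalone sketch below illustrates only that idea; it is not the Elasticsearch classes themselves, and all names are made up.

```java
/**
 * Conceptual stand-in for constant per-shard field data: one ordinal, whose
 * value is the index name, for every document.
 */
public class ConstantIndexValues {

    private final String indexName;

    public ConstantIndexValues(String indexName) {
        this.indexName = indexName;
    }

    /** Exactly one value per document. */
    public int setDocument(int docId) {
        return 1;
    }

    /** Ordinal 0 is the only ordinal; it always resolves to the index name. */
    public long nextOrd() {
        return 0L;
    }

    public String valueByOrd(long ord) {
        return indexName;
    }

    public static void main(String[] args) {
        ConstantIndexValues values = new ConstantIndexValues("idx");
        // A terms aggregation over such a source puts every document of the
        // shard into the same bucket, keyed by the index name.
        for (int doc = 0; doc < 3; doc++) {
            values.setDocument(doc);
            System.out.println("doc " + doc + " -> " + values.valueByOrd(values.nextOrd()));
        }
    }
}
```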
{
"body": "When optimize_bbox is enabled for geo_distance filters, it can cause missing results:\n\nhttps://gist.github.com/jtibshirani/1e42809a52be9ac651fc\n\nThis issue occurs on ES 1.1.1, and also in ES 1.0.3 and below, before the upgrade to Lucene 4.7. It seems the distance calculation for the bounding box uses DistanceUnit#getEarthRadius(), which is the radius at the semi-major axis, whereas the actual geo_distance filter uses SloppyMath to do the calculation. In Lucene 4.6 SloppyMath uses the average earth radius, and in 4.7 it averages the radii at the two points.\n\nThe same problem exists for both 'memory' and 'indexed' bounding boxes.\n",
"comments": [
{
"body": "Thanks for the detailed report, I could reproduce the issue and will work on a fix.\n",
"created_at": "2014-05-06T13:19:25Z"
}
],
"number": 6008,
"title": "`optimize_bbox` for geo_distance filters can cause missing results"
} | {
"body": "We switched to Lucene's SloppyMath way of computing an approximate value of\nthe eath diameter given a latitude in order to compute distances, yet the\nbounding box optimization of the geo distance filter still assumed a constant\nearth diameter, equal to the average.\n\nClose #6008\n",
"number": 6065,
"review_comments": [],
"title": "Use GeoUtils.earthDiameter in the bounding box optimization."
} | {
"commits": [
{
"message": "Use GeoUtils.earthDiameter in the bounding box optimization.\n\nWe switched to Lucene's SloppyMath way of computing an approximate value of\nthe eath diameter given a latitude in order to compute distances, yet the\nbounding box optimization of the geo distance filter still assumed a constant\nearth diameter, equal to the average.\n\nClose #6008"
}
],
"files": [
{
"diff": "@@ -140,7 +140,8 @@ public FixedSourceDistance fixedSourceDistance(double sourceLatitude, double sou\n \n public static DistanceBoundingCheck distanceBoundingCheck(double sourceLatitude, double sourceLongitude, double distance, DistanceUnit unit) {\n // angular distance in radians on a great circle\n- double radDist = distance / unit.getEarthRadius();\n+ // assume worst-case: use the minor axis\n+ double radDist = unit.toMeters(distance) / GeoUtils.EARTH_SEMI_MINOR_AXIS;\n \n double radLat = Math.toRadians(sourceLatitude);\n double radLon = Math.toRadians(sourceLongitude);",
"filename": "src/main/java/org/elasticsearch/common/geo/GeoDistance.java",
"status": "modified"
},
{
"diff": "@@ -52,7 +52,7 @@ public class GeoDistanceFilter extends Filter {\n private final IndexGeoPointFieldData indexFieldData;\n \n private final GeoDistance.FixedSourceDistance fixedSourceDistance;\n- private GeoDistance.DistanceBoundingCheck distanceBoundingCheck;\n+ private final GeoDistance.DistanceBoundingCheck distanceBoundingCheck;\n private final Filter boundingBoxFilter;\n \n public GeoDistanceFilter(double lat, double lon, double distance, GeoDistance geoDistance, IndexGeoPointFieldData indexFieldData, GeoPointFieldMapper mapper,\n@@ -64,6 +64,7 @@ public GeoDistanceFilter(double lat, double lon, double distance, GeoDistance ge\n this.indexFieldData = indexFieldData;\n \n this.fixedSourceDistance = geoDistance.fixedSourceDistance(lat, lon, DistanceUnit.DEFAULT);\n+ GeoDistance.DistanceBoundingCheck distanceBoundingCheck = null;\n if (optimizeBbox != null && !\"none\".equals(optimizeBbox)) {\n distanceBoundingCheck = GeoDistance.distanceBoundingCheck(lat, lon, distance, DistanceUnit.DEFAULT);\n if (\"memory\".equals(optimizeBbox)) {\n@@ -78,6 +79,7 @@ public GeoDistanceFilter(double lat, double lon, double distance, GeoDistance ge\n distanceBoundingCheck = GeoDistance.ALWAYS_INSTANCE;\n boundingBoxFilter = null;\n }\n+ this.distanceBoundingCheck = distanceBoundingCheck;\n }\n \n public double lat() {",
"filename": "src/main/java/org/elasticsearch/index/search/geo/GeoDistanceFilter.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,9 @@\n \n package org.elasticsearch.search.geo;\n \n+import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.common.geo.GeoDistance;\n import org.elasticsearch.common.geo.GeoHashUtils;\n import org.elasticsearch.common.unit.DistanceUnit;\n@@ -36,6 +38,9 @@\n import org.junit.Test;\n \n import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.List;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.FilterBuilders.*;\n@@ -633,4 +638,44 @@ public void testGeoDistanceFilter() throws IOException {\n assertHitCount(result, 1);\n } \n \n+ private double randomLon() {\n+ return randomDouble() * 360 - 180;\n+ }\n+\n+ private double randomLat() {\n+ return randomDouble() * 180 - 90;\n+ }\n+\n+ public void testDuelOptimizations() throws Exception {\n+ assertAcked(prepareCreate(\"index\").addMapping(\"type\", \"location\", \"type=geo_point,lat_lon=true\"));\n+ final int numDocs = scaledRandomIntBetween(3000, 10000);\n+ List<IndexRequestBuilder> docs = new ArrayList<>();\n+ for (int i = 0; i < numDocs; ++i) {\n+ docs.add(client().prepareIndex(\"index\", \"type\").setSource(jsonBuilder().startObject().startObject(\"location\").field(\"lat\", randomLat()).field(\"lon\", randomLon()).endObject().endObject()));\n+ }\n+ indexRandom(true, docs);\n+ ensureSearchable();\n+\n+ for (int i = 0; i < 10; ++i) {\n+ final double originLat = randomLat();\n+ final double originLon = randomLon();\n+ final String distance = DistanceUnit.KILOMETERS.toString(randomInt(10000));\n+ for (GeoDistance geoDistance : Arrays.asList(GeoDistance.ARC, GeoDistance.SLOPPY_ARC)) {\n+ logger.info(\"Now testing GeoDistance={}, distance={}, origin=({}, {})\", geoDistance, distance, originLat, originLon);\n+ long matches = -1;\n+ for (String optimizeBbox : Arrays.asList(\"none\", \"memory\", \"indexed\")) {\n+ SearchResponse resp = client().prepareSearch(\"index\").setSearchType(SearchType.COUNT).setQuery(QueryBuilders.constantScoreQuery(\n+ FilterBuilders.geoDistanceFilter(\"location\").point(originLat, originLon).distance(distance).geoDistance(geoDistance).optimizeBbox(optimizeBbox))).execute().actionGet();\n+ assertSearchResponse(resp);\n+ logger.info(\"{} -> {} hits\", optimizeBbox, resp.getHits().totalHits());\n+ if (matches < 0) {\n+ matches = resp.getHits().totalHits();\n+ } else {\n+ assertEquals(matches, resp.getHits().totalHits());\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n }\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/search/geo/GeoDistanceTests.java",
"status": "modified"
}
]
} |
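The one-line fix above converts the filter distance to an angular distance using the WGS84 semi-minor axis, i.e. the smallest possible earth radius, so the derived bounding box can only be slightly too large, never too small. Below is a rough, self-contained sketch of that precomputation; the constant matches the WGS84 value GeoUtils uses, the class and method names are illustrative, and pole/date-line handling is omitted.

```java
/**
 * Simplified bounding-box precomputation around an origin point, using the
 * worst-case (smallest) earth radius so no matching point can fall outside.
 */
public class DistanceBoundingBox {

    // WGS84 semi-minor axis in meters.
    static final double EARTH_SEMI_MINOR_AXIS = 6_356_752.314245;

    final double minLat, maxLat, minLon, maxLon;

    DistanceBoundingBox(double sourceLat, double sourceLon, double distanceMeters) {
        // Angular distance in radians on a great circle.
        double radDist = distanceMeters / EARTH_SEMI_MINOR_AXIS;
        double radLat = Math.toRadians(sourceLat);
        double radLon = Math.toRadians(sourceLon);

        minLat = Math.toDegrees(radLat - radDist);
        maxLat = Math.toDegrees(radLat + radDist);
        // Longitude delta widens with latitude; poles and the date line are
        // not handled in this sketch.
        double deltaLon = Math.asin(Math.sin(radDist) / Math.cos(radLat));
        minLon = Math.toDegrees(radLon - deltaLon);
        maxLon = Math.toDegrees(radLon + deltaLon);
    }

    boolean maybeWithin(double lat, double lon) {
        return lat >= minLat && lat <= maxLat && lon >= minLon && lon <= maxLon;
    }

    public static void main(String[] args) {
        DistanceBoundingBox box = new DistanceBoundingBox(48.8566, 2.3522, 10_000);
        System.out.println(box.maybeWithin(48.86, 2.36));  // inside the 10 km box
        System.out.println(box.maybeWithin(40.0, -74.0));  // clearly outside
    }
}
```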
{
"body": "Using the default analyzer:\n\n```\nGET /_analyze?text=The fox\n```\n\nRemoves stopwords:\n\n```\n{\n \"tokens\": [\n {\n \"token\": \"fox\",\n \"start_offset\": 4,\n \"end_offset\": 7,\n \"type\": \"<ALPHANUM>\",\n \"position\": 2\n }\n ]\n}\n```\n\nUsing the `standard` analyzer:\n\n```\nGET /_analyze?text=The fox&analyzer=standard\n```\n\nKeeps stopwords:\n\n```\n{\n \"tokens\": [\n {\n \"token\": \"the\",\n \"start_offset\": 0,\n \"end_offset\": 3,\n \"type\": \"<ALPHANUM>\",\n \"position\": 1\n },\n {\n \"token\": \"fox\",\n \"start_offset\": 4,\n \"end_offset\": 7,\n \"type\": \"<ALPHANUM>\",\n \"position\": 2\n }\n ]\n}\n```\n",
"comments": [
{
"body": "@clintongormley \n\nBased on #4092 and the below snippet, it appears that the correct behavior is to not remove stop words for versions on or after 1.0.0.Beta1.\n\n```\nSTANDARD(CachingStrategy.ELASTICSEARCH) { // we don't do stopwords anymore from 1.0Beta on\n @Override\n protected Analyzer create(Version version) {\n if (version.onOrAfter(Version.V_1_0_0_Beta1)) {\n return new StandardAnalyzer(version.luceneVersion, CharArraySet.EMPTY_SET);\n }\n return new StandardAnalyzer(version.luceneVersion);\n }\n}\n```\n\nThe inconsistency here is due to the fact that when an analyzer name isn't specified in the query params, the analyzer isn't being resolved from the pre-built analyzers, but instead from Lucene.STANDARD_ANALYZER which is configured to use stop words.\n\nPerhaps it should be resolved from the pre-built analyzers instead.\n",
"created_at": "2014-05-04T19:14:03Z"
},
{
"body": "I guess we should port this to `1.1.2` as well @spinscale \n",
"created_at": "2014-05-18T10:06:09Z"
},
{
"body": "done\n",
"created_at": "2014-05-18T15:58:02Z"
}
],
"number": 5974,
"title": "Analysis: Default analyzer includes stopwords"
} | {
"body": "The analyze API used the standard analyzer from lucene and therefore removed\nstopwords instead of using the elasticsearch default analyzer.\n\nCloses #5974\n",
"number": 6043,
"review_comments": [],
"title": "Analyze API: Default analyzer accidentally removed stopwords"
} | {
"commits": [
{
"message": "Analyze API: Default analyzer accidentally removed stopwords\n\nThe analyze API used the standard analyzer from lucene and therefore removed\nstopwords instead of using the elasticsearch default analyzer.\n\nCloses #5974"
}
],
"files": [
{
"diff": "@@ -34,7 +34,6 @@\n import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.routing.ShardsIterator;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.analysis.*;\n@@ -213,7 +212,7 @@ protected AnalyzeResponse shardOperation(AnalyzeRequest request, int shardId) th\n closeAnalyzer = true;\n } else if (analyzer == null) {\n if (indexService == null) {\n- analyzer = Lucene.STANDARD_ANALYZER;\n+ analyzer = indicesAnalysisService.analyzer(\"standard\");\n } else {\n analyzer = indexService.analysisService().defaultIndexAnalyzer();\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java",
"status": "modified"
},
{
"diff": "@@ -31,6 +31,8 @@\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.hasSize;\n+import static org.hamcrest.Matchers.is;\n \n /**\n *\n@@ -172,4 +174,23 @@ public void analyzerWithFieldOrTypeTests() throws Exception {\n assertThat(token.getEndOffset(), equalTo(14));\n }\n }\n+\n+ @Test // issue #5974\n+ public void testThatStandardAndDefaultAnalyzersAreSame() throws Exception {\n+ AnalyzeResponse response = client().admin().indices().prepareAnalyze(\"this is a test\").setAnalyzer(\"standard\").get();\n+ assertTokens(response, \"this\", \"is\", \"a\", \"test\");\n+\n+ response = client().admin().indices().prepareAnalyze(\"this is a test\").setAnalyzer(\"default\").get();\n+ assertTokens(response, \"this\", \"is\", \"a\", \"test\");\n+\n+ response = client().admin().indices().prepareAnalyze(\"this is a test\").get();\n+ assertTokens(response, \"this\", \"is\", \"a\", \"test\");\n+ }\n+\n+ private void assertTokens(AnalyzeResponse response, String ... tokens) {\n+ assertThat(response.getTokens(), hasSize(tokens.length));\n+ for (int i = 0; i < tokens.length; i++) {\n+ assertThat(response.getTokens().get(i).getTerm(), is(tokens[i]));\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/indices/analyze/AnalyzeActionTests.java",
"status": "modified"
}
]
} |
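The added test boils down to this: after the fix, analyzing text with no analyzer, with "default", and with "standard" should produce the same tokens, stopwords included. The sketch below is a hedged client-side variant of that check against the 1.x Java API, using only the calls that appear in the test above; the class and helper names are placeholders, and a real Client instance has to be supplied by the caller.

```java
import org.elasticsearch.action.admin.indices.analyze.AnalyzeResponse;
import org.elasticsearch.client.Client;

/**
 * Prints the tokens produced for the same text with no analyzer, the "default"
 * analyzer, and the "standard" analyzer; after the fix all three should agree.
 */
public class AnalyzeDefaultVsStandard {

    static void printTokens(Client client, String analyzer) {
        AnalyzeResponse response = analyzer == null
                ? client.admin().indices().prepareAnalyze("this is a test").get()
                : client.admin().indices().prepareAnalyze("this is a test").setAnalyzer(analyzer).get();
        StringBuilder tokens = new StringBuilder();
        for (int i = 0; i < response.getTokens().size(); i++) {
            tokens.append(response.getTokens().get(i).getTerm()).append(' ');
        }
        // Expected for all three cases after the fix: "this is a test"
        System.out.println((analyzer == null ? "(none)" : analyzer) + " -> " + tokens);
    }

    static void compare(Client client) {
        printTokens(client, null);
        printTokens(client, "default");
        printTokens(client, "standard");
    }
}
```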
{
"body": "As of commit 705c7e2469546fcb241f119570265a76262eac75 running a scroll request on a bad scroll ID no longer returns a 500 request error. Instead, each shard returns a failure but the overall request is a 200 OK.\n\n/cc @kimchy \n",
"comments": [
{
"body": "@clintongormley wondering why we don't respond with a 400 or even a 404 when the scroll id does not exist (contains out of date information) ?\n",
"created_at": "2014-04-08T12:02:02Z"
},
{
"body": "We used to respond with a 500, but maybe a 404 would be more RESTful\n",
"created_at": "2014-04-08T16:24:08Z"
},
{
"body": "+1 on the 404 or 400. \n",
"created_at": "2014-04-08T17:29:45Z"
},
{
"body": "500 represents an internal server error, so anything in the 400 range (404 in particular representing the resource was not found) would be a lot better, thus indicating to the end user that the problem was on their end (input), rather than putting the blame onto Elasticsearch.\n\nSo +1 to HTTP `404`\n",
"created_at": "2014-04-11T00:06:58Z"
}
],
"number": 5729,
"title": "Missing scroll ID no longer returns exception"
} | {
"body": "A bad/non-existing scroll ID used to return a 200, however a 404 might be more useful.\nAlso, this PR returns the right Exception (SearchContextMissingException) in the Java API.\n\nTODO: I need some more feedback for this PR. I think this solution is not optimal and there might be a better option (checking the exception type twice is not what I want to do, I do this in order to return a real exception instead of a shard failure, but it is not a good solution codewise and I think there are better solutions which I do not see at the moment).\n\nCloses #5729\n",
"number": 6040,
"review_comments": [],
"title": "Missing scroll id now returns 404"
} | {
"commits": [
{
"message": "[REST] Missing scroll id now returns 404\n\nA bad/non-existing scroll ID used to return a 200, however a 404 might be more useful.\nAlso, this PR returns the right Exception (SearchContextMissingException) in the Java API.\n\nAdditionally: Added StatusToXContent interface and RestStatusToXContentListener listener, so\nthe appropriate RestStatus can be returned\n\nCloses #5729"
}
],
"files": [
{
"diff": "@@ -29,7 +29,6 @@\n scroll_id: $scroll_id1\n \n - do:\n+ catch: missing\n scroll:\n scroll_id: $scroll_id1\n-\n- - length: {hits.hits: 0}",
"filename": "rest-api-spec/test/scroll/11_clear.yaml",
"status": "modified"
},
{
"diff": "@@ -23,10 +23,7 @@\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.xcontent.ToXContent;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentBuilderString;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.*;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.search.SearchHits;\n import org.elasticsearch.search.aggregations.Aggregations;\n@@ -42,7 +39,7 @@\n /**\n * A response of a search request.\n */\n-public class SearchResponse extends ActionResponse implements ToXContent {\n+public class SearchResponse extends ActionResponse implements StatusToXContent {\n \n private InternalSearchResponse internalResponse;\n ",
"filename": "src/main/java/org/elasticsearch/action/search/SearchResponse.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,8 @@\n \n import com.carrotsearch.hppc.IntArrayList;\n import org.apache.lucene.search.ScoreDoc;\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.search.*;\n import org.elasticsearch.cluster.ClusterService;\n@@ -31,6 +33,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.util.concurrent.AtomicArray;\n+import org.elasticsearch.search.SearchContextMissingException;\n import org.elasticsearch.search.action.SearchServiceListener;\n import org.elasticsearch.search.action.SearchServiceTransportAction;\n import org.elasticsearch.search.controller.SearchPhaseController;\n@@ -171,7 +174,12 @@ public void onResult(QuerySearchResult result) {\n \n @Override\n public void onFailure(Throwable t) {\n- onQueryPhaseFailure(shardIndex, counter, searchId, t);\n+ Throwable cause = ExceptionsHelper.unwrapCause(t);\n+ if (cause instanceof SearchContextMissingException) {\n+ listener.onFailure(t);\n+ } else {\n+ onQueryPhaseFailure(shardIndex, counter, searchId, t);\n+ }\n }\n });\n }",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchScrollQueryThenFetchAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,32 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.common.xcontent;\n+\n+import org.elasticsearch.rest.RestStatus;\n+\n+/**\n+ *\n+ */\n+public interface StatusToXContent extends ToXContent {\n+\n+ /**\n+ * Returns the REST status to make sure it is returned correctly\n+ */\n+ RestStatus status();\n+}",
"filename": "src/main/java/org/elasticsearch/common/xcontent/StatusToXContent.java",
"status": "added"
},
{
"diff": "@@ -33,6 +33,7 @@\n import org.elasticsearch.rest.RestChannel;\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n+import org.elasticsearch.rest.action.support.RestStatusToXContentListener;\n import org.elasticsearch.rest.action.support.RestToXContentListener;\n import org.elasticsearch.search.Scroll;\n import org.elasticsearch.search.builder.SearchSourceBuilder;\n@@ -71,7 +72,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel)\n SearchRequest searchRequest;\n searchRequest = RestSearchAction.parseSearchRequest(request);\n searchRequest.listenerThreaded(false);\n- client.search(searchRequest, new RestToXContentListener<SearchResponse>(channel));\n+ client.search(searchRequest, new RestStatusToXContentListener<SearchResponse>(channel));\n }\n \n public static SearchRequest parseSearchRequest(RestRequest request) {",
"filename": "src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.rest.action.search;\n \n-import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.SearchScrollRequest;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.inject.Inject;\n@@ -29,7 +28,7 @@\n import org.elasticsearch.rest.RestController;\n import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.rest.action.support.RestActions;\n-import org.elasticsearch.rest.action.support.RestToXContentListener;\n+import org.elasticsearch.rest.action.support.RestStatusToXContentListener;\n import org.elasticsearch.search.Scroll;\n \n import static org.elasticsearch.common.unit.TimeValue.parseTimeValue;\n@@ -63,6 +62,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel)\n if (scroll != null) {\n searchScrollRequest.scroll(new Scroll(parseTimeValue(scroll, null)));\n }\n- client.searchScroll(searchScrollRequest, new RestToXContentListener<SearchResponse>(channel));\n+\n+ client.searchScroll(searchScrollRequest, new RestStatusToXContentListener(channel));\n }\n }",
"filename": "src/main/java/org/elasticsearch/rest/action/search/RestSearchScrollAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,48 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.rest.action.support;\n+\n+import org.elasticsearch.common.xcontent.StatusToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.rest.BytesRestResponse;\n+import org.elasticsearch.rest.RestChannel;\n+import org.elasticsearch.rest.RestResponse;\n+\n+/**\n+ *\n+ */\n+public class RestStatusToXContentListener<Response extends StatusToXContent> extends RestResponseListener<Response> {\n+\n+ public RestStatusToXContentListener(RestChannel channel) {\n+ super(channel);\n+ }\n+\n+ @Override\n+ public final RestResponse buildResponse(Response response) throws Exception {\n+ return buildResponse(response, channel.newBuilder());\n+ }\n+\n+ public final RestResponse buildResponse(Response response, XContentBuilder builder) throws Exception {\n+ builder.startObject();\n+ response.toXContent(builder, channel.request());\n+ builder.endObject();\n+ return new BytesRestResponse(response.status(), builder);\n+ }\n+\n+}",
"filename": "src/main/java/org/elasticsearch/rest/action/support/RestStatusToXContentListener.java",
"status": "added"
},
{
"diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search;\n \n import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.rest.RestStatus;\n \n /**\n *\n@@ -36,4 +37,9 @@ public SearchContextMissingException(long id) {\n public long id() {\n return this.id;\n }\n+\n+ @Override\n+ public RestStatus status() {\n+ return RestStatus.NOT_FOUND;\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/search/SearchContextMissingException.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.scroll;\n \n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.search.ClearScrollResponse;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n@@ -29,6 +30,8 @@\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.UncategorizedExecutionException;\n import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.rest.RestStatus;\n+import org.elasticsearch.search.SearchContextMissingException;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n@@ -39,6 +42,7 @@\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThrows;\n import static org.hamcrest.Matchers.*;\n \n /**\n@@ -287,19 +291,8 @@ public void testSimpleScrollQueryThenFetch_clearScrollIds() throws Exception {\n .execute().actionGet();\n assertThat(clearResponse.isSucceeded(), equalTo(true));\n \n- searchResponse1 = client().prepareSearchScroll(searchResponse1.getScrollId())\n- .setScroll(TimeValue.timeValueMinutes(2))\n- .execute().actionGet();\n-\n- searchResponse2 = client().prepareSearchScroll(searchResponse2.getScrollId())\n- .setScroll(TimeValue.timeValueMinutes(2))\n- .execute().actionGet();\n-\n- assertThat(searchResponse1.getHits().getTotalHits(), equalTo(0l));\n- assertThat(searchResponse1.getHits().hits().length, equalTo(0));\n-\n- assertThat(searchResponse2.getHits().getTotalHits(), equalTo(0l));\n- assertThat(searchResponse2.getHits().hits().length, equalTo(0));\n+ assertThrows(client().prepareSearchScroll(searchResponse1.getScrollId()).setScroll(TimeValue.timeValueMinutes(2)), SearchContextMissingException.class);\n+ assertThrows(client().prepareSearchScroll(searchResponse2.getScrollId()).setScroll(TimeValue.timeValueMinutes(2)), SearchContextMissingException.class);\n }\n \n @Test\n@@ -404,19 +397,8 @@ public void testSimpleScrollQueryThenFetch_clearAllScrollIds() throws Exception\n .execute().actionGet();\n assertThat(clearResponse.isSucceeded(), equalTo(true));\n \n- searchResponse1 = client().prepareSearchScroll(searchResponse1.getScrollId())\n- .setScroll(TimeValue.timeValueMinutes(2))\n- .execute().actionGet();\n-\n- searchResponse2 = client().prepareSearchScroll(searchResponse2.getScrollId())\n- .setScroll(TimeValue.timeValueMinutes(2))\n- .execute().actionGet();\n-\n- assertThat(searchResponse1.getHits().getTotalHits(), equalTo(0l));\n- assertThat(searchResponse1.getHits().hits().length, equalTo(0));\n-\n- assertThat(searchResponse2.getHits().getTotalHits(), equalTo(0l));\n- assertThat(searchResponse2.getHits().hits().length, equalTo(0));\n+ assertThrows(cluster().transportClient().prepareSearchScroll(searchResponse1.getScrollId()).setScroll(TimeValue.timeValueMinutes(2)), SearchContextMissingException.class);\n+ assertThrows(cluster().transportClient().prepareSearchScroll(searchResponse2.getScrollId()).setScroll(TimeValue.timeValueMinutes(2)), SearchContextMissingException.class);\n }\n \n @Test\n@@ -447,7 +429,24 @@ public void testDeepPaginationWithOneDocIndexAndDoNotBlowUp() throws Exception {\n }\n }\n }\n-\n }\n \n+ @Test\n+ public void testThatNonExistingScrollIdReturnsCorrectException() throws 
Exception {\n+ client().prepareIndex(\"index\", \"type\", \"1\").setSource(\"field\", \"value\").execute().get();\n+ refresh();\n+\n+ try {\n+ SearchResponse searchResponse = client().prepareSearch(\"index\").setSize(1).setScroll(\"1m\").get();\n+ assertThat(searchResponse.getScrollId(), is(notNullValue()));\n+\n+ ClearScrollResponse clearScrollResponse = client().prepareClearScroll().addScrollId(searchResponse.getScrollId()).get();\n+ assertThat(clearScrollResponse.isSucceeded(), is(true));\n+\n+ cluster().transportClient().prepareSearchScroll(searchResponse.getScrollId()).get();\n+ fail(\"Expected exception to happen due to non-existing scroll id\");\n+ } catch (ElasticsearchException e) {\n+ assertThat(e.status(), is(RestStatus.NOT_FOUND));\n+ }\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/search/scroll/SearchScrollTests.java",
"status": "modified"
}
]
} |
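For callers, the practical effect of this change is that continuing a scroll whose context was cleared or has expired now surfaces an exception with a 404 status instead of quietly returning empty hits. The sketch below shows how client code might react, using the same 1.x Java API calls as the test above; the class and method names are illustrative.

```java
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.rest.RestStatus;

/**
 * Continues a scroll and distinguishes a missing scroll context (404) from
 * other failures.
 */
public class ScrollNotFoundHandling {

    static void continueScroll(Client client, String scrollId) {
        try {
            SearchResponse page = client.prepareSearchScroll(scrollId)
                    .setScroll(TimeValue.timeValueMinutes(2))
                    .get();
            System.out.println("next page hits: " + page.getHits().hits().length);
        } catch (ElasticsearchException e) {
            if (e.status() == RestStatus.NOT_FOUND) {
                // The scroll context is gone (cleared or expired); restart the search.
                System.out.println("scroll id no longer valid, restarting");
            } else {
                throw e; // keep genuine errors visible
            }
        }
    }
}
```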
{
"body": "I initially wanted to make the diff minimal but this ended up being quite complicated\nso I finally refactored a bit the way shards are randomized. Yet, it uses the same logic:\n- rotations to shuffle shards,\n- an AtomicInteger to generate the distances to use for the rotations.\n\nClose #5559\n",
"comments": [
{
"body": "this looks great - I left some commetns\n",
"created_at": "2014-03-27T09:35:05Z"
},
{
"body": "@s1monw I pushed a new commit\n",
"created_at": "2014-03-27T13:23:22Z"
},
{
"body": "LGTM\n",
"created_at": "2014-03-27T13:31:27Z"
},
{
"body": "Thanks Simon.\n",
"created_at": "2014-03-27T13:49:52Z"
}
],
"number": 5561,
"title": "Fix IndexShardRoutingTable's shard randomization to not throw out-of-bounds exceptions."
} | {
"body": "Change #5561 introduced a potential bug in that iterations that are performed\non a thread are might not be visible to other threads due to the removal of the\n`volatile` keyword.\n",
"number": 6039,
"review_comments": [],
"title": "Restore read/write visibility in PlainShardsIterator."
} | {
"commits": [
{
"message": "Restore read/write visibility is PlainShardsIterator.\n\nChange #5561 introduced a potential bug in that iterations that are performed\non a thread are might not be visible to other threads due to the removal of the\n`volatile` keyword.\n\nClose #6039"
}
],
"files": [
{
"diff": "@@ -19,7 +19,6 @@\n package org.elasticsearch.cluster.routing;\n \n import java.util.List;\n-import java.util.ListIterator;\n \n /**\n * A simple {@link ShardsIterator} that iterates a list or sub-list of\n@@ -29,21 +28,24 @@ public class PlainShardsIterator implements ShardsIterator {\n \n private final List<ShardRouting> shards;\n \n- private ListIterator<ShardRouting> iterator;\n+ // Calls to nextOrNull might be performed on different threads in the transport actions so we need the volatile\n+ // keyword in order to ensure visibility. Note that it is fine to use `volatile` for a counter in that case given\n+ // that although nextOrNull might be called from different threads, it can never happen concurrently.\n+ private volatile int index;\n \n public PlainShardsIterator(List<ShardRouting> shards) {\n this.shards = shards;\n- this.iterator = shards.listIterator();\n+ reset();\n }\n \n @Override\n public void reset() {\n- iterator = shards.listIterator();\n+ index = 0;\n }\n \n @Override\n public int remaining() {\n- return shards.size() - iterator.nextIndex();\n+ return shards.size() - index;\n }\n \n @Override\n@@ -56,10 +58,10 @@ public ShardRouting firstOrNull() {\n \n @Override\n public ShardRouting nextOrNull() {\n- if (iterator.hasNext()) {\n- return iterator.next();\n- } else {\n+ if (index == shards.size()) {\n return null;\n+ } else {\n+ return shards.get(index++);\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/cluster/routing/PlainShardsIterator.java",
"status": "modified"
}
]
} |
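The comment added in the diff captures the pattern: the cursor may be advanced by different threads over the request's lifetime, but never by two threads at once, so a plain volatile int gives the needed visibility without atomic increments. The standalone illustration below shows that hand-off pattern; it is not the PlainShardsIterator class itself.

```java
import java.util.Arrays;
import java.util.List;

/**
 * Cursor-based iterator whose calls may hop between threads but never overlap.
 */
public class HandOffIterator<T> {

    private final List<T> items;

    // volatile: each call must see the index written by the previous call,
    // which may have happened on another thread.
    private volatile int index = 0;

    public HandOffIterator(List<T> items) {
        this.items = items;
    }

    public T nextOrNull() {
        if (index == items.size()) {
            return null;
        }
        return items.get(index++); // safe only because calls never run concurrently
    }

    public int remaining() {
        return items.size() - index;
    }

    public static void main(String[] args) {
        HandOffIterator<String> it = new HandOffIterator<>(Arrays.asList("shard0", "shard1"));
        System.out.println(it.nextOrNull()); // shard0
        System.out.println(it.nextOrNull()); // shard1
        System.out.println(it.nextOrNull()); // null
    }
}
```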
{
"body": "im getting this error logged after executing many multisearch requests concurrently (snippet below for single multisearch):\n\n``` java\nMultiSearchRequestBuilder builder = _client.prepareMultiSearch();\nfor (final String index: indexes)\n{\n final SearchRequestBuilder request = _client.prepareSearch(index).setPreference(\"_local\");\n request.setQuery(QueryBuilders.termQuery(\"group_id\", id)).setSize(100).setTimeout(\"10s\");\n builder.add(request);\n}\n\nfinal MultiSearchResponse response = builder.get();\n```\n\nexception call stack:\n\n```\norg.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 1000) on org.elasticsearch.acti\n at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)\n at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)\n at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.ja\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.ja\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.ja\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.start(TransportSearchTypeAction.java:190)\n at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:59\n at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:49\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)\n at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:108)\n at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:43)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)\n at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:63)\n at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:39)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)\n at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:92)\n at org.elasticsearch.client.support.AbstractClient.multiSearch(AbstractClient.java:242)\n at org.elasticsearch.action.search.MultiSearchRequestBuilder.doExecute(MultiSearchRequestBuilder.java:79)\n at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)\n at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)\n at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:67)\n...\n```\n\nthe worst thing about this issue is that it hangs 'forever' on **MultiSearchRequestBuilder .get()** method (call stack of hanging thread below):\n\n```\nsun.misc.Unsafe.park(Native 
Method)\njava.util.concurrent.locks.LockSupport.park(LockSupport.java:186)\njava.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)\njava.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)\njava.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)\norg.elasticsearch.common.util.concurrent.BaseFuture$Sync.get(BaseFuture.java:274)\norg.elasticsearch.common.util.concurrent.BaseFuture.get(BaseFuture.java:113)\norg.elasticsearch.action.support.AdapterActionFuture.actionGet(AdapterActionFuture.java:45)\norg.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:67)\n...\n```\n\ncontext:\n- es version 1.0.0.RC1\n- 3 node cluster\n- 20 indexes (around 2,000,000 documents per index)\n- 12 shards per index\n- 2 replicas\n",
"comments": [
{
"body": "which version are you running?\n",
"created_at": "2014-01-25T03:05:58Z"
},
{
"body": "also, can you paste the full log of the failure, it is cut off.\n",
"created_at": "2014-01-25T03:07:05Z"
},
{
"body": "side note, the rejections are expected, we have a limit on the search thread pool (3x cores), with a limited queue size (1000), so if you will overload (3x cores + 1000), requests will start to get rejected. This is a good thing, since it will make sure the servers are not being overloaded. The fact that its gets stuck, thats weird (And the logs + version would help).\n",
"created_at": "2014-01-25T03:14:19Z"
},
{
"body": "full log:\n\n```\n[2014-01-25 02:07:39,613][DEBUG][action.search.type ] [<node name>] [<index name>][2], node[WTCscW1_R7uA7juvJ1lacg], [R], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@46d6608e] lastShard [true]\norg.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 1000) on org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$4@3bdb7cad\n at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)\n at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)\n at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:289)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:296)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:296)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.start(TransportSearchTypeAction.java:190)\n at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:59)\n at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:49)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)\n at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:108)\n at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:43)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)\n at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:63)\n at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:39)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)\n at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:92)\n at org.elasticsearch.client.support.AbstractClient.multiSearch(AbstractClient.java:242)\n at org.elasticsearch.action.search.MultiSearchRequestBuilder.doExecute(MultiSearchRequestBuilder.java:79)\n at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)\n at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)\n at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:67)\n ... 
concealed ...\n at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)\n at java.util.concurrent.FutureTask.run(FutureTask.java:262)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\n\n```\n\nand im on version 1.0.0.RC1\n\nalso i started seeing this problem after switching my code from using post filter to using query, so the code:\n\n```\nMultiSearchRequestBuilder builder = _client.prepareMultiSearch();\nfor (final String index: indexes)\n{\n final SearchRequestBuilder request = _client.prepareSearch(index).setPreference(\"_local\");\n request.setPostFilter(FilterBuilders.termFilter(\"group_id\", id)).setSize(100).setTimeout(\"10s\");\n builder.add(request);\n}\n```\n\nwas working fine\n",
"created_at": "2014-01-25T03:29:38Z"
},
{
"body": "Hi Shay, I'm running 2 bulk requests (each with 420 requests, but on ES queue that translate into more than 1000 requests) and getting the same exception for the second bulk. That worked fine when running on version 0.90.2, but now that I've upgraded to 0.90.10 I'm getting the exception. Was there a change in the queue size or reject behavior since 0.90.2?\n\nnote: the request doesn't hang.\n\nThanks in advanced,\n",
"created_at": "2014-02-11T14:20:15Z"
},
{
"body": "I am facing the exact same issue.. so what is the solution or workaround??\n",
"created_at": "2014-05-01T22:30:02Z"
},
{
"body": "@orenorg yes, the defaults were added post 0.90.2 to set the queue size to make sure the server doesn't get overloaded\n\n@tvinod if you get rejected failures, then you need to add more capacity to your cluster if you expect it to handle such load. Overloading it without protection will just cause it to fall over.\n\n@karol-gwaj sorry to get back late, but did you try to upgrade to latest version and see if it got solved?\n",
"created_at": "2014-05-01T22:33:29Z"
},
{
"body": "Thanks @kimchy . I understand the rejected failures exception. My issue is that my java call on the client side hangs forever. that shouldn't happen. it should either return an error or throw an exception. im on the latest 1.1.0 version.\n",
"created_at": "2014-05-01T22:36:40Z"
},
{
"body": "@tvinod if it hangs then its a problem, is there a chance that you can write a repro for this? we will try and repro it as well again...\n",
"created_at": "2014-05-01T22:37:35Z"
},
{
"body": "i have a 100% repro but i think its because of my setup, amount of data i\nhave in ES and the client pattern.. i can try my best to see if i can write\na repro for you.\nbut if there is any instrumentation that you want me to do, either on\nclient side or ES config side, i can do it.\n\nlet me know.\n\nOn Thu, May 1, 2014 at 3:38 PM, Shay Banon notifications@github.com wrote:\n\n> @tvinod https://github.com/tvinod if it hangs then its a problem, is\n> there a chance that you can write a repro for this? we will try and repro\n> it as well again...\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/4887#issuecomment-41965054\n> .\n",
"created_at": "2014-05-01T22:43:45Z"
},
{
"body": "@tvinod its tricky with instrumentation, because of the async nature of ES..., I wrote a very simple program that continuously simulates rejections and it doesn't seem to get stuck, so a repro (you can mail me privately) would go a long way to help solve this.\n",
"created_at": "2014-05-01T22:46:07Z"
},
{
"body": "ok, ill let you know when i have something for you..\n\nbut if it helps - in my case, it hangs when the number of requests in the\nmultisearch is 26. its not a magic number, it just happens to be in my case.\n\nthanks\n\nOn Thu, May 1, 2014 at 3:46 PM, Shay Banon notifications@github.com wrote:\n\n> @tvinod https://github.com/tvinod its tricky with instrumentation,\n> because of the async nature of ES..., I wrote a very simple program that\n> continuously simulates rejections and it doesn't seem to get stuck, so a\n> repro (you can mail me privately) would go a long way to help solve this.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/4887#issuecomment-41965689\n> .\n",
"created_at": "2014-05-01T23:11:43Z"
},
{
"body": "I believe I managed to recreate it, tricky... . Hold on the repro for now, will continue to work on it.\n",
"created_at": "2014-05-01T23:35:35Z"
},
{
"body": "I managed to recreate it under certain conditions (very tricky), this is similar in nature to #4526, and it happens because the rejection exception happens on the calling thread, so its not on a forked thread. I will think about how this can be solved, we should fix this case cleanly in ES, but thats a biggish refactoring potentially, will update this issue...\n",
"created_at": "2014-05-01T23:51:28Z"
},
{
"body": "I finally found the problem (see pull request above), it didn't relate to #4526 at the end, but wrong management of how we iterate over the shard iterator between copies of the same shard.\n",
"created_at": "2014-05-04T01:30:41Z"
},
{
"body": "great, any ETA on the fix? whats the best way to get it..\nthanks\n\nOn Sat, May 3, 2014 at 6:31 PM, Shay Banon notifications@github.com wrote:\n\n> I finally found the problem (see pull request above), it didn't relate to\n> #4526 https://github.com/elasticsearch/elasticsearch/issues/4526 at the\n> end, but wrong management of how we iterate over the shard iterator between\n> copies of the same shard.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/4887#issuecomment-42121255\n> .\n",
"created_at": "2014-05-05T06:46:45Z"
},
{
"body": "I'm stuck on this now also. I'm running 1.1.1 ES server and 1.0.2 java client. It just hangs in the client on actionGet(). Is this a server or client bug? It was working but when I increased the amount I'm bulking, the problem started. Any ETA?\n",
"created_at": "2014-05-13T19:45:20Z"
},
{
"body": "I got this too, any solution? I'm running es 1.4.0 and java 1.7\n",
"created_at": "2014-12-25T05:26:33Z"
},
{
"body": "@situ2011 this was fixed in 1.1.2. If you're seeing something similar please open a new issue with all necessary details.\n",
"created_at": "2014-12-29T10:30:33Z"
}
],
"number": 4887,
"title": "MultiSearch hangs forever + EsRejectedExecutionException"
} | {
"body": "When a thread pool rejects the execution on the local node, the search might not return.\nThis happens due to the fact that we move to the next shard only _within_ the execution on the thread pool in the start method. If it fails to submit the task to the thread pool, it will go through the fail shard logic, but without \"counting\" the current shard itself. When this happens, the relevant shard will then execute more times than intended, causing the total opes counter to skew, and for example, if on another shard the search is successful, the total ops will be incremented _beyond_ the expectedTotalOps, causing the check on == as the exit condition to never happen.\nThe fix here makes sure that the shard iterator properly progresses even in the case of rejections, and also includes improvement to when cleaning a context is sent in case of failures (which were exposed by the test).\nThough the change fixes the problem, we should work on simplifying the code path considerably, the first suggestion as a followup is to remove the support for operation threading (also in broadcast), and move the local optimization execution to SearchService, this will simplify the code in different search action considerably, and will allow to remove the problematic #firstOrNull method on the shard iterator.\nThe second suggestion is to move the optimization of local execution to the TransportService, so all actions will not have to explicitly do the mentioned optimization.\nfixes #4887\n",
"number": 6032,
"review_comments": [
{
"body": "maybe put the for loop and onFailure call into a try final block? \n",
"created_at": "2014-05-04T16:56:56Z"
},
{
"body": "I didn't see a reason where this will require a try .., finally, since the calls to release the context are properly protected. I think its enough?\n",
"created_at": "2014-05-04T17:21:23Z"
}
],
"title": "Search might not return on thread pool rejection"
} | {
"commits": [
{
"message": "Search might not return on thread pool rejection\nWhen a thread pool rejects the execution on the local node, the search might not return.\nThis happens due to the fact that we move to the next shard only *within* the execution on the thread pool in the start method. If it fails to submit the task to the thread pool, it will go through the fail shard logic, but without \"counting\" the current shard itself. When this happens, the relevant shard will then execute more times than intended, causing the total opes counter to skew, and for example, if on another shard the search is successful, the total ops will be incremented *beyond* the expectedTotalOps, causing the check on == as the exit condition to never happen.\nThe fix here makes sure that the shard iterator properly progresses even in the case of rejections, and also includes improvement to when cleaning a context is sent in case of failures (which were exposed by the test).\nThough the change fixes the problem, we should work on simplifying the code path considerably, the first suggestion as a followup is to remove the support for operation threading (also in broadcast), and move the local optimization execution to SearchService, this will simplify the code in different search action considerably, and will allow to remove the problematic #firstOrNull method on the shard iterator.\nThe second suggestion is to move the optimization of local execution to the TransportService, so all actions will not have to explicitly do the mentioned optimization.\nfixes #4887"
}
],
"files": [
{
"diff": "@@ -141,6 +141,7 @@ public void run() {\n executeFetch(entry.index, queryResult.shardTarget(), counter, fetchSearchRequest, node);\n }\n } catch (Throwable t) {\n+ docIdsToLoad.set(entry.index, null); // clear it, we didn't manage to do anything with it\n onFetchFailure(t, fetchSearchRequest, entry.index, queryResult.shardTarget(), counter);\n }\n }\n@@ -162,6 +163,8 @@ public void onResult(FetchSearchResult result) {\n \n @Override\n public void onFailure(Throwable t) {\n+ // the failure might happen without managing to clear the search context..., potentially need to clear its context (for example)\n+ docIdsToLoad.set(shardIndex, null);\n onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter);\n }\n });",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchQueryThenFetchAction.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.carrotsearch.hppc.IntArrayList;\n import org.apache.lucene.search.ScoreDoc;\n+import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.NoShardAvailableActionException;\n import org.elasticsearch.action.search.*;\n@@ -155,7 +156,7 @@ public void start() {\n localOperations++;\n } else {\n // do the remote operation here, the localAsync flag is not relevant\n- performFirstPhase(shardIndex, shardIt);\n+ performFirstPhase(shardIndex, shardIt, shardIt.nextOrNull());\n }\n } else {\n // really, no shards active in this group\n@@ -175,7 +176,7 @@ public void run() {\n final ShardRouting shard = shardIt.firstOrNull();\n if (shard != null) {\n if (shard.currentNodeId().equals(nodes.localNodeId())) {\n- performFirstPhase(shardIndex, shardIt);\n+ performFirstPhase(shardIndex, shardIt, shardIt.nextOrNull());\n }\n }\n }\n@@ -190,22 +191,23 @@ public void run() {\n for (final ShardIterator shardIt : shardsIts) {\n shardIndex++;\n final int fShardIndex = shardIndex;\n- final ShardRouting shard = shardIt.firstOrNull();\n- if (shard != null) {\n- if (shard.currentNodeId().equals(nodes.localNodeId())) {\n+ ShardRouting first = shardIt.firstOrNull();\n+ if (first != null) {\n+ if (first.currentNodeId().equals(nodes.localNodeId())) {\n+ final ShardRouting shard = shardIt.nextOrNull();\n if (localAsync) {\n try {\n threadPool.executor(ThreadPool.Names.SEARCH).execute(new Runnable() {\n @Override\n public void run() {\n- performFirstPhase(fShardIndex, shardIt);\n+ performFirstPhase(fShardIndex, shardIt, shard);\n }\n });\n } catch (Throwable t) {\n onFirstPhaseResult(shardIndex, shard, shard.currentNodeId(), shardIt, t);\n }\n } else {\n- performFirstPhase(fShardIndex, shardIt);\n+ performFirstPhase(fShardIndex, shardIt, shard);\n }\n }\n }\n@@ -214,10 +216,6 @@ public void run() {\n }\n }\n \n- void performFirstPhase(final int shardIndex, final ShardIterator shardIt) {\n- performFirstPhase(shardIndex, shardIt, shardIt.nextOrNull());\n- }\n-\n void performFirstPhase(final int shardIndex, final ShardIterator shardIt, final ShardRouting shard) {\n if (shard == null) {\n // no more active shards... 
(we should not really get here, but just for safety)\n@@ -260,8 +258,10 @@ void onFirstPhaseResult(int shardIndex, ShardRouting shard, FirstResult result,\n if (logger.isDebugEnabled()) {\n logger.debug(shardIt.shardId() + \": Failed to execute [\" + request + \"] while moving to second phase\", e);\n }\n- listener.onFailure(new ReduceSearchPhaseException(firstPhaseName(), \"\", e, buildShardFailures()));\n+ raiseEarlyFailure(new ReduceSearchPhaseException(firstPhaseName(), \"\", e, buildShardFailures()));\n }\n+ } else if (xTotalOps > expectedTotalOps) {\n+ raiseEarlyFailure(new ElasticsearchIllegalStateException(\"unexpected higher total ops [\" + xTotalOps + \"] compared to expected [\" + expectedTotalOps + \"]\"));\n }\n }\n \n@@ -288,12 +288,12 @@ void onFirstPhaseResult(final int shardIndex, @Nullable ShardRouting shard, @Nul\n logger.debug(\"All shards failed for phase: [{}]\", firstPhaseName(), t);\n }\n // no successful ops, raise an exception\n- listener.onFailure(new SearchPhaseExecutionException(firstPhaseName(), \"all shards failed\", buildShardFailures()));\n+ raiseEarlyFailure(new SearchPhaseExecutionException(firstPhaseName(), \"all shards failed\", buildShardFailures()));\n } else {\n try {\n innerMoveToSecondPhase();\n } catch (Throwable e) {\n- listener.onFailure(new ReduceSearchPhaseException(firstPhaseName(), \"\", e, buildShardFailures()));\n+ raiseEarlyFailure(new ReduceSearchPhaseException(firstPhaseName(), \"\", e, buildShardFailures()));\n }\n }\n } else {\n@@ -379,6 +379,20 @@ protected final void addShardFailure(final int shardIndex, @Nullable SearchShard\n }\n }\n \n+ private void raiseEarlyFailure(Throwable t) {\n+ for (AtomicArray.Entry<FirstResult> entry : firstResults.asList()) {\n+ try {\n+ DiscoveryNode node = nodes.get(entry.value.shardTarget().nodeId());\n+ if (node != null) { // should not happen (==null) but safeguard anyhow\n+ searchService.sendFreeContext(node, entry.value.id(), request);\n+ }\n+ } catch (Throwable t1) {\n+ logger.trace(\"failed to release context\", t1);\n+ }\n+ }\n+ listener.onFailure(t);\n+ }\n+\n /**\n * Releases shard targets that are not used in the docsIdsToLoad.\n */\n@@ -391,9 +405,13 @@ protected void releaseIrrelevantSearchContexts(AtomicArray<? extends QuerySearch\n if (request.scroll() == null) {\n for (AtomicArray.Entry<? extends QuerySearchResultProvider> entry : queryResults.asList()) {\n if (docIdsToLoad.get(entry.index) == null) {\n- DiscoveryNode node = nodes.get(entry.value.queryResult().shardTarget().nodeId());\n- if (node != null) { // should not happen (==null) but safeguard anyhow\n- searchService.sendFreeContext(node, entry.value.queryResult().id(), request);\n+ try {\n+ DiscoveryNode node = nodes.get(entry.value.queryResult().shardTarget().nodeId());\n+ if (node != null) { // should not happen (==null) but safeguard anyhow\n+ searchService.sendFreeContext(node, entry.value.queryResult().id(), request);\n+ }\n+ } catch (Throwable t1) {\n+ logger.trace(\"failed to release context\", t1);\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchTypeAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,110 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action;\n+\n+import com.google.common.collect.Lists;\n+import org.elasticsearch.ExceptionsHelper;\n+import org.elasticsearch.action.search.SearchPhaseExecutionException;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.action.search.ShardSearchFailure;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.query.QueryBuilders;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import org.junit.Test;\n+\n+import java.util.Locale;\n+import java.util.concurrent.CopyOnWriteArrayList;\n+import java.util.concurrent.CountDownLatch;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+\n+/**\n+ */\n+@ClusterScope(scope = ElasticsearchIntegrationTest.Scope.SUITE, numDataNodes = 2)\n+public class RejectionActionTests extends ElasticsearchIntegrationTest {\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return ImmutableSettings.builder()\n+ .put(\"threadpool.search.size\", 1)\n+ .put(\"threadpool.search.queue_size\", 1)\n+ .put(\"threadpool.index.size\", 1)\n+ .put(\"threadpool.index.queue_size\", 1)\n+ .put(\"threadpool.get.size\", 1)\n+ .put(\"threadpool.get.queue_size\", 1)\n+ .build();\n+ }\n+\n+\n+ @Test\n+ public void simulateSearchRejectionLoad() throws Throwable {\n+ for (int i = 0; i < 10; i++) {\n+ client().prepareIndex(\"test\", \"type\", Integer.toString(i)).setSource(\"field\", \"1\").get();\n+ }\n+\n+ int numberOfAsyncOps = randomIntBetween(200, 700);\n+ final CountDownLatch latch = new CountDownLatch(numberOfAsyncOps);\n+ final CopyOnWriteArrayList<Object> responses = Lists.newCopyOnWriteArrayList();\n+ for (int i = 0; i < numberOfAsyncOps; i++) {\n+ client().prepareSearch(\"test\")\n+ .setSearchType(SearchType.QUERY_THEN_FETCH)\n+ .setQuery(QueryBuilders.matchQuery(\"field\", \"1\"))\n+ .execute(new ActionListener<SearchResponse>() {\n+ @Override\n+ public void onResponse(SearchResponse searchResponse) {\n+ responses.add(searchResponse);\n+ latch.countDown();\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ responses.add(e);\n+ latch.countDown();\n+ }\n+ });\n+ }\n+ latch.await();\n+ assertThat(responses.size(), equalTo(numberOfAsyncOps));\n+\n+ // validate all responses\n+ for (Object response : responses) {\n+ if (response instanceof SearchResponse) {\n+ SearchResponse searchResponse = (SearchResponse) response;\n+ for (ShardSearchFailure failure : searchResponse.getShardFailures()) {\n+ assertTrue(\"got unexpected 
reason...\" + failure.reason(), failure.reason().toLowerCase(Locale.ENGLISH).contains(\"rejected\"));\n+ }\n+ } else {\n+ Throwable t = (Throwable) response;\n+ Throwable unwrap = ExceptionsHelper.unwrapCause(t);\n+ if (unwrap instanceof SearchPhaseExecutionException) {\n+ SearchPhaseExecutionException e = (SearchPhaseExecutionException) unwrap;\n+ for (ShardSearchFailure failure : e.shardFailures()) {\n+ assertTrue(\"got unexpected reason...\" + failure.reason(), failure.reason().toLowerCase(Locale.ENGLISH).contains(\"rejected\"));\n+ }\n+ } else {\n+ throw new AssertionError(\"unexpected failure\", (Throwable) response);\n+ }\n+ }\n+ }\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/action/RejectionActionTests.java",
"status": "added"
}
]
} |
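The regression test added in the PR above is also a useful template for client code that has to survive rejection storms: every async search is expected to complete with either a response or a failure, never to leave the caller waiting. Below is a minimal client-side sketch along those lines; it assumes an already-connected `Client`, a hypothetical index named `test` whose documents carry `field: "1"` (mirroring the test data), and the class and method names are made up for illustration.

``` java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

public class RejectionProbe {

    /**
     * Fires many concurrent searches and waits for all of them to complete,
     * either with a response or with a failure such as
     * EsRejectedExecutionException. With the fix above in place, the latch
     * always reaches zero instead of the caller hanging forever.
     */
    static List<Object> probe(Client client, String index, int requests) throws InterruptedException {
        final CountDownLatch latch = new CountDownLatch(requests);
        final List<Object> outcomes = new CopyOnWriteArrayList<>();

        for (int i = 0; i < requests; i++) {
            client.prepareSearch(index)
                    .setQuery(QueryBuilders.matchQuery("field", "1"))
                    .execute(new ActionListener<SearchResponse>() {
                        @Override
                        public void onResponse(SearchResponse response) {
                            outcomes.add(response);
                            latch.countDown();
                        }

                        @Override
                        public void onFailure(Throwable t) {
                            outcomes.add(t); // rejections surface here rather than being lost
                            latch.countDown();
                        }
                    });
        }
        latch.await();
        return outcomes;
    }
}
```

Whether a given request ends up as a `SearchResponse` carrying shard failures or as an exception delivered to `onFailure` depends on how many shards were rejected, which is exactly what the PR's RejectionActionTests asserts on.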
{
"body": "When installing a plugin using `-u` ( `--url`) option, it fails silently when you don't provide `-i` (`--install`) option:\n\n``` sh\n$ bin/plugin -u http://search.maven.org/remotecontent?filepath=fr/pilato/elasticsearch/river/rssriver/1.1.0/rssriver-1.1.0.zip\n$ bin/plugin -l\nInstalled plugins:\n - No plugin detected in elasticsearch/plugins\n```\n\nWith `-i`:\n\n``` sh\n$ bin/plugin -u http://search.maven.org/remotecontent?filepath=fr/pilato/elasticsearch/river/rssriver/1.1.0/rssriver-1.1.0.zip -i rssriver\n-> Installing rssriver...\nTrying http://search.maven.org/remotecontent?filepath=fr/pilato/elasticsearch/river/rssriver/1.1.0/rssriver-1.1.0.zip...\nDownloading ......................................DONE\nInstalled rssriver into /Users/dpilato/Documents/Elasticsearch/apps/elasticsearch/elasticsearch-2.0.0-SNAPSHOT/plugins/rssriver\n$ bin/plugin -l\nInstalled plugins:\n - rssriver\n```\n",
"comments": [],
"number": 5976,
"title": "bin/plugin -u url fails silently"
} | {
"body": "When setting the `--url`, it now sets the implied action--`INSTALL`. Therefore, if the user fails to supply the install flag themselves, then the plugin name will be caught as missing (also added to remove incase a future scenario allows that) and fail immediately.\n\nAdding code to test for unset plugin names to fail fast with descriptive error messages (for all flags that require a value). Also simplified the series of `if` statements checking for the commands by using a `switch` (now that it's using Java 7), added tests, and updated random exceptions with the up-to-date flag names (e.g., \"--verbose\" instead of \"-verbose\").\n\n@dadoonet Note: This will cause or have merge conflicts with #5977. I have no problem with my changes just being incorporated into that PR or doing the merge here.\n\nAlso, while messing with the unit tests, I noticed that the package for the tests is `org.elasticsearch.plugin` while the non-test code is `org.elasticsearch.plugins`. That's probably worth a cleanup by whoever loses the merge; I avoided doing it upfront to simplify any potential merge.\n\nCloses #5976\n",
"number": 6013,
"review_comments": [],
"title": "`bin/plugin` tests for missing plugin name when passing `--url`"
} | {
"commits": [
{
"message": "Adding code to test for unset plugin names to fail fast with descriptive error messages. Also simplified the series of `if` statements checking for the commands by using a `switch` (now that it's using Java 7), added tests, and updated random exceptions with the up-to-date flag names (e.g., \"--verbose\" instead of \"-verbose\").\n\nCloses #5976"
}
],
"files": [
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.base.Strings;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.ElasticsearchTimeoutException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.common.collect.Tuple;\n@@ -104,6 +105,9 @@ public void checkServerTrusted(\n }\n \n public void downloadAndExtract(String name) throws IOException {\n+ if (name == null) {\n+ throw new ElasticsearchIllegalArgumentException(\"plugin name must be supplied with --install [name].\");\n+ }\n HttpDownloadHelper downloadHelper = new HttpDownloadHelper();\n boolean downloaded = false;\n HttpDownloadHelper.DownloadProgress progress;\n@@ -123,7 +127,7 @@ public void downloadAndExtract(String name) throws IOException {\n // extract the plugin\n File extractLocation = pluginHandle.extractedDir(environment);\n if (extractLocation.exists()) {\n- throw new IOException(\"plugin directory \" + extractLocation.getAbsolutePath() + \" already exists. To update the plugin, uninstall it first using -remove \" + name + \" command\");\n+ throw new IOException(\"plugin directory \" + extractLocation.getAbsolutePath() + \" already exists. To update the plugin, uninstall it first using --remove \" + name + \" command\");\n }\n \n // first, try directly from the URL provided\n@@ -158,7 +162,7 @@ public void downloadAndExtract(String name) throws IOException {\n }\n \n if (!downloaded) {\n- throw new IOException(\"failed to download out of all possible locations..., use -verbose to get detailed information\");\n+ throw new IOException(\"failed to download out of all possible locations..., use --verbose to get detailed information\");\n }\n \n ZipFile zipFile = null;\n@@ -231,6 +235,9 @@ public void downloadAndExtract(String name) throws IOException {\n }\n \n public void removePlugin(String name) throws IOException {\n+ if (name == null) {\n+ throw new ElasticsearchIllegalArgumentException(\"plugin name must be supplied with --remove [name].\");\n+ }\n PluginHandle pluginHandle = PluginHandle.parse(name);\n boolean removed = false;\n \n@@ -329,38 +336,68 @@ public static void main(String[] args) {\n try {\n for (int c = 0; c < args.length; c++) {\n String command = args[c];\n- if (\"-u\".equals(command) || \"--url\".equals(command)\n- // Deprecated commands\n- || \"url\".equals(command) || \"-url\".equals(command)) {\n- url = args[++c];\n- } else if (\"-v\".equals(command) || \"--verbose\".equals(command)\n- || \"verbose\".equals(command) || \"-verbose\".equals(command)) {\n- outputMode = OutputMode.VERBOSE;\n- } else if (\"-s\".equals(command) || \"--silent\".equals(command)\n- || \"silent\".equals(command) || \"-silent\".equals(command)) {\n- outputMode = OutputMode.SILENT;\n- } else if (command.equals(\"-i\") || command.equals(\"--install\")\n- // Deprecated commands\n- || command.equals(\"install\") || command.equals(\"-install\")) {\n- pluginName = args[++c];\n- action = ACTION.INSTALL;\n- } else if (command.equals(\"-t\") || command.equals(\"--timeout\")\n- || command.equals(\"timeout\") || command.equals(\"-timeout\")) {\n- timeout = TimeValue.parseTimeValue(args[++c], DEFAULT_TIMEOUT);\n- } else if (command.equals(\"-r\") || command.equals(\"--remove\")\n- // Deprecated commands\n- || command.equals(\"remove\") || command.equals(\"-remove\")) {\n- pluginName = args[++c];\n-\n- action = ACTION.REMOVE;\n- } else if (command.equals(\"-l\") || command.equals(\"--list\")) {\n- action = 
ACTION.LIST;\n- } else if (command.equals(\"-h\") || command.equals(\"--help\")) {\n- displayHelp(null);\n- } else {\n- displayHelp(\"Command [\" + args[c] + \"] unknown.\");\n- // Unknown command. We break...\n- System.exit(EXIT_CODE_CMD_USAGE);\n+ switch (command) {\n+ case \"-u\":\n+ case \"--url\":\n+ // deprecated versions:\n+ case \"url\":\n+ case \"-url\":\n+ url = getCommandValue(args, ++c, \"--url\");\n+ // Until update is supported, then supplying a URL implies installing\n+ // By specifying this action, we also avoid silently failing without\n+ // dubious checks.\n+ action = ACTION.INSTALL;\n+ break;\n+ case \"-v\":\n+ case \"--verbose\":\n+ // deprecated versions:\n+ case \"verbose\":\n+ case \"-verbose\":\n+ outputMode = OutputMode.VERBOSE;\n+ break;\n+ case \"-s\":\n+ case \"--silent\":\n+ // deprecated versions:\n+ case \"silent\":\n+ case \"-silent\":\n+ outputMode = OutputMode.SILENT;\n+ break;\n+ case \"-i\":\n+ case \"--install\":\n+ // deprecated versions:\n+ case \"install\":\n+ case \"-install\":\n+ pluginName = getCommandValue(args, ++c, \"--install\");\n+ action = ACTION.INSTALL;\n+ break;\n+ case \"-r\":\n+ case \"--remove\":\n+ // deprecated versions:\n+ case \"remove\":\n+ case \"-remove\":\n+ pluginName = getCommandValue(args, ++c, \"--remove\");\n+ action = ACTION.REMOVE;\n+ break;\n+ case \"-t\":\n+ case \"--timeout\":\n+ // deprecated versions:\n+ case \"timeout\":\n+ case \"-timeout\":\n+ String timeoutValue = getCommandValue(args, ++c, \"--timeout\");\n+ timeout = TimeValue.parseTimeValue(timeoutValue, DEFAULT_TIMEOUT);\n+ break;\n+ case \"-l\":\n+ case \"--list\":\n+ action = ACTION.LIST;\n+ break;\n+ case \"-h\":\n+ case \"--help\":\n+ displayHelp(null);\n+ break;\n+ default:\n+ displayHelp(\"Command [\" + command + \"] unknown.\");\n+ // Unknown command. We break...\n+ System.exit(EXIT_CODE_CMD_USAGE);\n }\n }\n } catch (Throwable e) {\n@@ -375,7 +412,7 @@ public static void main(String[] args) {\n switch (action) {\n case ACTION.INSTALL:\n try {\n- pluginManager.log(\"-> Installing \" + pluginName + \"...\");\n+ pluginManager.log(\"-> Installing \" + Strings.nullToEmpty(pluginName) + \"...\");\n pluginManager.downloadAndExtract(pluginName);\n exitCode = EXIT_CODE_OK;\n } catch (IOException e) {\n@@ -389,7 +426,7 @@ public static void main(String[] args) {\n break;\n case ACTION.REMOVE:\n try {\n- pluginManager.log(\"-> Removing \" + pluginName + \" \");\n+ pluginManager.log(\"-> Removing \" + Strings.nullToEmpty(pluginName) + \"...\");\n pluginManager.removePlugin(pluginName);\n exitCode = EXIT_CODE_OK;\n } catch (ElasticsearchIllegalArgumentException e) {\n@@ -423,6 +460,36 @@ public static void main(String[] args) {\n }\n }\n \n+ /**\n+ * Get the value for the {@code flag} at the specified {@code arg} of the command line {@code args}.\n+ * <p />\n+ * This is useful to avoid having to check for multiple forms of unset (e.g., \" \" versus \"\" versus {@code null}).\n+ * @param args Incoming command line arguments.\n+ * @param arg Expected argument containing the value.\n+ * @param flag The flag whose value is being retrieved.\n+ * @return Never {@code null}. 
The trimmed value.\n+ * @throws NullPointerException if {@code args} is {@code null}.\n+ * @throws ArrayIndexOutOfBoundsException if {@code arg} is negative.\n+ * @throws ElasticsearchIllegalStateException if {@code arg} is >= {@code args.length}.\n+ * @throws ElasticsearchIllegalArgumentException if the value evaluates to blank ({@code null} or only whitespace)\n+ */\n+ private static String getCommandValue(String[] args, int arg, String flag) {\n+ if (arg >= args.length) {\n+ throw new ElasticsearchIllegalStateException(\"missing value for \" + flag + \". Usage: \" + flag + \" [value]\");\n+ }\n+\n+ // avoid having to interpret multiple forms of unset\n+ String trimmedValue = Strings.emptyToNull(args[arg].trim());\n+\n+ // If we had a value that is blank, then fail immediately\n+ if (trimmedValue == null) {\n+ throw new ElasticsearchIllegalArgumentException(\n+ \"value for \" + flag + \"('\" + args[arg] + \"') must be set. Usage: \" + flag + \" [value]\");\n+ }\n+\n+ return trimmedValue;\n+ }\n+\n private static void displayHelp(String message) {\n System.out.println(\"Usage:\");\n System.out.println(\" -u, --url [plugin location] : Set exact URL to download the plugin from\");",
"filename": "src/main/java/org/elasticsearch/plugins/PluginManager.java",
"status": "modified"
},
{
"diff": "@@ -69,12 +69,16 @@ public void beforeTest() {\n deletePluginsFolder();\n }\n \n+ @Test(expected = ElasticsearchIllegalArgumentException.class)\n+ public void testDownloadAndExtract_NullName_ThrowsException() throws IOException {\n+ pluginManager(getPluginUrlForResource(\"plugin_single_folder.zip\")).downloadAndExtract(null);\n+ }\n+\n @Test\n public void testLocalPluginInstallSingleFolder() throws Exception {\n //When we have only a folder in top-level (no files either) we remove that folder while extracting\n String pluginName = \"plugin-test\";\n- URI uri = URI.create(PluginManagerTests.class.getResource(\"plugin_single_folder.zip\").toString());\n- downloadAndExtract(pluginName, \"file://\" + uri.getPath());\n+ downloadAndExtract(pluginName, getPluginUrlForResource(\"plugin_single_folder.zip\"));\n \n cluster().startNode(SETTINGS);\n \n@@ -87,10 +91,9 @@ public void testLocalPluginInstallSiteFolder() throws Exception {\n //When we have only a folder in top-level (no files either) but it's called _site, we make it work\n //we can either remove the folder while extracting and then re-add it manually or just leave it as it is\n String pluginName = \"plugin-test\";\n- URI uri = URI.create(PluginManagerTests.class.getResource(\"plugin_folder_site.zip\").toString());\n- downloadAndExtract(pluginName, \"file://\" + uri.getPath());\n+ downloadAndExtract(pluginName, getPluginUrlForResource(\"plugin_folder_site.zip\"));\n \n- String nodeName = cluster().startNode(SETTINGS);\n+ cluster().startNode(SETTINGS);\n \n assertPluginLoaded(pluginName);\n assertPluginAvailable(pluginName);\n@@ -100,8 +103,7 @@ public void testLocalPluginInstallSiteFolder() throws Exception {\n public void testLocalPluginWithoutFolders() throws Exception {\n //When we don't have folders at all in the top-level, but only files, we don't modify anything\n String pluginName = \"plugin-test\";\n- URI uri = URI.create(PluginManagerTests.class.getResource(\"plugin_without_folders.zip\").toString());\n- downloadAndExtract(pluginName, \"file://\" + uri.getPath());\n+ downloadAndExtract(pluginName, getPluginUrlForResource(\"plugin_without_folders.zip\"));\n \n cluster().startNode(SETTINGS);\n \n@@ -113,8 +115,7 @@ public void testLocalPluginWithoutFolders() throws Exception {\n public void testLocalPluginFolderAndFile() throws Exception {\n //When we have a single top-level folder but also files in the top-level, we don't modify anything\n String pluginName = \"plugin-test\";\n- URI uri = URI.create(PluginManagerTests.class.getResource(\"plugin_folder_file.zip\").toString());\n- downloadAndExtract(pluginName, \"file://\" + uri.getPath());\n+ downloadAndExtract(pluginName, getPluginUrlForResource(\"plugin_folder_file.zip\"));\n \n cluster().startNode(SETTINGS);\n \n@@ -125,8 +126,7 @@ public void testLocalPluginFolderAndFile() throws Exception {\n @Test(expected = IllegalArgumentException.class)\n public void testSitePluginWithSourceThrows() throws Exception {\n String pluginName = \"plugin-with-source\";\n- URI uri = URI.create(PluginManagerTests.class.getResource(\"plugin_with_sourcefiles.zip\").toString());\n- downloadAndExtract(pluginName, \"file://\" + uri.getPath());\n+ downloadAndExtract(pluginName, getPluginUrlForResource(\"plugin_with_sourcefiles.zip\"));\n }\n \n /**\n@@ -191,14 +191,13 @@ public void testListInstalledEmpty() throws IOException {\n \n @Test(expected = IOException.class)\n public void testInstallPluginNull() throws IOException {\n- pluginManager(null).downloadAndExtract(\"\");\n+ 
pluginManager(null).downloadAndExtract(\"plugin-test\");\n }\n \n \n @Test\n public void testInstallPlugin() throws IOException {\n- PluginManager pluginManager = pluginManager(\"file://\".concat(\n- URI.create(PluginManagerTests.class.getResource(\"plugin_with_classfile.zip\").toString()).getPath()));\n+ PluginManager pluginManager = pluginManager(getPluginUrlForResource(\"plugin_with_classfile.zip\"));\n \n pluginManager.downloadAndExtract(\"plugin\");\n File[] plugins = pluginManager.getListInstalledPlugins();\n@@ -208,8 +207,7 @@ public void testInstallPlugin() throws IOException {\n \n @Test\n public void testInstallSitePlugin() throws IOException {\n- PluginManager pluginManager = pluginManager(\"file://\".concat(\n- URI.create(PluginManagerTests.class.getResource(\"plugin_without_folders.zip\").toString()).getPath()));\n+ PluginManager pluginManager = pluginManager(getPluginUrlForResource(\"plugin_without_folders.zip\"));\n \n pluginManager.downloadAndExtract(\"plugin-site\");\n File[] plugins = pluginManager.getListInstalledPlugins();\n@@ -303,21 +301,35 @@ private void deletePluginsFolder() {\n @Test\n public void testRemovePlugin() throws Exception {\n // We want to remove plugin with plugin short name\n- singlePluginInstallAndRemove(\"plugintest\", \"file://\".concat(\n- URI.create(PluginManagerTests.class.getResource(\"plugin_without_folders.zip\").toString()).getPath()));\n+ singlePluginInstallAndRemove(\"plugintest\", getPluginUrlForResource(\"plugin_without_folders.zip\"));\n \n // We want to remove plugin with groupid/artifactid/version form\n- singlePluginInstallAndRemove(\"groupid/plugintest/1.0.0\", \"file://\".concat(\n- URI.create(PluginManagerTests.class.getResource(\"plugin_without_folders.zip\").toString()).getPath()));\n+ singlePluginInstallAndRemove(\"groupid/plugintest/1.0.0\", getPluginUrlForResource(\"plugin_without_folders.zip\"));\n \n // We want to remove plugin with groupid/artifactid form\n- singlePluginInstallAndRemove(\"groupid/plugintest\", \"file://\".concat(\n- URI.create(PluginManagerTests.class.getResource(\"plugin_without_folders.zip\").toString()).getPath()));\n+ singlePluginInstallAndRemove(\"groupid/plugintest\", getPluginUrlForResource(\"plugin_without_folders.zip\"));\n+ }\n+\n+ @Test(expected = ElasticsearchIllegalArgumentException.class)\n+ public void testRemovePlugin_NullName_ThrowsException() throws IOException {\n+ pluginManager(getPluginUrlForResource(\"plugin_single_folder.zip\")).removePlugin(null);\n }\n \n @Test(expected = ElasticsearchIllegalArgumentException.class)\n public void testRemovePluginWithURLForm() throws Exception {\n PluginManager pluginManager = pluginManager(null);\n pluginManager.removePlugin(\"file://whatever\");\n }\n+\n+ /**\n+ * Retrieve a URL string that represents the resource with the given {@code resourceName}.\n+ * @param resourceName The resource name relative to {@link PluginManagerTests}.\n+ * @return Never {@code null}.\n+ * @throws NullPointerException if {@code resourceName} does not point to a valid resource.\n+ */\n+ private String getPluginUrlForResource(String resourceName) {\n+ URI uri = URI.create(PluginManagerTests.class.getResource(resourceName).toString());\n+\n+ return \"file://\" + uri.getPath();\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/plugin/PluginManagerTests.java",
"status": "modified"
}
]
} |
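As a rough illustration of the behaviour described in the PR above (namely that `--url` implies `INSTALL`, and that any flag needing a value fails fast when it is missing or blank), here is a stripped-down sketch. The class name, enum, and exception choices are simplified stand-ins, not the actual `PluginManager` internals.

``` java
// Simplified stand-in for the parsing behaviour described above; names and
// exception types are illustrative, not the real PluginManager code.
final class PluginArgsSketch {

    enum Action { NONE, INSTALL, REMOVE, LIST }

    static void parse(String... args) {
        String url = null;
        String pluginName = null;
        Action action = Action.NONE;

        for (int c = 0; c < args.length; c++) {
            switch (args[c]) {
                case "-u":
                case "--url":
                    url = value(args, ++c, "--url");
                    action = Action.INSTALL; // supplying a URL implies installing
                    break;
                case "-i":
                case "--install":
                    pluginName = value(args, ++c, "--install");
                    action = Action.INSTALL;
                    break;
                case "-l":
                case "--list":
                    action = Action.LIST;
                    break;
                default:
                    throw new IllegalArgumentException("Command [" + args[c] + "] unknown.");
            }
        }

        if (action == Action.INSTALL && pluginName == null) {
            // previously this fell through silently; now it fails with a clear message
            throw new IllegalArgumentException("plugin name must be supplied with --install [name].");
        }
        System.out.println("action=" + action + ", name=" + pluginName + ", url=" + url);
    }

    // Fail fast when a flag's value is missing or blank.
    static String value(String[] args, int i, String flag) {
        if (i >= args.length || args[i].trim().isEmpty()) {
            throw new IllegalArgumentException("missing value for " + flag + ". Usage: " + flag + " [value]");
        }
        return args[i].trim();
    }
}
```

With this shape, running the tool with only a URL and no `-i name` produces an immediate, descriptive error instead of silently doing nothing.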
{
"body": "In the mapping, the field named domain (String value) is a completion suggester as below:\n\n``` JSON\n\"domain\": {\n \"max_input_length\": 50,\n \"preserve_separators\": true,\n \"payloads\": false,\n \"analyzer\": \"simple\",\n \"preserve_position_increments\": true,\n \"type\": \"completion\"\n }\n```\n\nCurrently, I would like to aggregate based on the value in the field \"domain\". The example of a query in Sense is shown below. \nlocalhost:9200\n\n``` JSON\nPOST /ourindex/_search\n{\n \"query\": {\n \"match_all\": {}\n },\n \"aggs\": {\n \"test\": {\n \"terms\": {\n \"field\": \"domain\"\n }\n }\n }\n}\n```\n\nWhen the query is run, it returns NullPointerException. The detailed response is following.\n\n``` LOG\n{\n\"error\": \"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[iQfn8u55QoqZSnlXEGGUlw][ourindex][4]: SearchParseException[[ourindex][4]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\\r\\n \\\"query\\\": {\\r\\n \\\"match_all\\\": {}\\r\\n },\\r\\n \\\"aggs\\\": {\\r\\n \\\"test\\\": {\\r\\n \\\"terms\\\": {\\r\\n \\\"field\\\": \\\"domain\\\"\\r\\n }\\r\\n }\\r\\n }\\r\\n}\\n]]]; nested: NullPointerException; }{[iQfn8u55QoqZSnlXEGGUlw][ourindex][2]: SearchParseException[[ourindex][2]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\\r\\n \\\"query\\\": {\\r\\n \\\"match_all\\\": {}\\r\\n },\\r\\n \\\"aggs\\\": {\\r\\n \\\"test\\\": {\\r\\n \\\"terms\\\": {\\r\\n \\\"field\\\": \\\"domain\\\"\\r\\n }\\r\\n }\\r\\n }\\r\\n}\\n]]]; nested: NullPointerException; }{[iQfn8u55QoqZSnlXEGGUlw][ourindex][3]: SearchParseException[[ourindex][3]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\\r\\n \\\"query\\\": {\\r\\n \\\"match_all\\\": {}\\r\\n },\\r\\n \\\"aggs\\\": {\\r\\n \\\"test\\\": {\\r\\n \\\"terms\\\": {\\r\\n \\\"field\\\": \\\"domain\\\"\\r\\n }\\r\\n }\\r\\n }\\r\\n}\\n]]]; nested: NullPointerException; }{[iQfn8u55QoqZSnlXEGGUlw][ourindex][0]: SearchParseException[[ourindex][0]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\\r\\n \\\"query\\\": {\\r\\n \\\"match_all\\\": {}\\r\\n },\\r\\n \\\"aggs\\\": {\\r\\n \\\"test\\\": {\\r\\n \\\"terms\\\": {\\r\\n \\\"field\\\": \\\"domain\\\"\\r\\n }\\r\\n }\\r\\n }\\r\\n}\\n]]]; nested: NullPointerException; }{[iQfn8u55QoqZSnlXEGGUlw][ourindex][1]: SearchParseException[[ourindex][1]: query[ConstantScore(*:*)],from[-1],size[-1]: Parse Failure [Failed to parse source [{\\r\\n \\\"query\\\": {\\r\\n \\\"match_all\\\": {}\\r\\n },\\r\\n \\\"aggs\\\": {\\r\\n \\\"test\\\": {\\r\\n \\\"terms\\\": {\\r\\n \\\"field\\\": \\\"domain\\\"\\r\\n }\\r\\n }\\r\\n }\\r\\n}\\n]]]; nested: NullPointerException; }]\",\n \"status\": 400\n}\n```\n\nDoes the type of the suggester matter? How can it work to aggregate by the values of domain field?\n",
"comments": [
{
"body": "well I don't think it shoudl throw a NPE but on the other hand why whould you want to aggregate on a suggest field. you should use multi field for this IMO. @jpountz can you take a look at this NPE and provide a better error message?\n",
"created_at": "2014-04-24T14:35:27Z"
},
{
"body": "If I understand correctly, currently the completion suggester creates a field (called \"domain\") in the above example which contains the content of \"input\" and is searchable but one cannot aggregate, because of the special format of this type ( @jpountz please correct if I am wrong). A better error message would be useful.\n\n> you should use multi field for this IMO.\n\nIt is unclear to me how `multi_field` can be used with the suggester here. The [documentation](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-suggesters-completion.html) is not too explicit: \n\n> Even though you are losing most of the features of the completion suggest, you can opt in for the shortest form, which even allows you to use inside of multi fields. But keep in mind, that you will not be able to use several inputs, an output, payloads or weights.\n\nI would have expected `multi_field` to work like this:\n\n```\n{\n \"domain\": {\n \"type\": \"multi_field\",\n \"fields\": {\n \"domain\": {\n \"max_input_length\": 50,\n \"preserve_separators\": true,\n \"payloads\": false,\n \"analyzer\": \"simple\",\n \"preserve_position_increments\": true,\n \"type\": \"completion\"\n },\n \"input\": {\n \"type\": \"string\"\n }\n }\n }\n}\n```\n\nand then expected that aggregation would work on \"domain.input\". Is that supposed to work so so? If so, I think it is broken.\n\nIn any case, it might be nice to have an option to make all parameter fields searchable and and also allow aggregations on them. Currently, the only way to do this seems to be to add the parameters to a field with a different name than the suggestion field. Instead, suggester could create these fields automatically if the user wants to. We could have something like:\n\n```\n{\n \"domain\": {\n \"max_input_length\": 50,\n \"preserve_separators\": true,\n \"payloads\": false,\n \"analyzer\": \"simple\",\n \"preserve_position_increments\": true,\n \"type\": \"completion\",\n \"input\": {\n # Here be whatever the user likes to configure for indexing\n },\n \"output\": {\n ...\n },\n ...\n }\n}\n```\n\nwhich would cause creation of fields `\"domain.input\"`, `\"domain.output\"` and so on with user defined indexing options.\n\nDoes that make sense?\n",
"created_at": "2014-04-29T11:22:18Z"
},
{
"body": "I tried multi_fiield and it works as what I wanted. \nThe mapping I use is similar to what @brwe mentioned.\n\n``` JSON\n{\n \"domain\": {\n \"type\": \"multi_field\",\n \"fields\": {\n \"domain\": {\n \"max_input_length\": 50,\n \"preserve_separators\": true,\n \"payloads\": false,\n \"analyzer\": \"simple\",\n \"preserve_position_increments\": true,\n \"type\": \"completion\"\n },\n \"input\": {\n \"type\": \"string\"\n }\n }\n }\n}\n```\n\nThank you for the hint @s1monw .\n",
"created_at": "2014-04-29T11:52:32Z"
},
{
"body": "@brwe I just tried a multi-field setup, and this seems to work fine:\n\n``` json\nDELETE /test\n\nPUT /test\n{\n \"mappings\": {\n \"test\": {\n \"properties\": {\n \"my_field\": {\n \"type\": \"completion\",\n \"analyzer\": \"simple\",\n \"fields\": {\n \"raw\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n }\n }\n}\n\nPUT /test/test/1\n{\n \"my_field\": \"foo bar\"\n}\n\nPUT /test/test/2\n{\n \"my_field\": \"foo bar baz\"\n}\n\nPOST /test/_refresh\n\nGET /test/_suggest\n{\n \"my-suggest\" : {\n \"text\" : \"foo b\",\n \"completion\" : {\n \"field\" : \"my_field\"\n }\n }\n}\n\nGET /test/_search\n{\n \"aggs\": {\n \"my_field_values\": {\n \"terms\": {\n \"field\": \"my_field.raw\"\n }\n }\n }\n}\n```\n\nAdding capabilities to store `domain.input` and `domain.output` as you describe feels to me like reinventing multi-fields. I think we should rather fix the error message when trying to aggregate on a completion field and recommend on either indexing the field twice (eg. if you need to specify input and output separately) or using a multi-field as above for the simple cases.\n",
"created_at": "2014-04-29T11:53:27Z"
},
{
"body": "Ah! Now I get it. I was trying out how this works when you explicitly provide \"input\" and \"output\" when indexing. I guess this is meant by \"opt in for the shortest form\".\nThanks for clarifying that this is not what was meant!\n\n> I think we should rather fix the error message\n\nOk, I'l just add the more descriptive error message.\n",
"created_at": "2014-04-29T12:29:25Z"
},
{
"body": "@brwe can you port this to `1.1.2`?\n",
"created_at": "2014-05-18T10:06:42Z"
},
{
"body": "done\n",
"created_at": "2014-05-19T06:41:27Z"
}
],
"number": 5930,
"title": "Cannot aggregate on completion suggester field in elasticsearch"
} | {
"body": "closes #5930\n",
"number": 5979,
"review_comments": [],
"title": "Provide meaningful error message if field has no fielddata type"
} | {
"commits": [
{
"message": "Provide meaningful error message if field has no fielddata type\n\ncloses #5930"
}
],
"files": [
{
"diff": "@@ -207,6 +207,9 @@ public void onMappingUpdate() {\n public <IFD extends IndexFieldData<?>> IFD getForField(FieldMapper<?> mapper) {\n final FieldMapper.Names fieldNames = mapper.names();\n final FieldDataType type = mapper.fieldDataType();\n+ if (type == null) {\n+ throw new ElasticsearchIllegalArgumentException(\"found no fielddata type for field [\" + fieldNames.fullName() + \"]\");\n+ }\n final boolean docValues = mapper.hasDocValues();\n IndexFieldData<?> fieldData = loadedFieldData.get(fieldNames.indexName());\n if (fieldData == null) {",
"filename": "src/main/java/org/elasticsearch/index/fielddata/IndexFieldDataService.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.percolate.PercolateResponse;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n+import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.suggest.SuggestResponse;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.common.settings.ImmutableSettings;\n@@ -40,6 +41,7 @@\n import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.core.CompletionFieldMapper;\n import org.elasticsearch.percolator.PercolatorService;\n+import org.elasticsearch.search.aggregations.AggregationBuilders;\n import org.elasticsearch.search.sort.FieldSortBuilder;\n import org.elasticsearch.search.suggest.completion.CompletionStats;\n import org.elasticsearch.search.suggest.completion.CompletionSuggestion;\n@@ -1062,6 +1064,34 @@ public void testReservedChars() throws IOException {\n ).setRefresh(true).get();\n }\n \n+ @Test // see #5930\n+ public void testIssue5930() throws IOException {\n+ client().admin().indices().prepareCreate(INDEX).get();\n+ ElasticsearchAssertions.assertAcked(client().admin().indices().preparePutMapping(INDEX).setType(TYPE).setSource(jsonBuilder().startObject()\n+ .startObject(TYPE).startObject(\"properties\")\n+ .startObject(FIELD)\n+ .field(\"type\", \"completion\")\n+ .endObject()\n+ .endObject().endObject()\n+ .endObject()));\n+ ensureYellow();\n+ String string = \"foo bar\";\n+ client().prepareIndex(INDEX, TYPE, \"1\").setSource(jsonBuilder()\n+ .startObject()\n+ .field(FIELD, string)\n+ .endObject()\n+ ).setRefresh(true).get();\n+\n+ try {\n+ client().prepareSearch(INDEX).addAggregation(AggregationBuilders.terms(\"suggest_agg\").field(FIELD)).execute().actionGet();\n+ // Exception must be thrown\n+ assertFalse(true);\n+ } catch (SearchPhaseExecutionException e) {\n+ assertTrue(e.getDetailedMessage().contains(\"found no fielddata type for field [\" + FIELD + \"]\"));\n+\n+ }\n+ }\n+\n private static String replaceReservedChars(String input, char replacement) {\n char[] charArray = input.toCharArray();\n for (int i = 0; i < charArray.length; i++) {",
"filename": "src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchTests.java",
"status": "modified"
}
]
} |
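The multi-field workaround discussed in the thread above carries over directly to the Java API: the terms aggregation targets the `not_analyzed` string sub-field rather than the completion field itself, which now fails with the clearer "found no fielddata type" error. A small sketch, assuming the mapping from the earlier example (a completion field `my_field` with a `my_field.raw` string sub-field) and a placeholder index named `test`:

``` java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;

public class CompletionAggWorkaround {

    // Aggregate on the plain string sub-field; aggregating on the completion
    // field itself has no fielddata type and is rejected with a clear error.
    static void printFieldValues(Client client) {
        SearchResponse response = client.prepareSearch("test")
                .setSize(0)
                .addAggregation(AggregationBuilders.terms("my_field_values").field("my_field.raw"))
                .get();

        Terms terms = response.getAggregations().get("my_field_values");
        for (Terms.Bucket bucket : terms.getBuckets()) {
            System.out.println(bucket.getKey() + " -> " + bucket.getDocCount());
        }
    }
}
```

This mirrors the REST example earlier in the thread; the exact bucket accessor names may differ slightly across 1.x releases, so treat it as a sketch rather than version-exact code.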
{
"body": "Hi\n\nI'm running elasticsearch 1.1.1 on centos 6 AWS instances with java 7. We have a snapshot that the api call \"_snapshot/<repo_name>/_all\" is listing as IN_PROGRESS, however when attempting to delete that snapshot, it doesn't delete. In fact the delete call just hangs and never returns. When I check the current status of all the snapshots using \"_snapshot/<repo_name>/_status\" that to just hangs and does not return. \n\nI then looked at the cluster state and that tells me that the snapshot is in an \"ABORTED\" state. I can't actually create a new snapshot right now. Any ideas how I can resolve this and is it a bug?\n\nThanks\n",
"comments": [
{
"body": "Could you post here the portion of the cluster state with the information about the snapshot?\n",
"created_at": "2014-04-28T13:23:33Z"
},
{
"body": "Hi imotov\n\nIt's quite a big bit of json and I had to obscure the index and shard information as it was too long for the comment section in github. If you need it happy to talk offline via email etc. We quite a large number of indices.\n\n```\n{ \n \"snapshots\": {\n \"snapshots\": [\n {\n \"include_global_state\": true, \n \"indices\": [\n ], \n \"repository\": \"s3_repository\", \n \"shards\": [\n ], \n \"snapshot\": \"custom_snapshot_24_04_2014_17:27\", \n \"state\": \"ABORTED\"\n }\n ]\n }\n}\n\n```\n",
"created_at": "2014-04-28T14:23:24Z"
},
{
"body": "Was the solution to this problem updating to a new version? We were on 1.1.1, upgraded to 1.3.1, and are still having this problem\n",
"created_at": "2014-07-28T16:51:21Z"
},
{
"body": "@TheDude05 yes, it should be fixed in 1.1.2 and above. Could you provide more details about your problem? Was the snapshot in the `ABORTED` state and it's still in `ABORTED` state after rolling restart or you got an snapshot stack in an `ABORTED` state while running 1.3.1. Could you describe what happened before the snapshot went into this state?\n",
"created_at": "2014-07-28T17:15:05Z"
},
{
"body": "I started a snapshot on my cluster (at the time all nodes were version 1.1.1) and found that it was taking a very long time because every node in my cluster was garbage collecting frequently. I performed a rolling restart of all of the nodes in my cluster, found that the snapshot I started was still in a running state so I aborted it via the HTTP DELETE /_snapshot/<repository name>/<snapshot_name> end point. That snapshot was then showing up as aborted in the cluster status so I attempted to start a new one. I however couldn't start a new snapshot and would instead get an error message indicating a snapshot was already running. Whenever I tried to delete that first snapshot again, the HTTP request would hang indefinitely. The issue did not go away after performing another rolling restart. There is also nothing interesting in the logs.\n\nAfter reading the release notes and this Github issue it seemed that 1.2+ fixed some snapshot issues so I updated my cluster (one by one) to version 1.3.1. I again tried deleting the snapshot in question but am still having the same issue as before.\n\n```\n$ curl -s 'http://localhost:9200/_cluster/state' | jq -M '.metadata.snapshots.snapshots[] | {state: .state, snapshot: .snapshot}'\n{\n \"snapshot\": \"snapshot-1406558337\",\n \"state\": \"ABORTED\"\n}\n```\n\nThe repository type is AWS/S3 and I do see that there are \"snapshot\" and \"metadata\" files in our S3 bucket for this particular snapshot. \n\nLet me know if there is more debugging output that would be useful.\n\n_EDIT:_ Reworded some sentences for clarity\n",
"created_at": "2014-07-28T20:27:47Z"
},
{
"body": "Unfortunately, the fix in 1.2 doesn't clean already stuck snapshots, it only prevents new snapshots from being stuck in this state. So, you need to perform a full cluster restart to clean the stuck snapshot or use the [cleanup utility](https://github.com/imotov/elasticsearch-snapshot-cleanup) to remove stuck snapshots. \n",
"created_at": "2014-07-29T12:17:05Z"
},
{
"body": "@imotov Thank you for your help. The cleanup utility worked and removed that old snapshot that was causing problems\n",
"created_at": "2014-07-29T20:00:31Z"
},
{
"body": "This issue is closed but I'm using elasticsearch 1.3.2 and we are having problems with this. We have to use the cleanup utility to remove stuck snapshots. Am I missing something?\n",
"created_at": "2014-08-27T09:15:35Z"
},
{
"body": "more precisely. snapshots restoring hangs from time to time. No need to try to delete or to shut down the node. Igor's script has to bee run periodically. Version is 1.3.2\n",
"created_at": "2014-08-29T17:24:22Z"
},
{
"body": "@JoeZ99 now I am completely confused. Which script are we talking about? If you mean https://github.com/imotov/elasticsearch-snapshot-cleanup it shouldn't do anything for snapshot restore process. The only way to stop restore is by deleting the files being restored. Could you describe in more details which repository you are using and what's going on?\n",
"created_at": "2014-08-29T17:42:30Z"
},
{
"body": "we're using a s3 repository.\n\nwe're on a 1.3.2 ES version\n\nwe make about 400 or so snapshot restores a day. each snapshot restore\nrestores two indices, all 400 restoring process are for different indices.\nthe restoration process of a single snapshot usually takes less than a\nminute.\n\nSome times the snapshot restoring process takes forever, like it was hung\nor something\n\nwe've found that after applying that cleanup script you talk about, the\nsnapshot restore process is agile again, so we apply it regulartly.\n\nFor what i've understood from the ticket, it looks like the bug consist one\nsome snapshot restore process being \"hung\" and not able to being aborted\nafterwards, using the standard DELETE entrypoint (at least until the 1.2\nversion). Your script is meant to wipe out any \"hung\" restoring process,\nbecause the fixing that is since version 1.2 doesn't take care of already\nhung restoring process, it just makes sure the restoring process doesn't\nget \"hung\" again.\n\nAt that light, looks like from time to time one of our multiple restore\nprocess gets \"hung\" and so no further restored process can be applied ,\nsince only one restoration at a time is allowed on the cluster, and that\nwould be when we see our restoration process gets \"hung\". After applying\nyour script, this supposedly \"hung\" restoration process is wiped out and\nthe cluster is back in business.\n\nCould it be something like that?\n\nOn Fri, Aug 29, 2014 at 1:42 PM, Igor Motov notifications@github.com\nwrote:\n\n> @JoeZ99 https://github.com/JoeZ99 now I am completely confused. Which\n> script are we talking about? If you mean\n> https://github.com/imotov/elasticsearch-snapshot-cleanup it shouldn't do\n> anything for snapshot restore process. The only way to stop restore is by\n> deleting the files being restored. Could you describe in more details which\n> repository you are using and what's going on?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/5958#issuecomment-53907690\n> .\n\n## \n\nuh, oh http://www.youtube.com/watch?v=GMD_T7ICL0o.\n\nhttp://www.defectivebydesign.org/no-drm-in-html5\n",
"created_at": "2014-08-30T17:50:07Z"
},
{
"body": "@JoeZ99 can you email me cluster state from the cluster in such stuck state?\n",
"created_at": "2014-09-02T17:47:25Z"
},
{
"body": "Hey there,\n\nwe are using ES 1.3.2 and there are several Snapshots in State IN_PROGRESS. Deleting them manually doesn't work and using this script https://github.com/imotov/elasticsearch-snapshot-cleanup told me:\n\n[2014-09-29 11:27:17,376][INFO ][org.elasticsearch.org.motovs.elasticsearch.snapshots.AbortedSnapshotCleaner] No snapshots found\n\nA rolling restart of all nodes didn't help removing these stale snapshots. Any advice on removing them? \n\nThanks in advance!\n\n#######\n\nptlxtme02:/tmp/elasticsearch-snapshot-cleanup-1.0-SNAPSHOT/bin # curl -XGET \"http://ptlxtme02:9200/_snapshot/es_backup_fast/_all?pretty=true\"\n{\n \"snapshots\" : [ {\n \"snapshot\" : \"2014-08-11_19:30:03\",\n \"indices\" : [ \".ps_config\", \".ps_status\", \"_river\", \"ps_article_index_01\" ],\n \"state\" : \"IN_PROGRESS\",\n \"start_time\" : \"2014-08-11T17:30:03.764Z\",\n \"start_time_in_millis\" : 1407778203764,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 0,\n \"failed\" : 0,\n \"successful\" : 0\n }\n }, {\n \"snapshot\" : \"2014-08-12_20:00:03\",\n \"indices\" : [ \".ps_config\", \".ps_status\", \"_river\", \"ps_article_index_01\" ],\n \"state\" : \"IN_PROGRESS\",\n \"start_time\" : \"2014-08-12T18:00:03.255Z\",\n \"start_time_in_millis\" : 1407866403255,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 0,\n \"failed\" : 0,\n \"successful\" : 0\n }\n }, {\n \"snapshot\" : \"2014-08-12_21:00:03\",\n \"indices\" : [ \".ps_config\", \".ps_status\", \"_river\", \"ps_article_index_01\" ],\n \"state\" : \"IN_PROGRESS\",\n \"start_time\" : \"2014-08-12T19:00:03.723Z\",\n \"start_time_in_millis\" : 1407870003723,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 0,\n \"failed\" : 0,\n \"successful\" : 0\n }\n }, {\n \"snapshot\" : \"2014-08-13_03:00:03\",\n \"indices\" : [ \".ps_config\", \".ps_status\", \"_river\", \"ps_article_index_01\" ],\n \"state\" : \"IN_PROGRESS\",\n \"start_time\" : \"2014-08-13T01:00:03.350Z\",\n \"start_time_in_millis\" : 1407891603350,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 0,\n \"failed\" : 0,\n \"successful\" : 0\n }\n }, {\n \"snapshot\" : \"2014-08-13_08:30:03\",\n \"indices\" : [ \".ps_config\", \".ps_status\", \"_river\", \"ps_article_index_01\" ],\n \"state\" : \"IN_PROGRESS\",\n \"start_time\" : \"2014-08-13T06:30:03.183Z\",\n \"start_time_in_millis\" : 1407911403183,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 0,\n \"failed\" : 0,\n \"successful\" : 0\n }\n }, {\n \"snapshot\" : \"2014-08-13_13:30:02\",\n \"indices\" : [ \".ps_config\", \".ps_status\", \"_river\", \"ps_article_index_01\" ],\n \"state\" : \"IN_PROGRESS\",\n \"start_time\" : \"2014-08-13T11:30:03.009Z\",\n \"start_time_in_millis\" : 1407929403009,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 0,\n \"failed\" : 0,\n \"successful\" : 0\n }\n }, {\n \"snapshot\" : \"2014-08-14_19:30:03\",\n \"indices\" : [ \".ps_config\", \".ps_status\", \"_river\", \"ps_article_index_01\" ],\n \"state\" : \"IN_PROGRESS\",\n \"start_time\" : \"2014-08-14T17:30:03.620Z\",\n \"start_time_in_millis\" : 1408037403620,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 0,\n \"failed\" : 0,\n \"successful\" : 0\n }\n }, {\n \"snapshot\" : \"2014-09-26_16:09:43\",\n \"indices\" : [ \".ps_config\", \".ps_status\", \"_river\", \"ps_article_index_01\" ],\n \"state\" : \"SUCCESS\",\n \"start_time\" : \"2014-09-26T14:09:43.829Z\",\n \"start_time_in_millis\" : 1411740583829,\n \"end_time\" : \"2014-09-26T15:26:43.933Z\",\n \"end_time_in_millis\" : 1411745203933,\n 
\"duration_in_millis\" : 4620104,\n \"failures\" : [ ],\n \"shards\" : {\n \"total\" : 6,\n \"failed\" : 0,\n \"successful\" : 6\n }\n } ]\n}\n",
"created_at": "2014-09-29T09:28:30Z"
},
{
"body": "@l0bster what do you get when you try to delete these snapshots?\n",
"created_at": "2014-09-29T10:36:10Z"
},
{
"body": "@imotov i get:\nptlxtme02:/tmp # curl -XDELETE \"http://ptlxtme02:9200/_snapshot/es_backup_fast/2014-08-14_19:30:03\"\n{\"error\":\"SnapshotMissingException[[es_backup_fast:2014-08-14_19:30:03] is missing]; nested: FileNotFoundException[/mnt/es_backup/fast_snapshot/metadata-2014-08-14_19:30:03 (No such file or directory)]; \",\"status\":404\n\nhere is an ls output of the snapshot directory: \nptlxtme02:/tmp # ll /mnt/es_backup/fast_snapshot/\ninsgesamt 368\n-rw-r--r-- 1 elasticsearch elasticsearch 118 29. Sep 11:10 index\ndrwxr-xr-x 8 elasticsearch elasticsearch 4096 7. Aug 17:00 indices\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 04:00 metadata-2014-08-11_04:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 04:30 metadata-2014-08-11_04:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 05:30 metadata-2014-08-11_05:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 07:00 metadata-2014-08-11_07:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 07:30 metadata-2014-08-11_07:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 08:30 metadata-2014-08-11_08:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 10:30 metadata-2014-08-11_10:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 11:30 metadata-2014-08-11_11:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 12:00 metadata-2014-08-11_12:00:04\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 13:00 metadata-2014-08-11_13:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 15:00 metadata-2014-08-11_15:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 16:00 metadata-2014-08-11_16:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 16:30 metadata-2014-08-11_16:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 17:00 metadata-2014-08-11_17:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 19:00 metadata-2014-08-11_19:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 20:00 metadata-2014-08-11_20:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 20:30 metadata-2014-08-11_20:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 21:00 metadata-2014-08-11_21:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 21:30 metadata-2014-08-11_21:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 22:00 metadata-2014-08-11_22:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 11. Aug 23:30 metadata-2014-08-11_23:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 01:30 metadata-2014-08-12_01:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 02:00 metadata-2014-08-12_02:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 03:00 metadata-2014-08-12_03:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 03:30 metadata-2014-08-12_03:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 04:00 metadata-2014-08-12_04:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 04:30 metadata-2014-08-12_04:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 06:00 metadata-2014-08-12_06:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 10:00 metadata-2014-08-12_10:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 10:30 metadata-2014-08-12_10:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 11:00 metadata-2014-08-12_11:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 11:30 metadata-2014-08-12_11:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. 
Aug 12:00 metadata-2014-08-12_12:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 12:30 metadata-2014-08-12_12:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 13:00 metadata-2014-08-12_13:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 13:30 metadata-2014-08-12_13:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 16:00 metadata-2014-08-12_16:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 17:00 metadata-2014-08-12_17:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 17:30 metadata-2014-08-12_17:30:04\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 21:30 metadata-2014-08-12_21:30:04\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 22:00 metadata-2014-08-12_22:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 23:00 metadata-2014-08-12_23:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 12. Aug 23:30 metadata-2014-08-12_23:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 00:00 metadata-2014-08-13_00:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 01:30 metadata-2014-08-13_01:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 02:30 metadata-2014-08-13_02:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 04:30 metadata-2014-08-13_04:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 05:00 metadata-2014-08-13_05:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 06:00 metadata-2014-08-13_06:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 07:00 metadata-2014-08-13_07:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 08:00 metadata-2014-08-13_08:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 09:00 metadata-2014-08-13_09:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 14:00 metadata-2014-08-13_14:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 14:30 metadata-2014-08-13_14:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 16:00 metadata-2014-08-13_16:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 18:00 metadata-2014-08-13_18:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 19:00 metadata-2014-08-13_19:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 20:30 metadata-2014-08-13_20:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 21:00 metadata-2014-08-13_21:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 21:30 metadata-2014-08-13_21:30:04\n-rw-r--r-- 1 elasticsearch elasticsearch 253 13. Aug 23:30 metadata-2014-08-13_23:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 00:00 metadata-2014-08-14_00:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 02:00 metadata-2014-08-14_02:00:04\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 03:00 metadata-2014-08-14_03:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 04:00 metadata-2014-08-14_04:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 05:30 metadata-2014-08-14_05:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 06:00 metadata-2014-08-14_06:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 06:30 metadata-2014-08-14_06:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 07:30 metadata-2014-08-14_07:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 08:00 metadata-2014-08-14_08:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 09:00 metadata-2014-08-14_09:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. 
Aug 09:30 metadata-2014-08-14_09:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 12:30 metadata-2014-08-14_12:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 14:30 metadata-2014-08-14_14:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 16:00 metadata-2014-08-14_16:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 18:00 metadata-2014-08-14_18:00:02\n-rw-r--r-- 1 elasticsearch elasticsearch 253 14. Aug 18:30 metadata-2014-08-14_18:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 15. Aug 05:30 metadata-2014-08-15_05:30:04\n-rw-r--r-- 1 elasticsearch elasticsearch 253 15. Aug 07:30 metadata-2014-08-15_07:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 15. Aug 08:00 metadata-2014-08-15_08:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 253 15. Aug 10:00 metadata-2014-08-15_10:00:35\n-rw-r--r-- 1 elasticsearch elasticsearch 312 26. Sep 16:09 metadata-2014-09-26_16:09:43\n-rw-r--r-- 1 elasticsearch elasticsearch 232 11. Aug 19:30 snapshot-2014-08-11_19:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 232 12. Aug 20:00 snapshot-2014-08-12_20:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 232 12. Aug 21:00 snapshot-2014-08-12_21:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 231 13. Aug 03:00 snapshot-2014-08-13_03:00:03\n-rw-r--r-- 1 elasticsearch elasticsearch 232 13. Aug 08:30 snapshot-2014-08-13_08:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 232 13. Aug 13:30 snapshot-2014-08-13_13:30:02\n-rw-r--r-- 1 elasticsearch elasticsearch 231 14. Aug 19:30 snapshot-2014-08-14_19:30:03\n-rw-r--r-- 1 elasticsearch elasticsearch 237 26. Sep 17:26 snapshot-2014-09-26_16:09:43\n",
"created_at": "2014-09-29T11:03:07Z"
},
{
"body": "We also see the \"No snapshots found\" message from the cleanup script, but the fact is that after that the stuck seems to disappear.\n",
"created_at": "2014-09-29T15:52:48Z"
},
{
"body": "@peillis i gave it a try ... we have 7 stuck snapshots in state \"IN_PROGRESS\". I ran the utility, it told me, no snapshots found and after that, there were still 7 stuck snapshots :(\n",
"created_at": "2014-09-30T07:48:23Z"
},
{
"body": "@l0bster the utility is created to clean up snapshots that are currently running. In you case these snapshots are no longer running. They got stuck in IN_PROGRESS state because cluster was shutdown or you lost connection to the mounted shared file system while they were running. So elasticsearch didn't have chance to update them and they are still stored in this intermittent state. Theoretically, it should be possible to delete them using snapshot delete command. But since it doesn't work for you, you might be hitting a bug similar to #6383. I will try to reproduce the issue but it would really help me if you could check log file on the current master node to see if there are any errors logged while you are running this snapshot delete command. If you see an error, please post it here with complete stacktrace. \n",
"created_at": "2014-09-30T08:13:45Z"
},
{
"body": "@imotov Hey, when i try to delete one snapshot via: curl -XDELETE \"http://ptlxtme02:9200/_snapshot/es_backup_fast/2014-08-11_19:30:03\" there is no error logged. I can increase the log level to debug but that requires a node restart. I will edit my post asap to post eventually logged errors.\n",
"created_at": "2014-09-30T11:14:09Z"
},
{
"body": "@l0bster thanks, but it will unlikely result in more logging. The log message that I was looking for should have been logged on the WARN level. \n",
"created_at": "2014-09-30T12:10:50Z"
},
{
"body": "@imotov Hey, i managed to enable debuging... sry for the delay. But the Only thing es is logging is:\n\n[2014-09-30 16:29:27,443][DEBUG][cluster.service ] [ptlxtme02] processing [delete snapshot]: execute\n[2014-09-30 16:29:27,444][DEBUG][cluster.service ] [ptlxtme02] processing [delete snapshot]: no change in cluster_state\n\n:(\n",
"created_at": "2014-09-30T14:30:30Z"
},
{
"body": "@imotov did you have time to reproduce the issue alrdy?\n",
"created_at": "2014-10-02T07:49:46Z"
},
{
"body": "@l0bster yes, I was able to reproduce it. It's a different bug. So I created a new issue for it - #7980. Thank you for report and providing helpful information. As a workaround you can simply delete the file `snapshot-2014-08-14_19:30:03` from the repository.\n",
"created_at": "2014-10-03T17:13:10Z"
},
{
"body": "I am seeing this error in v1.3.6. Cluster state shows the snapshot is in ABORTED state on all shards.\nNew snapshots cannot be started (ConcurrentSnapshotExecutionException), and the current snapshot cannot be deleted - DELETE just hangs.\n\n@imotov 's cleanup tool did not help.\n\nUpdate: rolling restart corrected the issue.\n\ncluster state is \n \"snapshots\": {\n \"snapshots\": [\n {\n \"repository\": \"my_backup\",\n \"snapshot\": \"snapshot_1\",\n \"include_global_state\": true,\n \"state\": \"ABORTED\",\n \"indices\": [\n \"grafana-dash\"\n ],\n \"shards\": [\n {\n \"index\": \"grafana-dash\",\n \"shard\": 0,\n \"state\": \"ABORTED\",\n \"node\": \"_mTOiyD_TN2vV2C2A8sNbw\"\n },\n {\n \"index\": \"grafana-dash\",\n \"shard\": 1,\n \"state\": \"ABORTED\",\n \"node\": \"TYy7OHbXR2q_U-xTG4Xtqg\"\n },\n {\n \"index\": \"grafana-dash\",\n \"shard\": 2,\n \"state\": \"ABORTED\",\n \"node\": \"TYy7OHbXR2q_U-xTG4Xtqg\"\n },\n {\n \"index\": \"grafana-dash\",\n \"shard\": 3,\n \"state\": \"ABORTED\",\n \"node\": \"TYy7OHbXR2q_U-xTG4Xtqg\"\n },\n {\n \"index\": \"grafana-dash\",\n \"shard\": 4,\n \"state\": \"ABORTED\",\n \"node\": \"TYy7OHbXR2q_U-xTG4Xtqg\"\n }\n ]\n }\n ]\n }\n },\n",
"created_at": "2014-12-12T20:03:20Z"
},
{
"body": "hey @imotov , I have the same problem that my ES node was restarted during a snpashot process, I am trying to run this script https://github.com/imotov/elasticsearch-snapshot-cleanup \nMy ES version 1.4.0, I did this \n\n> For all other versions update pom.xml file and appropriate elasticsearch and lucene version, run mvn clean package and untar the file found in the target/releases directory. copied my cluster config to config/elasticsearch.yml\n\nwhen I run the bin/cleanup package I get this error \n\n```\nSetting ES_HOME as /root/elasticsearch-snapshot-cleanup/target/releases/elasticsearch-snapshot-cleanup-1.4.4.1\nError: Could not find or load main class org.motovs.elasticsearch.snapshots.AbortedSnapshotCleaner\n```\n\nAny idea?\n",
"created_at": "2015-04-06T09:40:19Z"
},
{
"body": "Occurred on ES v1.4.2. [elasticsearch-snapshot-cleanup](https://github.com/imotov/elasticsearch-snapshot-cleanup) did the trick for me with no fuss. \n\nThanks a bunch @imotov \n",
"created_at": "2016-10-25T14:09:09Z"
}
],
"number": 5958,
"title": "Snapshot aborted but but still in progress"
} | {
"body": "If a node is shutdown while a snapshot that runs on this node is aborted, it might cause the snapshot process to hang.\n\nCloses #5958\n",
"number": 5966,
"review_comments": [
{
"body": "double log?\n",
"created_at": "2014-05-08T01:14:06Z"
},
{
"body": "can't this be wrapped by a transport exception if this ends up being a remote operation? mayne use the new `assertThrows` on `RestStatus.NOT_FOUND`?\n",
"created_at": "2014-05-08T01:15:50Z"
}
],
"title": "Fix for hanging aborted snapshot during node shutdown"
} | {
"commits": [
{
"message": "Fix for hanging aborted snapshot during node shutdown\n\nIf a node is shutdown while a snapshot that runs on this node is aborted, it might cause the snapshot process to hang.\n\nCloses #5958"
}
],
"files": [
{
"diff": "@@ -537,7 +537,7 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n for (final SnapshotMetaData.Entry snapshot : snapshots.entries()) {\n SnapshotMetaData.Entry updatedSnapshot = snapshot;\n boolean snapshotChanged = false;\n- if (snapshot.state() == State.STARTED) {\n+ if (snapshot.state() == State.STARTED || snapshot.state() == State.ABORTED) {\n ImmutableMap.Builder<ShardId, ShardSnapshotStatus> shards = ImmutableMap.builder();\n for (ImmutableMap.Entry<ShardId, ShardSnapshotStatus> shardEntry : snapshot.shards().entrySet()) {\n ShardSnapshotStatus shardStatus = shardEntry.getValue();",
"filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -20,8 +20,10 @@\n package org.elasticsearch.snapshots;\n \n import com.carrotsearch.randomizedtesting.LifecycleScope;\n+import org.elasticsearch.action.ListenableActionFuture;\n import org.elasticsearch.action.admin.cluster.repositories.put.PutRepositoryResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;\n+import org.elasticsearch.action.admin.cluster.snapshots.delete.DeleteSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.common.Priority;\n@@ -86,9 +88,6 @@ public void restorePersistentSettingsTest() throws Exception {\n @Test\n public void snapshotDuringNodeShutdownTest() throws Exception {\n logger.info(\"--> start 2 nodes\");\n- ArrayList<String> nodes = newArrayList();\n- nodes.add(cluster().startNode());\n- nodes.add(cluster().startNode());\n Client client = client();\n \n assertAcked(prepareCreate(\"test-idx\", 2, settingsBuilder().put(\"number_of_shards\", 2).put(\"number_of_replicas\", 0).put(MockDirectoryHelper.RANDOM_NO_DELETE_OPEN_FILE, false)));\n@@ -132,6 +131,61 @@ public void snapshotDuringNodeShutdownTest() throws Exception {\n logger.info(\"--> done\");\n }\n \n+ @Test\n+ public void snapshotWithStuckNodeTest() throws Exception {\n+ logger.info(\"--> start 2 nodes\");\n+ ArrayList<String> nodes = newArrayList();\n+ nodes.add(cluster().startNode());\n+ nodes.add(cluster().startNode());\n+ Client client = client();\n+\n+ assertAcked(prepareCreate(\"test-idx\", 2, settingsBuilder().put(\"number_of_shards\", 2).put(\"number_of_replicas\", 0).put(MockDirectoryHelper.RANDOM_NO_DELETE_OPEN_FILE, false)));\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ for (int i = 0; i < 100; i++) {\n+ index(\"test-idx\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ }\n+ refresh();\n+ assertThat(client.prepareCount(\"test-idx\").get().getCount(), equalTo(100L));\n+\n+ logger.info(\"--> creating repository\");\n+ PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(MockRepositoryModule.class.getCanonicalName()).setSettings(\n+ ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.TEST))\n+ .put(\"random\", randomAsciiOfLength(10))\n+ .put(\"wait_after_unblock\", 200)\n+ ).get();\n+ assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true));\n+\n+ // Pick one node and block it\n+ String blockedNode = blockNodeWithIndex(\"test-idx\");\n+ // Remove it from the list of available nodes\n+ nodes.remove(blockedNode);\n+\n+ logger.info(\"--> snapshot\");\n+ client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(false).setIndices(\"test-idx\").get();\n+\n+ logger.info(\"--> waiting for block to kick in\");\n+ waitForBlock(blockedNode, \"test-repo\", TimeValue.timeValueSeconds(60));\n+\n+ logger.info(\"--> execution was blocked on node [{}], aborting snapshot\", blockedNode);\n+\n+ ListenableActionFuture<DeleteSnapshotResponse> deleteSnapshotResponseFuture = cluster().client(nodes.get(0)).admin().cluster().prepareDeleteSnapshot(\"test-repo\", \"test-snap\").execute();\n+ // Make sure that abort makes some progress\n+ Thread.sleep(100);\n+ unblockNode(blockedNode);\n+ logger.info(\"--> stopping node\", blockedNode);\n+ stopNode(blockedNode);\n+ DeleteSnapshotResponse deleteSnapshotResponse = deleteSnapshotResponseFuture.get();\n+ 
assertThat(deleteSnapshotResponse.isAcknowledged(), equalTo(true));\n+\n+ logger.info(\"--> making sure that snapshot no longer exists\");\n+ assertThrows(client().admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").execute(), SnapshotMissingException.class);\n+ logger.info(\"--> done\");\n+ }\n+\n @Test\n @TestLogging(\"snapshots:TRACE\")\n public void restoreIndexWithMissingShards() throws Exception {",
"filename": "src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
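A minimal sketch of the workaround discussed in the thread above, assuming a shared filesystem repository mounted at `/mnt/es_backup/fast_snapshot` and the host and snapshot names quoted in the report; adjust all paths and names to the actual cluster before trying this.

```sh
# The delete that was hanging/failing in the thread above (host, repository and snapshot name are illustrative):
curl -XDELETE 'http://ptlxtme02:9200/_snapshot/es_backup_fast/2014-08-14_19:30:03'

# Workaround suggested in the thread: remove the stale top-level snapshot file from the
# shared filesystem repository; the per-index data under indices/ is left untouched.
rm /mnt/es_backup/fast_snapshot/snapshot-2014-08-14_19:30:03
```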
{
"body": "_cat is cool!\n\nWe currently run ES 1.0.1.\n\nWhile attempting to lower the overhead of lots of indices (still havent found a solution) I tried making indices readonly. Not sure yet if it has any positive impact, but I found this.\n\nFirst lets make indices readonly:\n\n```\ncurl -s -XPUT 'localhost:9200/logstash-pro-apache-2014.03.*/_settings?index.blocks.read_only=true'\n```\n\nThen execute:\n\n```\ncurl -s 'localhost:9200/_cat/indices/logstash-pro-apache-2014.03*?v'\n```\n\nAnd the pretty result:\n\n```\n{\n \"error\" : \"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\",\n \"status\" : 403\n}\n```\n",
"comments": [
{
"body": "The _stats api call also fails:\n\n```\nhttp://127.0.0.1:9200/_stats?clear=true&docs=true&store=true&indexing=true&get=true&search=true\n```\n\nResult:\n\n```\nHTTP Error 403: Forbidden\n```\n\nMaybe my expectation is wrong?\n",
"created_at": "2014-04-17T14:48:01Z"
},
{
"body": "@rtoma stats not working on read-only indices is a bug, but I don't think making indices read-only will save any resources.\n",
"created_at": "2014-04-18T18:10:51Z"
},
{
"body": "We discussed it in PR #5876 and decided that more work is needed. Moving it to v1.3.\n",
"created_at": "2014-05-07T15:03:55Z"
},
{
"body": "@imotov bumping this to 1.4\n",
"created_at": "2014-07-11T08:47:53Z"
},
{
"body": "Quoting @imotov from https://github.com/elasticsearch/elasticsearch/pull/5876#issuecomment-42332444\n\n> We will need to revisit this by adding more granular METADATA_READ and METADATA_WRITE blocks and implement this change across other methods such as segments, status, recovery...\n",
"created_at": "2014-11-13T12:37:15Z"
},
{
"body": "Also related to #8102\n",
"created_at": "2014-11-13T12:38:20Z"
}
],
"number": 5855,
"title": "ClusterBlockException on /_cat/indices/ on readonly indices"
} | {
"body": "Closes #5855\n",
"number": 5876,
"review_comments": [],
"title": "Don't block stats for read-only indices"
} | {
"commits": [
{
"message": "Don't block stats for read-only indices\n\nCloses #5855"
}
],
"files": [
{
"diff": "@@ -86,12 +86,12 @@ protected GroupShardsIterator shards(ClusterState clusterState, IndicesStatsRequ\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, IndicesStatsRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.READ);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, IndicesStatsRequest request, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.READ, concreteIndices);\n }\n \n ",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/stats/TransportIndicesStatsAction.java",
"status": "modified"
}
]
} |
{
"body": "```\nDELETE http://127.0.0.1:9200/_search/scroll/asdasdadasdasd HTTP/1.1\n```\n\nReturns the following response\n\n```\nHTTP/1.1 500 Internal Server Error\nContent-Type: application/json; charset=UTF-8\nContent-Length: 73\n\n{\n \"error\" : \"ArrayIndexOutOfBoundsException[1]\",\n \"status\" : 500\n}\n```\n\nWhile a 404 is expected. \n\nIt would also be nice if we can allow the scroll id to be posted. I've had people hit problems with scroll ids that are too big in the past:\n\nhttps://github.com/elasticsearch/elasticsearch-net/issues/318\n",
"comments": [
{
"body": "@Mpdreamz Actually that specific scroll id is malformed and that is where the ArrayIndexOutOfBoundsException comes from, so I think a 400 should be returned?\n\nIf a non existent scroll_id is used, it will just return and act like everything is fine. I agree a 404 would be nice.\n",
"created_at": "2014-04-08T14:31:56Z"
},
{
"body": "++404\n",
"created_at": "2014-04-08T16:25:19Z"
},
{
"body": "++404 and +1 on implementing #5726 @martijnvg ! \n",
"created_at": "2014-04-08T17:31:49Z"
},
{
"body": "PR #5738 only addresses invalid scroll ids. Returning a 404 for a valid, but non existing scroll id requires more work than just validation. The clear scoll api uses an internal free search context api, which for example the search api relies on. This internal api just always returns an empty response. I can change that, so that it includes whether it actually has removed a search context, but that requires a change in the transport layer, so I like to do separate that in a different PR.\n",
"created_at": "2014-04-09T11:29:03Z"
},
{
"body": "LGTM\n",
"created_at": "2014-04-14T20:47:59Z"
},
{
"body": "@martijnvg can you assign the fix version here please\n",
"created_at": "2014-04-14T20:48:38Z"
},
{
"body": "@s1monw PR #5738 only handles invalid scroll ids, but this issue is also about returning a 404 when a valid scroll id doesn't exist. I will assign the proper versions in PR and leave this issue open, once the missing scroll id has been addressed this issue can be closed.\n",
"created_at": "2014-04-16T02:34:38Z"
}
],
"number": 5730,
"title": "clear scroll throws 500 on array out of bounds exception"
} | {
"body": "PR for #5730\n",
"number": 5865,
"review_comments": [
{
"body": "I don't like the name too much maybe we can just count the number of freed contexts and return that responding with `404` if it's `== 0`? then we can call it `numFreed`?\n",
"created_at": "2014-04-23T15:58:33Z"
},
{
"body": "sure make sense, I'll update the PR\n",
"created_at": "2014-04-23T18:13:34Z"
},
{
"body": "more likely `getNumFreed()` as it is not a boolean?\n",
"created_at": "2014-05-08T12:43:41Z"
},
{
"body": "If `ClearScrollResponse` implemented `StatusToXContent`, you could add that check to the `status()` method and allow it to be testable in your tests.\n\nYou could also possibly use this in the rest action and save some code (like in `RestSearchScrollAction`)\n\n```\nclient.clearScroll(clearRequest, new RestStatusToXContentListener<ClearScrollResponse>(channel));\n```\n",
"created_at": "2014-05-08T12:54:29Z"
},
{
"body": "I'll change that :) (used to be a boolean property: `freed`)\n",
"created_at": "2014-05-08T12:55:33Z"
},
{
"body": "+1 Make sense!\n",
"created_at": "2014-05-08T13:07:04Z"
}
],
"title": "Return missing (404) if a scroll_id is cleared that no longer exists."
} | {
"commits": [
{
"message": "Return missing (404) is a scroll_id is cleared that no longer exists.\n\nCloses #5730"
},
{
"message": "Replaced noneFreed with numFreed."
},
{
"message": "Renamed isNumFreed into getNumFreed\nLet ClearScrollResponse implement StatusToXContent"
}
],
"files": [
{
"diff": "@@ -32,3 +32,8 @@\n catch: missing\n scroll:\n scroll_id: $scroll_id1\n+ \n+ - do:\n+ catch: missing\n+ clear_scroll:\n+ scroll_id: $scroll_id1",
"filename": "rest-api-spec/test/scroll/11_clear.yaml",
"status": "modified"
},
{
"diff": "@@ -19,38 +19,80 @@\n \n package org.elasticsearch.action.search;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionResponse;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.xcontent.StatusToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.rest.RestStatus;\n \n import java.io.IOException;\n \n+import static org.elasticsearch.rest.RestStatus.NOT_FOUND;\n+import static org.elasticsearch.rest.RestStatus.OK;\n+\n /**\n */\n-public class ClearScrollResponse extends ActionResponse {\n+public class ClearScrollResponse extends ActionResponse implements StatusToXContent {\n \n private boolean succeeded;\n+ private int numFreed;\n \n- public ClearScrollResponse(boolean succeeded) {\n+ public ClearScrollResponse(boolean succeeded, int numFreed) {\n this.succeeded = succeeded;\n+ this.numFreed = numFreed;\n }\n \n ClearScrollResponse() {\n }\n \n+ /**\n+ * @return Whether the attempt to clear a scroll was successful.\n+ */\n public boolean isSucceeded() {\n return succeeded;\n }\n \n+ /**\n+ * @return The number of seach contexts that were freed. If this is <code>0</code> the assumption can be made,\n+ * that the scroll id specified in the request did not exist. (never existed, was expired, or completely consumed)\n+ */\n+ public int getNumFreed() {\n+ return numFreed;\n+ }\n+\n+ @Override\n+ public RestStatus status() {\n+ return numFreed == 0 ? NOT_FOUND : OK;\n+ }\n+\n+ @Override\n+ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject();\n+ builder.endObject();\n+ return builder;\n+ }\n+\n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n succeeded = in.readBoolean();\n+ if (in.getVersion().onOrAfter(Version.V_1_2_0)) {\n+ numFreed = in.readVInt();\n+ } else {\n+ // On older nodes we can't tell how many search contexts where freed, so we assume at least one,\n+ // so that the rest api doesn't return 404 where SC were indeed freed.\n+ numFreed = 1;\n+ }\n }\n \n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n out.writeBoolean(succeeded);\n+ if (out.getVersion().onOrAfter(Version.V_1_2_0)) {\n+ out.writeVInt(numFreed);\n+ }\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java",
"status": "modified"
},
{
"diff": "@@ -37,6 +37,7 @@\n \n import java.util.ArrayList;\n import java.util.List;\n+import java.util.concurrent.atomic.AtomicInteger;\n import java.util.concurrent.atomic.AtomicReference;\n \n import static org.elasticsearch.action.search.type.TransportSearchHelper.parseScrollId;\n@@ -67,8 +68,9 @@ private class Async {\n final CountDown expectedOps;\n final ClearScrollRequest request;\n final List<Tuple<String, Long>[]> contexts = new ArrayList<>();\n- final AtomicReference<Throwable> expHolder;\n final ActionListener<ClearScrollResponse> listener;\n+ final AtomicReference<Throwable> expHolder;\n+ final AtomicInteger numberOfFreedSearchContexts = new AtomicInteger(0);\n \n private Async(ClearScrollRequest request, ActionListener<ClearScrollResponse> listener, ClusterState clusterState) {\n int expectedOps = 0;\n@@ -91,16 +93,16 @@ private Async(ClearScrollRequest request, ActionListener<ClearScrollResponse> li\n \n public void run() {\n if (expectedOps.isCountedDown()) {\n- listener.onResponse(new ClearScrollResponse(true));\n+ listener.onResponse(new ClearScrollResponse(true, 0));\n return;\n }\n \n if (contexts.isEmpty()) {\n for (final DiscoveryNode node : nodes) {\n searchServiceTransportAction.sendClearAllScrollContexts(node, request, new ActionListener<Boolean>() {\n @Override\n- public void onResponse(Boolean success) {\n- onFreedContext();\n+ public void onResponse(Boolean freed) {\n+ onFreedContext(freed);\n }\n \n @Override\n@@ -114,14 +116,14 @@ public void onFailure(Throwable e) {\n for (Tuple<String, Long> target : context) {\n final DiscoveryNode node = nodes.get(target.v1());\n if (node == null) {\n- onFreedContext();\n+ onFreedContext(false);\n continue;\n }\n \n searchServiceTransportAction.sendFreeContext(node, target.v2(), request, new ActionListener<Boolean>() {\n @Override\n- public void onResponse(Boolean success) {\n- onFreedContext();\n+ public void onResponse(Boolean freed) {\n+ onFreedContext(freed);\n }\n \n @Override\n@@ -134,17 +136,20 @@ public void onFailure(Throwable e) {\n }\n }\n \n- void onFreedContext() {\n+ void onFreedContext(boolean freed) {\n+ if (freed) {\n+ numberOfFreedSearchContexts.incrementAndGet();\n+ }\n if (expectedOps.countDown()) {\n boolean succeeded = expHolder.get() == null;\n- listener.onResponse(new ClearScrollResponse(succeeded));\n+ listener.onResponse(new ClearScrollResponse(succeeded, numberOfFreedSearchContexts.get()));\n }\n }\n \n void onFailedFreedContext(Throwable e, DiscoveryNode node) {\n logger.warn(\"Clear SC failed on node[{}]\", e, node);\n if (expectedOps.countDown()) {\n- listener.onResponse(new ClearScrollResponse(false));\n+ listener.onResponse(new ClearScrollResponse(false, numberOfFreedSearchContexts.get()));\n } else {\n expHolder.set(e);\n }",
"filename": "src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java",
"status": "modified"
},
{
"diff": "@@ -25,15 +25,16 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.rest.*;\n+import org.elasticsearch.rest.BaseRestHandler;\n+import org.elasticsearch.rest.RestChannel;\n+import org.elasticsearch.rest.RestController;\n+import org.elasticsearch.rest.RestRequest;\n import org.elasticsearch.rest.action.support.RestActions;\n-import org.elasticsearch.rest.action.support.RestBuilderListener;\n+import org.elasticsearch.rest.action.support.RestStatusToXContentListener;\n \n import java.util.Arrays;\n \n import static org.elasticsearch.rest.RestRequest.Method.DELETE;\n-import static org.elasticsearch.rest.RestStatus.OK;\n \n /**\n */\n@@ -56,14 +57,7 @@ public void handleRequest(final RestRequest request, final RestChannel channel)\n \n ClearScrollRequest clearRequest = new ClearScrollRequest();\n clearRequest.setScrollIds(Arrays.asList(splitScrollIds(scrollIds)));\n- client.clearScroll(clearRequest, new RestBuilderListener<ClearScrollResponse>(channel) {\n- @Override\n- public RestResponse buildResponse(ClearScrollResponse response, XContentBuilder builder) throws Exception {\n- builder.startObject();\n- builder.endObject();\n- return new BytesRestResponse(OK, builder);\n- }\n- });\n+ client.clearScroll(clearRequest, new RestStatusToXContentListener<ClearScrollResponse>(channel));\n }\n \n public static String[] splitScrollIds(String scrollIds) {",
"filename": "src/main/java/org/elasticsearch/rest/action/search/RestClearScrollAction.java",
"status": "modified"
},
{
"diff": "@@ -538,13 +538,14 @@ final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.S\n return context;\n }\n \n- public void freeContext(long id) {\n+ public boolean freeContext(long id) {\n SearchContext context = activeContexts.remove(id);\n if (context == null) {\n- return;\n+ return false;\n }\n context.indexShard().searchService().onFreeContext(context);\n context.close();\n+ return true;\n }\n \n private void freeContext(SearchContext context) {",
"filename": "src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.action;\n \n+import org.elasticsearch.Version;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.search.ClearScrollRequest;\n import org.elasticsearch.action.search.SearchRequest;\n@@ -106,18 +107,18 @@ public void sendFreeContext(DiscoveryNode node, final long contextId, SearchRequ\n \n public void sendFreeContext(DiscoveryNode node, long contextId, ClearScrollRequest request, final ActionListener<Boolean> actionListener) {\n if (clusterService.state().nodes().localNodeId().equals(node.id())) {\n- searchService.freeContext(contextId);\n- actionListener.onResponse(true);\n+ boolean freed = searchService.freeContext(contextId);\n+ actionListener.onResponse(freed);\n } else {\n- transportService.sendRequest(node, SearchFreeContextTransportHandler.ACTION, new SearchFreeContextRequest(request, contextId), new TransportResponseHandler<TransportResponse>() {\n+ transportService.sendRequest(node, SearchFreeContextTransportHandler.ACTION, new SearchFreeContextRequest(request, contextId), new TransportResponseHandler<SearchFreeContextResponse>() {\n @Override\n- public TransportResponse newInstance() {\n- return TransportResponse.Empty.INSTANCE;\n+ public SearchFreeContextResponse newInstance() {\n+ return new SearchFreeContextResponse();\n }\n \n @Override\n- public void handleResponse(TransportResponse response) {\n- actionListener.onResponse(true);\n+ public void handleResponse(SearchFreeContextResponse response) {\n+ actionListener.onResponse(response.isFreed());\n }\n \n @Override\n@@ -560,6 +561,40 @@ public void writeTo(StreamOutput out) throws IOException {\n }\n }\n \n+ class SearchFreeContextResponse extends TransportResponse {\n+\n+ private boolean freed;\n+\n+ SearchFreeContextResponse() {\n+ }\n+\n+ SearchFreeContextResponse(boolean freed) {\n+ this.freed = freed;\n+ }\n+\n+ public boolean isFreed() {\n+ return freed;\n+ }\n+\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ super.readFrom(in);\n+ if (in.getVersion().onOrAfter(Version.V_1_2_0)) {\n+ freed = in.readBoolean();\n+ } else {\n+ freed = true;\n+ }\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ super.writeTo(out);\n+ if (out.getVersion().onOrAfter(Version.V_1_2_0)) {\n+ out.writeBoolean(freed);\n+ }\n+ }\n+ }\n+\n class SearchFreeContextTransportHandler extends BaseTransportRequestHandler<SearchFreeContextRequest> {\n \n static final String ACTION = \"search/freeContext\";\n@@ -571,8 +606,8 @@ public SearchFreeContextRequest newInstance() {\n \n @Override\n public void messageReceived(SearchFreeContextRequest request, TransportChannel channel) throws Exception {\n- searchService.freeContext(request.id());\n- channel.sendResponse(TransportResponse.Empty.INSTANCE);\n+ boolean freed = searchService.freeContext(request.id());\n+ channel.sendResponse(new SearchFreeContextResponse(freed));\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/search/action/SearchServiceTransportAction.java",
"status": "modified"
},
{
"diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.scroll;\n \n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.search.ClearScrollResponse;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n@@ -31,7 +30,6 @@\n import org.elasticsearch.common.util.concurrent.UncategorizedExecutionException;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.rest.RestStatus;\n-import org.elasticsearch.search.SearchContextMissingException;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n@@ -289,7 +287,9 @@ public void testSimpleScrollQueryThenFetch_clearScrollIds() throws Exception {\n .addScrollId(searchResponse1.getScrollId())\n .addScrollId(searchResponse2.getScrollId())\n .execute().actionGet();\n- assertThat(clearResponse.isSucceeded(), equalTo(true));\n+ assertThat(clearResponse.isSucceeded(), is(true));\n+ assertThat(clearResponse.getNumFreed(), greaterThan(0));\n+ assertThat(clearResponse.status(), equalTo(RestStatus.OK));\n \n assertThrows(client().prepareSearchScroll(searchResponse1.getScrollId()).setScroll(TimeValue.timeValueMinutes(2)), RestStatus.NOT_FOUND);\n assertThrows(client().prepareSearchScroll(searchResponse2.getScrollId()).setScroll(TimeValue.timeValueMinutes(2)), RestStatus.NOT_FOUND);\n@@ -304,6 +304,8 @@ public void testClearNonExistentScrollId() throws Exception {\n // Whether we actually clear a scroll, we can't know, since that information isn't serialized in the\n // free search context response, which is returned from each node we want to clear a particular scroll.\n assertThat(response.isSucceeded(), is(true));\n+ assertThat(response.getNumFreed(), equalTo(0));\n+ assertThat(response.status(), equalTo(RestStatus.NOT_FOUND));\n }\n \n @Test\n@@ -395,7 +397,9 @@ public void testSimpleScrollQueryThenFetch_clearAllScrollIds() throws Exception\n \n ClearScrollResponse clearResponse = client().prepareClearScroll().addScrollId(\"_all\")\n .execute().actionGet();\n- assertThat(clearResponse.isSucceeded(), equalTo(true));\n+ assertThat(clearResponse.isSucceeded(), is(true));\n+ assertThat(clearResponse.getNumFreed(), greaterThan(0));\n+ assertThat(clearResponse.status(), equalTo(RestStatus.OK));\n \n assertThrows(cluster().transportClient().prepareSearchScroll(searchResponse1.getScrollId()).setScroll(TimeValue.timeValueMinutes(2)), RestStatus.NOT_FOUND);\n assertThrows(cluster().transportClient().prepareSearchScroll(searchResponse2.getScrollId()).setScroll(TimeValue.timeValueMinutes(2)), RestStatus.NOT_FOUND);",
"filename": "src/test/java/org/elasticsearch/search/scroll/SearchScrollTests.java",
"status": "modified"
}
]
} |
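A quick illustration of the REST behaviour this change introduces, based on the `status()` logic in the diff above; `<scroll_id>` is a placeholder for a real scroll id returned by a search request with `?scroll=1m`, and the host is illustrative.

```sh
# Clearing an existing scroll id frees its search contexts and returns HTTP 200.
curl -i -XDELETE 'http://localhost:9200/_search/scroll/<scroll_id>'

# Clearing the same id again frees nothing (numFreed == 0), so the response status is now HTTP 404.
curl -i -XDELETE 'http://localhost:9200/_search/scroll/<scroll_id>'
```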
{
"body": "If you use routing on a field that has doc values, you might hit a `ClassCastException`.\n\nThe reason is that `RoutingFieldMapper` does unchecked casts to the `oal.document.Field` class although for doc values we use classes that only implement `oal.document.IndexableField`.\n",
"comments": [
{
"body": "I think the issue is a bit more general, this field mapper makes too many assumptions about the data, eg. it assumes:\n- the the index field name is the same as the document field name,\n- the that string value of an index field is the same as its external value.\n",
"created_at": "2014-04-17T14:21:23Z"
}
],
"number": 5844,
"title": "Path-based routing doesn't work with doc values"
} | {
"body": "RootMapper.validate was only used by the routing field mapper, which makes\nbuggy assumptions about how fields are indexed. For example, it assumes that\nthe index representation of a field is the same as its external representation.\n\nClose #5844\n",
"number": 5858,
"review_comments": [],
"title": "Remove RootMapper.validate and validate the routing key up-front."
} | {
"commits": [
{
"message": "Remove RootMapper.validate and validate the routing key up-front.\n\nRootMapper.validate was only used by the routing field mapper, which makes\nbuggy assumptions about how fields are indexed. For example, it assumes that\nthe index representation of a field is the same as its external representation.\n\nClose #5844"
}
],
"files": [
{
"diff": "@@ -36,6 +36,7 @@\n import org.elasticsearch.common.lucene.uid.Versions;\n import org.elasticsearch.common.xcontent.*;\n import org.elasticsearch.index.VersionType;\n+import org.elasticsearch.index.mapper.MapperParsingException;\n import org.elasticsearch.index.mapper.internal.TimestampFieldMapper;\n \n import java.io.IOException;\n@@ -567,12 +568,17 @@ public void process(MetaData metaData, String aliasOrIndex, @Nullable MappingMet\n id = parseContext.id();\n }\n if (parseContext.shouldParseRouting()) {\n+ if (routing != null && !routing.equals(parseContext.routing())) {\n+ throw new MapperParsingException(\"The provided routing value [\" + routing + \"] doesn't match the routing key stored in the document: [\" + parseContext.routing() + \"]\");\n+ }\n routing = parseContext.routing();\n }\n if (parseContext.shouldParseTimestamp()) {\n timestamp = parseContext.timestamp();\n timestamp = MappingMetaData.Timestamp.parseStringTimestamp(timestamp, mappingMd.timestamp().dateTimeFormatter());\n }\n+ } catch (MapperParsingException e) {\n+ throw e;\n } catch (Exception e) {\n throw new ElasticsearchParseException(\"failed to parse doc to extract routing/timestamp/id\", e);\n } finally {",
"filename": "src/main/java/org/elasticsearch/action/index/IndexRequest.java",
"status": "modified"
},
{
"diff": "@@ -418,9 +418,11 @@ public Timestamp timestamp() {\n }\n \n public ParseContext createParseContext(@Nullable String id, @Nullable String routing, @Nullable String timestamp) {\n+ // We parse the routing even if there is already a routing key in the request in order to make sure that\n+ // they are the same\n return new ParseContext(\n id == null && id().hasPath(),\n- routing == null && routing().hasPath(),\n+ routing().hasPath(),\n timestamp == null && timestamp().hasPath()\n );\n }",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MappingMetaData.java",
"status": "modified"
},
{
"diff": "@@ -522,10 +522,6 @@ public ParsedDocument parse(SourceToParse source, @Nullable ParseListener listen\n for (RootMapper rootMapper : rootMappersOrdered) {\n rootMapper.postParse(context);\n }\n-\n- for (RootMapper rootMapper : rootMappersOrdered) {\n- rootMapper.validate(context);\n- }\n } catch (Throwable e) {\n // if its already a mapper parsing exception, no need to wrap it...\n if (e instanceof MapperParsingException) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/DocumentMapper.java",
"status": "modified"
},
{
"diff": "@@ -31,8 +31,6 @@ public interface RootMapper extends Mapper {\n \n void postParse(ParseContext context) throws IOException;\n \n- void validate(ParseContext context) throws MapperParsingException;\n-\n /**\n * Should the mapper be included in the root {@link org.elasticsearch.index.mapper.object.ObjectMapper}.\n */",
"filename": "src/main/java/org/elasticsearch/index/mapper/RootMapper.java",
"status": "modified"
},
{
"diff": "@@ -199,10 +199,6 @@ public void parse(ParseContext context) throws IOException {\n // we parse in post parse\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return true;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/AllFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -124,10 +124,6 @@ public void postParse(ParseContext context) throws IOException {\n context.analyzer(analyzer);\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return false;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/AnalyzerMapper.java",
"status": "modified"
},
{
"diff": "@@ -236,10 +236,6 @@ public void preParse(ParseContext context) throws IOException {\n public void postParse(ParseContext context) throws IOException {\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return true;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/BoostFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -292,10 +292,6 @@ public void parse(ParseContext context) throws IOException {\n super.parse(context);\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return true;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/IdFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -172,10 +172,6 @@ public void parse(ParseContext context) throws IOException {\n \n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return false;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/IndexFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -170,10 +170,6 @@ public void postParse(ParseContext context) throws IOException {\n parse(context);\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return true;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -35,7 +35,6 @@\n import org.elasticsearch.index.fielddata.FieldDataType;\n import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.core.AbstractFieldMapper;\n-import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n \n import java.io.IOException;\n import java.util.List;\n@@ -172,31 +171,6 @@ public String value(Object value) {\n return value.toString();\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- String routing = context.sourceToParse().routing();\n- if (path != null && routing != null) {\n- // we have a path, check if we can validate we have the same routing value as the one in the doc...\n- String value = null;\n- Field field = (Field) context.doc().getField(path);\n- if (field != null) {\n- value = field.stringValue();\n- if (value == null) {\n- // maybe its a numeric field...\n- if (field instanceof NumberFieldMapper.CustomNumericField) {\n- value = ((NumberFieldMapper.CustomNumericField) field).numericAsString();\n- }\n- }\n- }\n- if (value == null) {\n- value = context.ignoredValue(path);\n- }\n- if (!routing.equals(value)) {\n- throw new MapperParsingException(\"External routing [\" + routing + \"] and document path routing [\" + value + \"] mismatch\");\n- }\n- }\n- }\n-\n @Override\n public void preParse(ParseContext context) throws IOException {\n super.parse(context);",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/RoutingFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -121,10 +121,6 @@ public boolean enabled() {\n return this.enabledState.enabled;\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public void preParse(ParseContext context) throws IOException {\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/SizeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -248,10 +248,6 @@ public void parse(ParseContext context) throws IOException {\n // nothing to do here, we will call it in pre parse\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return false;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/SourceFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -164,10 +164,6 @@ public Object valueForSearch(long expirationTime) {\n return expirationTime - System.currentTimeMillis();\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public void preParse(ParseContext context) throws IOException {\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/TTLFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -179,10 +179,6 @@ public Object valueForSearch(Object value) {\n return value(value);\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public void preParse(ParseContext context) throws IOException {\n super.parse(context);",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -168,10 +168,6 @@ public void parse(ParseContext context) throws IOException {\n // we parse in pre parse\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return false;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/TypeFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -165,10 +165,6 @@ public void parse(ParseContext context) throws IOException {\n // nothing to do here, we either do it in post parse, or in pre parse.\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return false;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/UidFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -144,10 +144,6 @@ public void postParse(ParseContext context) throws IOException {\n }\n }\n \n- @Override\n- public void validate(ParseContext context) throws MapperParsingException {\n- }\n-\n @Override\n public boolean includeInObject() {\n return false;",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/VersionFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -43,8 +43,8 @@ public void testParseIdAlone() throws Exception {\n md.parse(XContentFactory.xContent(bytes).createParser(bytes), parseContext);\n assertThat(parseContext.id(), equalTo(\"id\"));\n assertThat(parseContext.idResolved(), equalTo(true));\n- assertThat(parseContext.routing(), nullValue());\n- assertThat(parseContext.routingResolved(), equalTo(false));\n+ assertThat(parseContext.routing(), equalTo(\"routing_value\"));\n+ assertThat(parseContext.routingResolved(), equalTo(true));\n assertThat(parseContext.timestamp(), nullValue());\n assertThat(parseContext.timestampResolved(), equalTo(false));\n }\n@@ -106,8 +106,8 @@ public void testParseTimestampAlone() throws Exception {\n md.parse(XContentFactory.xContent(bytes).createParser(bytes), parseContext);\n assertThat(parseContext.id(), nullValue());\n assertThat(parseContext.idResolved(), equalTo(false));\n- assertThat(parseContext.routing(), nullValue());\n- assertThat(parseContext.routingResolved(), equalTo(false));\n+ assertThat(parseContext.routing(), equalTo(\"routing_value\"));\n+ assertThat(parseContext.routingResolved(), equalTo(true));\n assertThat(parseContext.timestamp(), equalTo(\"1\"));\n assertThat(parseContext.timestampResolved(), equalTo(true));\n }\n@@ -160,8 +160,8 @@ public void testParseIdWithPath() throws Exception {\n md.parse(XContentFactory.xContent(bytes).createParser(bytes), parseContext);\n assertThat(parseContext.id(), equalTo(\"id\"));\n assertThat(parseContext.idResolved(), equalTo(true));\n- assertThat(parseContext.routing(), nullValue());\n- assertThat(parseContext.routingResolved(), equalTo(false));\n+ assertThat(parseContext.routing(), equalTo(\"routing_value\"));\n+ assertThat(parseContext.routingResolved(), equalTo(true));\n assertThat(parseContext.timestamp(), nullValue());\n assertThat(parseContext.timestampResolved(), equalTo(false));\n }\n@@ -202,8 +202,8 @@ public void testParseTimestampWithPath() throws Exception {\n md.parse(XContentFactory.xContent(bytes).createParser(bytes), parseContext);\n assertThat(parseContext.id(), nullValue());\n assertThat(parseContext.idResolved(), equalTo(false));\n- assertThat(parseContext.routing(), nullValue());\n- assertThat(parseContext.routingResolved(), equalTo(false));\n+ assertThat(parseContext.routing(), equalTo(\"routing_value\"));\n+ assertThat(parseContext.routingResolved(), equalTo(true));\n assertThat(parseContext.timestamp(), equalTo(\"1\"));\n assertThat(parseContext.timestampResolved(), equalTo(true));\n }",
"filename": "src/test/java/org/elasticsearch/cluster/metadata/MappingMetaDataParserTests.java",
"status": "modified"
},
{
"diff": "@@ -235,6 +235,7 @@ public void testRequiredRoutingWithPathMapping() throws Exception {\n client().admin().indices().prepareCreate(\"test\")\n .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n .startObject(\"_routing\").field(\"required\", true).field(\"path\", \"routing_field\").endObject()\n+ .startObject(\"routing_field\").field(\"type\", \"string\").field(\"index\", randomBoolean() ? \"no\" : \"not_analyzed\").field(\"doc_values\", randomBoolean() ? \"yes\" : \"no\").endObject()\n .endObject().endObject())\n .execute().actionGet();\n ensureGreen();\n@@ -303,12 +304,6 @@ public void testRequiredRoutingWithPathNumericType() throws Exception {\n client().admin().indices().prepareCreate(\"test\")\n .addMapping(\"type1\", XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n .startObject(\"_routing\").field(\"required\", true).field(\"path\", \"routing_field\").endObject()\n- .startObject(\"properties\")\n- .startObject(\"routing_field\")\n- .field(\"type\", \"long\")\n- .field(\"doc_values\", false) // TODO this test fails with doc values https://github.com/elasticsearch/elasticsearch/pull/5858\n- .endObject()\n- .endObject()\n .endObject().endObject())\n .execute().actionGet();\n ensureGreen();",
"filename": "src/test/java/org/elasticsearch/routing/SimpleRoutingTests.java",
"status": "modified"
}
]
} |
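A small reproduction sketch for the scenario this change covers, using 1.x-style syntax; the index, type and field names are illustrative, and enabling `doc_values` on the routing path field is what previously triggered the `ClassCastException` described in the issue.

```sh
# Mapping with path-based routing pointing at a doc-values field (1.x syntax, illustrative names).
curl -XPUT 'http://localhost:9200/test' -d '{
  "mappings": {
    "type1": {
      "_routing": { "required": true, "path": "routing_field" },
      "properties": {
        "routing_field": { "type": "string", "index": "not_analyzed", "doc_values": true }
      }
    }
  }
}'

# Indexing with a matching routing value succeeds; after this change a mismatch between the routing
# parameter and the value at routing_field is rejected up-front with a MapperParsingException.
curl -XPUT 'http://localhost:9200/test/type1/1?routing=foo' -d '{ "routing_field": "foo" }'
```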
{
"body": "Given the following setup:\n\n```\ncurl -XPUT 'http://localhost:9200/haschildtest/'\n\ncurl -XPUT 'localhost:9200/haschildtest/posts/_mapping' -d '\n{\n \"posts\":{\n \"_parent\":{\n \"type\":\"features\"\n },\n \"_routing\":{\n \"required\":true\n }\n }\n}'\n\ncurl -XPUT 'localhost:9200/haschildtest/features/feature1' -d '\n{\n \"title\": \"feature title 1\"\n}'\n\ncurl -XPUT 'localhost:9200/haschildtest/posts/post1?parent=feature1' -d '\n{\n \"specials\":{\n \"title\": \"jack\"\n }\n}'\n\ncurl -XPUT 'localhost:9200/haschildtest/specials/special1' -d '\n{\n \"title\": \"this somehow interferes with the has_child query\"\n}'\n```\n\nThis query runs correctly:\n\n```\ncurl -XPOST 'localhost:9200/haschildtest/features/_search?pretty=true' -d '\n{\n \"query\": {\n \"has_child\": {\n \"type\": \"posts\",\n \"query\": {\n \"match\": {\n \"specials.title\": \"jack\"\n }\n }\n }\n }\n}'\n```\n\nWhereas this query with the parent type specifed after the query, fails:\n\n```\ncurl -XPOST 'localhost:9200/haschildtest/features/_search?pretty=true' -d '\n{\n \"query\": {\n \"has_child\": {\n \"query\": {\n \"match\": {\n \"specials.title\": \"jack\"\n }\n },\n \"type\": \"posts\"\n }\n }\n}'\n```\n\nSomehow the second example is picking the `specials` type `title` field instead of the type specified in the `has_child` block. I have tested this on ES 1.0.2 and took me a while to figure out what was going on...\n",
"comments": [
{
"body": "Also see #5399\n",
"created_at": "2014-04-15T13:36:41Z"
},
{
"body": "I'm not entirely sure if these are related issues. The curiosity here is the position of `type` in the query. Why should its positioning after the match query affect the results?\n",
"created_at": "2014-04-15T20:54:32Z"
},
{
"body": "I think it's simply ignored and then you don't have a type restriction seems like a bug\n",
"created_at": "2014-04-15T21:08:24Z"
},
{
"body": "@MrHash this is a bug with the way that `has_child` queries are parsed before parsing out the \"type\" field, I'm working on a fix for this.\n",
"created_at": "2014-04-15T23:14:58Z"
},
{
"body": "when i use has_child to index and search ,i get no result return ,why?i have use losts of es version.please help me.\n",
"created_at": "2014-07-03T04:04:28Z"
},
{
"body": "@fanbiao Please ask these questions on the mailing list\n",
"created_at": "2014-07-03T10:29:10Z"
},
{
"body": "we could not retrieve the child document _source when i use parent-child query,how can i do that?\n",
"created_at": "2014-07-11T06:53:34Z"
}
],
"number": 5783,
"title": "HasChild query picks wrong type"
} | {
"body": "Fixes #5783\n",
"number": 5838,
"review_comments": [
{
"body": "we can do this in a more optimize manner. We can create a builder, and then use `copyCurrentStructure` on the builder (passing it a parser), to just copy it over, compared with parsing into a map and then serializing the map. Also, since its internal, I would use smile builder, as its considerably more efficient than json.\n",
"created_at": "2014-04-16T17:14:53Z"
},
{
"body": "this is not guaranteed to be smile! We randomize this in our tests I thing this should break if you run it multiple times.\n",
"created_at": "2014-04-16T19:36:44Z"
},
{
"body": "This isn't parsing this as smile, it's creating a internal parser that can be read from. I used smile because Shay recommended it as more efficient than a JSON builder.\n",
"created_at": "2014-04-16T19:40:50Z"
},
{
"body": "oh I misread! I wonder if we need this somewhere else and if we could represent this as an object ie something like this:\n\n```\n\npublic class XContentStructure {\n public XContentStructure( XContentParser parser, WhaterverWeNeedHere ) {\n // copy current struct...\n } \n\n public Query asQuery(String...types) {\n //... do it\n }\n\n public Filter asFilter(String...types) {\n //... do it\n }\n}\n```\n\ndoes this make sense?\n",
"created_at": "2014-04-16T19:58:41Z"
},
{
"body": "since this deals with parsing of query/filter, I don't think it should be in the common.xcontent package, maybe place it in the same package as `QueryParserContext`?\n",
"created_at": "2014-04-17T18:35:40Z"
},
{
"body": "so with this class, we give up on the optimization of not copying over the content if we already have the type? I just wasn't to make sure its explicit.\n",
"created_at": "2014-04-17T18:36:29Z"
},
{
"body": "That's correct, I'll add more detail to the javadoc about the tradeoffs of this.\n",
"created_at": "2014-04-17T20:07:16Z"
},
{
"body": "I think `org.elasticsearch.index.query` contains almost entirely builder/parsers and QueryParseContext, I think `org.elasticsearch.index.query.support` would fit better\n",
"created_at": "2014-04-17T20:11:52Z"
},
{
"body": "Maybe we can remain streaming parsing if the type was parsed before the query? If the `XContentStructure` class has 2 set methods setTypes() and setQuery(), then the latter method can be smart, so that if the type is already know it would immediately create the Lucene query. If the type isn't know it would just keep the unparsed query around in a BytesReference field.\n\nI see that would change this class completely, but I really like to do streaming parsing if possible.\n",
"created_at": "2014-04-18T04:35:35Z"
},
{
"body": "I think this is possible with some refactoring, I will work on adding it.\n",
"created_at": "2014-04-18T14:36:22Z"
},
{
"body": "Awesome :)\n\nOn 18 April 2014 21:36, Lee Hinman notifications@github.com wrote:\n\n> In\n> src/main/java/org/elasticsearch/index/query/support/XContentStructure.java:\n> \n> > +import org.elasticsearch.common.xcontent.XContentFactory;\n> > +import org.elasticsearch.common.xcontent.XContentHelper;\n> > +import org.elasticsearch.common.xcontent.XContentParser;\n> > +import org.elasticsearch.index.query.QueryParseContext;\n> > +\n> > +import java.io.IOException;\n> > +\n> > +/**\n> > - \\* XContentStructure is a class used to capture a subset of query, to be parsed\n> > - \\* at a later time when more information (in this case, types) is available.\n> > - \\* Note that using this class requires copying the parser's data, which will\n> > - \\* result in additional overhead versus parsing the inner query/filter\n> > - \\* immediately, however, the extra overhead means that the type not be\n> > - \\* extracted prior to query parsing (in the case of unordered JSON).\n> > - */\n> > +public class XContentStructure {\n> \n> I think this is possible with some refactoring, I will work on adding it.\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/pull/5838/files?utm_campaign=website&utm_source=sendgrid.com&utm_medium=email#r11773450\n> .\n\n## \n\nMet vriendelijke groet,\n\nMartijn van Groningen\n",
"created_at": "2014-04-18T18:06:43Z"
}
],
"title": "Parse has_child query/filter after child type has been parsed"
} | {
"commits": [
{
"message": "Parse has_child query/filter after child type has been parsed\n\nFixes #5783\nFixes #5838"
}
],
"files": [
{
"diff": "@@ -22,9 +22,9 @@\n import org.apache.lucene.search.Query;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.lucene.search.XConstantScoreQuery;\n import org.elasticsearch.common.lucene.search.XFilteredQuery;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.query.support.XContentStructure;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n@@ -56,38 +56,30 @@ public String[] names() {\n public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n XContentParser parser = parseContext.parser();\n \n- Query query = null;\n boolean queryFound = false;\n+ boolean filterFound = false;\n String childType = null;\n int shortCircuitParentDocSet = 8192; // Tests show a cut of point between 8192 and 16384.\n \n String filterName = null;\n String currentFieldName = null;\n XContentParser.Token token;\n+ XContentStructure.InnerQuery innerQuery = null;\n+ XContentStructure.InnerFilter innerFilter = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n+ // Usually, the query would be parsed here, but the child\n+ // type may not have been extracted yet, so use the\n+ // XContentStructure.<type> facade to parse if available,\n+ // or delay parsing if not.\n if (\"query\".equals(currentFieldName)) {\n- // TODO we need to set the type, but, `query` can come before `type`...\n- // since we switch types, make sure we change the context\n- String[] origTypes = QueryParseContext.setTypesWithPrevious(childType == null ? null : new String[]{childType});\n- try {\n- query = parseContext.parseInnerQuery();\n- queryFound = true;\n- } finally {\n- QueryParseContext.setTypes(origTypes);\n- }\n+ innerQuery = new XContentStructure.InnerQuery(parseContext, childType == null ? null : new String[] {childType});\n+ queryFound = true;\n } else if (\"filter\".equals(currentFieldName)) {\n- // TODO handle `filter` element before `type` element...\n- String[] origTypes = QueryParseContext.setTypesWithPrevious(childType == null ? null : new String[]{childType});\n- try {\n- Filter innerFilter = parseContext.parseInnerFilter();\n- query = new XConstantScoreQuery(innerFilter);\n- queryFound = true;\n- } finally {\n- QueryParseContext.setTypes(origTypes);\n- }\n+ innerFilter = new XContentStructure.InnerFilter(parseContext, childType == null ? 
null : new String[] {childType});\n+ filterFound = true;\n } else {\n throw new QueryParsingException(parseContext.index(), \"[has_child] filter does not support [\" + currentFieldName + \"]\");\n }\n@@ -109,16 +101,24 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n }\n }\n }\n- if (!queryFound) {\n- throw new QueryParsingException(parseContext.index(), \"[has_child] filter requires 'query' field\");\n- }\n- if (query == null) {\n- return null;\n+ if (!queryFound && !filterFound) {\n+ throw new QueryParsingException(parseContext.index(), \"[has_child] filter requires 'query' or 'filter' field\");\n }\n if (childType == null) {\n throw new QueryParsingException(parseContext.index(), \"[has_child] filter requires 'type' field\");\n }\n \n+ Query query;\n+ if (queryFound) {\n+ query = innerQuery.asQuery(childType);\n+ } else {\n+ query = innerFilter.asFilter(childType);\n+ }\n+\n+ if (query == null) {\n+ return null;\n+ }\n+\n DocumentMapper childDocMapper = parseContext.mapperService().documentMapper(childType);\n if (childDocMapper == null) {\n throw new QueryParsingException(parseContext.index(), \"No mapping for for type [\" + childType + \"]\");",
"filename": "src/main/java/org/elasticsearch/index/query/HasChildFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.lucene.search.XConstantScoreQuery;\n import org.elasticsearch.common.lucene.search.XFilteredQuery;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.query.support.XContentStructure;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n@@ -55,7 +56,6 @@ public String[] names() {\n public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n XContentParser parser = parseContext.parser();\n \n- Query innerQuery = null;\n boolean queryFound = false;\n float boost = 1.0f;\n String childType = null;\n@@ -65,20 +65,18 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n String currentFieldName = null;\n XContentParser.Token token;\n+ XContentStructure.InnerQuery iq = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n+ // Usually, the query would be parsed here, but the child\n+ // type may not have been extracted yet, so use the\n+ // XContentStructure.<type> facade to parse if available,\n+ // or delay parsing if not.\n if (\"query\".equals(currentFieldName)) {\n- // TODO we need to set the type, but, `query` can come before `type`... (see HasChildFilterParser)\n- // since we switch types, make sure we change the context\n- String[] origTypes = QueryParseContext.setTypesWithPrevious(childType == null ? null : new String[]{childType});\n- try {\n- innerQuery = parseContext.parseInnerQuery();\n- queryFound = true;\n- } finally {\n- QueryParseContext.setTypes(origTypes);\n- }\n+ iq = new XContentStructure.InnerQuery(parseContext, childType == null ? null : new String[] {childType});\n+ queryFound = true;\n } else {\n throw new QueryParsingException(parseContext.index(), \"[has_child] query does not support [\" + currentFieldName + \"]\");\n }\n@@ -111,12 +109,15 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n if (!queryFound) {\n throw new QueryParsingException(parseContext.index(), \"[has_child] requires 'query' field\");\n }\n- if (innerQuery == null) {\n- return null;\n- }\n if (childType == null) {\n throw new QueryParsingException(parseContext.index(), \"[has_child] requires 'type' field\");\n }\n+\n+ Query innerQuery = iq.asQuery(childType);\n+\n+ if (innerQuery == null) {\n+ return null;\n+ }\n innerQuery.setBoost(boost);\n \n DocumentMapper childDocMapper = parseContext.mapperService().documentMapper(childType);",
"filename": "src/main/java/org/elasticsearch/index/query/HasChildQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -25,10 +25,9 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.lucene.search.NotFilter;\n import org.elasticsearch.common.lucene.search.XBooleanFilter;\n-import org.elasticsearch.common.lucene.search.XConstantScoreQuery;\n import org.elasticsearch.common.lucene.search.XFilteredQuery;\n import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.index.cache.filter.support.CacheKeyFilter;\n+import org.elasticsearch.index.query.support.XContentStructure;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n@@ -61,38 +60,29 @@ public String[] names() {\n public Filter parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n XContentParser parser = parseContext.parser();\n \n- Query query = null;\n boolean queryFound = false;\n+ boolean filterFound = false;\n String parentType = null;\n \n- boolean cache = false;\n- CacheKeyFilter.Key cacheKey = null;\n String filterName = null;\n String currentFieldName = null;\n XContentParser.Token token;\n+ XContentStructure.InnerQuery innerQuery = null;\n+ XContentStructure.InnerFilter innerFilter = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n+ // Usually, the query would be parsed here, but the child\n+ // type may not have been extracted yet, so use the\n+ // XContentStructure.<type> facade to parse if available,\n+ // or delay parsing if not.\n if (\"query\".equals(currentFieldName)) {\n- // TODO handle `query` element before `type` element...\n- String[] origTypes = QueryParseContext.setTypesWithPrevious(parentType == null ? null : new String[]{parentType});\n- try {\n- query = parseContext.parseInnerQuery();\n- queryFound = true;\n- } finally {\n- QueryParseContext.setTypes(origTypes);\n- }\n+ innerQuery = new XContentStructure.InnerQuery(parseContext, parentType == null ? null : new String[] {parentType});\n+ queryFound = true;\n } else if (\"filter\".equals(currentFieldName)) {\n- // TODO handle `filter` element before `type` element...\n- String[] origTypes = QueryParseContext.setTypesWithPrevious(parentType == null ? null : new String[]{parentType});\n- try {\n- Filter innerFilter = parseContext.parseInnerFilter();\n- query = new XConstantScoreQuery(innerFilter);\n- queryFound = true;\n- } finally {\n- QueryParseContext.setTypes(origTypes);\n- }\n+ innerFilter = new XContentStructure.InnerFilter(parseContext, parentType == null ? 
null : new String[] {parentType});\n+ filterFound = true;\n } else {\n throw new QueryParsingException(parseContext.index(), \"[has_parent] filter does not support [\" + currentFieldName + \"]\");\n }\n@@ -104,25 +94,32 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n } else if (\"_name\".equals(currentFieldName)) {\n filterName = parser.text();\n } else if (\"_cache\".equals(currentFieldName)) {\n- cache = parser.booleanValue();\n+ // noop to be backwards compatible\n } else if (\"_cache_key\".equals(currentFieldName) || \"_cacheKey\".equals(currentFieldName)) {\n- cacheKey = new CacheKeyFilter.Key(parser.text());\n+ // noop to be backwards compatible\n } else {\n throw new QueryParsingException(parseContext.index(), \"[has_parent] filter does not support [\" + currentFieldName + \"]\");\n }\n }\n }\n- if (!queryFound) {\n- throw new QueryParsingException(parseContext.index(), \"[has_parent] filter requires 'query' field\");\n+ if (!queryFound && !filterFound) {\n+ throw new QueryParsingException(parseContext.index(), \"[has_parent] filter requires 'query' or 'filter' field\");\n }\n- if (query == null) {\n- return null;\n- }\n-\n if (parentType == null) {\n throw new QueryParsingException(parseContext.index(), \"[has_parent] filter requires 'parent_type' field\");\n }\n \n+ Query query;\n+ if (queryFound) {\n+ query = innerQuery.asQuery(parentType);\n+ } else {\n+ query = innerFilter.asFilter(parentType);\n+ }\n+\n+ if (query == null) {\n+ return null;\n+ }\n+\n DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType);\n if (parentDocMapper == null) {\n throw new QueryParsingException(parseContext.index(), \"[has_parent] filter configured 'parent_type' [\" + parentType + \"] is not a valid type\");",
"filename": "src/main/java/org/elasticsearch/index/query/HasParentFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.common.lucene.search.XConstantScoreQuery;\n import org.elasticsearch.common.lucene.search.XFilteredQuery;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.query.support.XContentStructure;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n@@ -58,7 +59,6 @@ public String[] names() {\n public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n XContentParser parser = parseContext.parser();\n \n- Query innerQuery = null;\n boolean queryFound = false;\n float boost = 1.0f;\n String parentType = null;\n@@ -67,19 +67,18 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n String currentFieldName = null;\n XContentParser.Token token;\n+ XContentStructure.InnerQuery iq = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n+ // Usually, the query would be parsed here, but the child\n+ // type may not have been extracted yet, so use the\n+ // XContentStructure.<type> facade to parse if available,\n+ // or delay parsing if not.\n if (\"query\".equals(currentFieldName)) {\n- // TODO handle `query` element before `type` element...\n- String[] origTypes = QueryParseContext.setTypesWithPrevious(parentType == null ? null : new String[]{parentType});\n- try {\n- innerQuery = parseContext.parseInnerQuery();\n- queryFound = true;\n- } finally {\n- QueryParseContext.setTypes(origTypes);\n- }\n+ iq = new XContentStructure.InnerQuery(parseContext, parentType == null ? null : new String[] {parentType});\n+ queryFound = true;\n } else {\n throw new QueryParsingException(parseContext.index(), \"[has_parent] query does not support [\" + currentFieldName + \"]\");\n }\n@@ -114,14 +113,16 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n if (!queryFound) {\n throw new QueryParsingException(parseContext.index(), \"[has_parent] query requires 'query' field\");\n }\n- if (innerQuery == null) {\n- return null;\n- }\n-\n if (parentType == null) {\n throw new QueryParsingException(parseContext.index(), \"[has_parent] query requires 'parent_type' field\");\n }\n \n+ Query innerQuery = iq.asQuery(parentType);\n+\n+ if (innerQuery == null) {\n+ return null;\n+ }\n+\n DocumentMapper parentDocMapper = parseContext.mapperService().documentMapper(parentType);\n if (parentDocMapper == null) {\n throw new QueryParsingException(parseContext.index(), \"[has_parent] query configured 'parent_type' [\" + parentType + \"] is not a valid type\");",
"filename": "src/main/java/org/elasticsearch/index/query/HasParentQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.query.support.XContentStructure;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -54,33 +55,25 @@ public String[] names() {\n public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n XContentParser parser = parseContext.parser();\n \n- Query query = null;\n- Query noMatchQuery = Queries.newMatchAllQuery();\n+ Query noMatchQuery = null;\n boolean queryFound = false;\n boolean indicesFound = false;\n boolean currentIndexMatchesIndices = false;\n String queryName = null;\n \n String currentFieldName = null;\n XContentParser.Token token;\n+ XContentStructure.InnerQuery innerQuery = null;\n+ XContentStructure.InnerQuery innerNoMatchQuery = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"query\".equals(currentFieldName)) {\n- //TODO We are able to decide whether to parse the query or not only if indices in the query appears first\n+ innerQuery = new XContentStructure.InnerQuery(parseContext, null);\n queryFound = true;\n- if (indicesFound && !currentIndexMatchesIndices) {\n- parseContext.parser().skipChildren(); // skip the query object without parsing it\n- } else {\n- query = parseContext.parseInnerQuery();\n- }\n } else if (\"no_match_query\".equals(currentFieldName)) {\n- if (indicesFound && currentIndexMatchesIndices) {\n- parseContext.parser().skipChildren(); // skip the query object without parsing it\n- } else {\n- noMatchQuery = parseContext.parseInnerQuery();\n- }\n+ innerNoMatchQuery = new XContentStructure.InnerQuery(parseContext, null);\n } else {\n throw new QueryParsingException(parseContext.index(), \"[indices] query does not support [\" + currentFieldName + \"]\");\n }\n@@ -132,9 +125,19 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n Query chosenQuery;\n if (currentIndexMatchesIndices) {\n- chosenQuery = query;\n+ chosenQuery = innerQuery.asQuery();\n } else {\n- chosenQuery = noMatchQuery;\n+ // If noMatchQuery is set, it means \"no_match_query\" was \"all\" or \"none\"\n+ if (noMatchQuery != null) {\n+ chosenQuery = noMatchQuery;\n+ } else {\n+ // There might be no \"no_match_query\" set, so default to the match_all if not set\n+ if (innerNoMatchQuery == null) {\n+ chosenQuery = Queries.newMatchAllQuery();\n+ } else {\n+ chosenQuery = innerNoMatchQuery.asQuery();\n+ }\n+ }\n }\n if (queryName != null) {\n parseContext.addNamedQuery(queryName, chosenQuery);",
"filename": "src/main/java/org/elasticsearch/index/query/IndicesQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -113,6 +113,10 @@ public Index index() {\n return this.index;\n }\n \n+ public void parser(XContentParser parser) {\n+ this.parser = parser;\n+ }\n+\n public XContentParser parser() {\n return parser;\n }",
"filename": "src/main/java/org/elasticsearch/index/query/QueryParseContext.java",
"status": "modified"
},
{
"diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.lucene.search.XFilteredQuery;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.query.support.XContentStructure;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n@@ -55,7 +56,6 @@ public String[] names() {\n public Query parse(QueryParseContext parseContext) throws IOException, QueryParsingException {\n XContentParser parser = parseContext.parser();\n \n- Query innerQuery = null;\n boolean queryFound = false;\n float boost = 1.0f;\n String childType = null;\n@@ -66,20 +66,18 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n \n String currentFieldName = null;\n XContentParser.Token token;\n+ XContentStructure.InnerQuery iq = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n+ // Usually, the query would be parsed here, but the child\n+ // type may not have been extracted yet, so use the\n+ // XContentStructure.<type> facade to parse if available,\n+ // or delay parsing if not.\n if (\"query\".equals(currentFieldName)) {\n+ iq = new XContentStructure.InnerQuery(parseContext, childType == null ? null : new String[] {childType});\n queryFound = true;\n- // TODO we need to set the type, but, `query` can come before `type`... (see HasChildFilterParser)\n- // since we switch types, make sure we change the context\n- String[] origTypes = QueryParseContext.setTypesWithPrevious(childType == null ? null : new String[]{childType});\n- try {\n- innerQuery = parseContext.parseInnerQuery();\n- } finally {\n- QueryParseContext.setTypes(origTypes);\n- }\n } else {\n throw new QueryParsingException(parseContext.index(), \"[top_children] query does not support [\" + currentFieldName + \"]\");\n }\n@@ -112,6 +110,8 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n throw new QueryParsingException(parseContext.index(), \"[top_children] requires 'type' field\");\n }\n \n+ Query innerQuery = iq.asQuery(childType);\n+\n if (innerQuery == null) {\n return null;\n }",
"filename": "src/main/java/org/elasticsearch/index/query/TopChildrenQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,199 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.query.support;\n+\n+import org.apache.lucene.search.Filter;\n+import org.apache.lucene.search.Query;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.bytes.BytesReference;\n+import org.elasticsearch.common.lucene.search.XConstantScoreQuery;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentHelper;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.index.query.QueryParseContext;\n+\n+import java.io.IOException;\n+\n+/**\n+ * XContentStructure is a class used to capture a subset of query, to be parsed\n+ * at a later time when more information (in this case, types) is available.\n+ * Note that using this class requires copying the parser's data, which will\n+ * result in additional overhead versus parsing the inner query/filter\n+ * immediately, however, the extra overhead means that the type not be\n+ * extracted prior to query parsing (in the case of unordered JSON).\n+ */\n+public abstract class XContentStructure {\n+\n+ private final QueryParseContext parseContext;\n+ private BytesReference innerBytes;\n+\n+ /**\n+ * Create a new XContentStructure for the current parsing context.\n+ */\n+ public XContentStructure(QueryParseContext queryParseContext) {\n+ this.parseContext = queryParseContext;\n+ }\n+\n+ /**\n+ * \"Freeze\" the parsing content, which means copying the current parser's\n+ * structure into an internal {@link BytesReference} to be parsed later.\n+ * @return the original XContentStructure object\n+ */\n+ public XContentStructure freeze() throws IOException {\n+ this.bytes(XContentFactory.smileBuilder().copyCurrentStructure(parseContext.parser()).bytes());\n+ return this;\n+ }\n+\n+ /**\n+ * Set the bytes to be used for parsing\n+ */\n+ public void bytes(BytesReference innerBytes) {\n+ this.innerBytes = innerBytes;\n+ }\n+\n+ /**\n+ * Return the bytes that are going to be used for parsing\n+ */\n+ public BytesReference bytes() {\n+ return this.innerBytes;\n+ }\n+\n+ /**\n+ * Use the captured bytes to parse the inner query using the specified\n+ * types. The original QueryParseContext's parser is switched during this\n+ * parsing, so this method is NOT thread-safe.\n+ * @param types types to be used during the inner query parsing\n+ * @return {@link Query} parsed from the bytes captured in {@code freeze()}\n+ */\n+ public Query asQuery(String... 
types) throws IOException {\n+ BytesReference br = this.bytes();\n+ assert br != null : \"innerBytes must be set with .bytes(bytes) or .freeze() before parsing\";\n+ XContentParser innerParser = XContentHelper.createParser(br);\n+ String[] origTypes = QueryParseContext.setTypesWithPrevious(types);\n+ XContentParser old = parseContext.parser();\n+ parseContext.parser(innerParser);\n+ try {\n+ return parseContext.parseInnerQuery();\n+ } finally {\n+ parseContext.parser(old);\n+ QueryParseContext.setTypes(origTypes);\n+ }\n+ }\n+\n+ /**\n+ * Use the captured bytes to parse the inner filter using the specified\n+ * types. The original QueryParseContext's parser is switched during this\n+ * parsing, so this method is NOT thread-safe.\n+ * @param types types to be used during the inner filter parsing\n+ * @return {@link XConstantScoreQuery} wrapping the filter parsed from the bytes captured in {@code freeze()}\n+ */\n+ public Query asFilter(String... types) throws IOException {\n+ BytesReference br = this.bytes();\n+ assert br != null : \"innerBytes must be set with .bytes(bytes) or .freeze() before parsing\";\n+ XContentParser innerParser = XContentHelper.createParser(br);\n+ String[] origTypes = QueryParseContext.setTypesWithPrevious(types);\n+ XContentParser old = parseContext.parser();\n+ parseContext.parser(innerParser);\n+ try {\n+ Filter innerFilter = parseContext.parseInnerFilter();\n+ return new XConstantScoreQuery(innerFilter);\n+ } finally {\n+ parseContext.parser(old);\n+ QueryParseContext.setTypes(origTypes);\n+ }\n+ }\n+\n+ /**\n+ * InnerQuery is an extension of {@code XContentStructure} that eagerly\n+ * parses the query in a streaming manner if the types are available at\n+ * construction time.\n+ */\n+ public static class InnerQuery extends XContentStructure {\n+ private Query query = null;\n+\n+ public InnerQuery(QueryParseContext parseContext1, @Nullable String... types) throws IOException {\n+ super(parseContext1);\n+ if (types != null) {\n+ String[] origTypes = QueryParseContext.setTypesWithPrevious(types);\n+ try {\n+ query = parseContext1.parseInnerQuery();\n+ } finally {\n+ QueryParseContext.setTypes(origTypes);\n+ }\n+ } else {\n+ BytesReference innerBytes = XContentFactory.smileBuilder().copyCurrentStructure(parseContext1.parser()).bytes();\n+ super.bytes(innerBytes);\n+ }\n+ }\n+\n+ /**\n+ * Return the query represented by the XContentStructure object,\n+ * returning the cached Query if it has already been parsed.\n+ * @param types types to be used during the inner query parsing\n+ */\n+ @Override\n+ public Query asQuery(String... types) throws IOException {\n+ if (this.query == null) {\n+ this.query = super.asQuery(types);\n+ }\n+ return this.query;\n+ }\n+ }\n+\n+ /**\n+ * InnerFilter is an extension of {@code XContentStructure} that eagerly\n+ * parses the filter in a streaming manner if the types are available at\n+ * construction time.\n+ */\n+ public static class InnerFilter extends XContentStructure {\n+ private Query query = null;\n+\n+ public InnerFilter(QueryParseContext parseContext1, @Nullable String... 
types) throws IOException {\n+ super(parseContext1);\n+ if (types != null) {\n+ String[] origTypes = QueryParseContext.setTypesWithPrevious(types);\n+ try {\n+ Filter innerFilter = parseContext1.parseInnerFilter();\n+ query = new XConstantScoreQuery(innerFilter);\n+ } finally {\n+ QueryParseContext.setTypes(origTypes);\n+ }\n+ } else {\n+ BytesReference innerBytes = XContentFactory.smileBuilder().copyCurrentStructure(parseContext1.parser()).bytes();\n+ super.bytes(innerBytes);\n+ }\n+ }\n+\n+ /**\n+ * Return the filter as an\n+ * {@link org.elasticsearch.common.lucene.search.XConstantScoreQuery}\n+ * represented by the XContentStructure object,\n+ * returning the cached Query if it has already been parsed.\n+ * @param types types to be used during the inner filter parsing\n+ */\n+ @Override\n+ public Query asFilter(String... types) throws IOException {\n+ if (this.query == null) {\n+ this.query = super.asFilter(types);\n+ }\n+ return this.query;\n+ }\n+ }\n+}",
"filename": "src/main/java/org/elasticsearch/index/query/support/XContentStructure.java",
"status": "added"
},
{
"diff": "@@ -1924,6 +1924,31 @@ public void testValidateThatHasChildAndHasParentFilterAreNeverCached() throws Ex\n assertThat(statsResponse.getIndex(\"test\").getTotal().getFilterCache().getMemorySizeInBytes(), greaterThan(initialCacheSize));\n }\n \n+ // https://github.com/elasticsearch/elasticsearch/issues/5783\n+ @Test\n+ public void testQueryBeforeChildType() throws Exception {\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"features\")\n+ .addMapping(\"posts\", \"_parent\", \"type=features\")\n+ .addMapping(\"specials\"));\n+ ensureGreen();\n+\n+ client().prepareIndex(\"test\", \"features\", \"1\").setSource(\"field\", \"foo\").get();\n+ client().prepareIndex(\"test\", \"posts\", \"1\").setParent(\"1\").setSource(\"field\", \"bar\").get();\n+ refresh();\n+\n+ SearchResponse resp;\n+ resp = client().prepareSearch(\"test\")\n+ .setSource(\"{\\\"query\\\": {\\\"has_child\\\": {\\\"type\\\": \\\"posts\\\", \\\"query\\\": {\\\"match\\\": {\\\"field\\\": \\\"bar\\\"}}}}}\").get();\n+ assertHitCount(resp, 1L);\n+\n+ // Now reverse the order for the type after the query\n+ resp = client().prepareSearch(\"test\")\n+ .setSource(\"{\\\"query\\\": {\\\"has_child\\\": {\\\"query\\\": {\\\"match\\\": {\\\"field\\\": \\\"bar\\\"}}, \\\"type\\\": \\\"posts\\\"}}}\").get();\n+ assertHitCount(resp, 1L);\n+\n+ }\n+\n private static HasChildFilterBuilder hasChildFilter(String type, QueryBuilder queryBuilder) {\n HasChildFilterBuilder hasChildFilterBuilder = FilterBuilders.hasChildFilter(type, queryBuilder);\n hasChildFilterBuilder.setShortCircuitCutoff(randomInt(10));",
"filename": "src/test/java/org/elasticsearch/search/child/SimpleChildQuerySearchTests.java",
"status": "modified"
}
]
} |
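Aside: the fix above defers parsing of the inner `has_child`/`has_parent` query until the child or parent type is known, by copying the parser's current structure into bytes. Below is a minimal sketch of that capture-then-parse idea, using the `XContentFactory`/`XContentHelper` calls visible in the `XContentStructure` diff; the class and method names here are illustrative, not the PR's actual API.

```java
import java.io.IOException;

import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;

// Sketch of the "capture now, parse later" idea used by the XContentStructure diff above.
public class DeferredInnerQuerySketch {

    // Copy the parser's current object (the inner query) into smile-encoded bytes so that
    // parsing can be postponed until the child/parent type has been read from the request.
    static BytesReference capture(XContentParser parser) throws IOException {
        return XContentFactory.smileBuilder().copyCurrentStructure(parser).bytes();
    }

    // Once the type is known, re-create a parser over the captured bytes and hand it to the
    // regular inner-query parsing code (parseContext.parseInnerQuery() in the PR).
    static XContentParser reparse(BytesReference captured) throws IOException {
        return XContentHelper.createParser(captured);
    }
}
```

In the PR itself this is wrapped in `XContentStructure.InnerQuery`/`InnerFilter`, which still parse eagerly when the type happens to appear before the query, so the extra copy is only paid for unordered JSON.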
{
"body": "When constructing aggs, there might be failures (like validation failures) that can cause pages obtained at a previous step in the agg construction not to be released.\n",
"comments": [],
"number": 5703,
"title": "Failures in constructing aggs can cause pages not to be released"
} | {
"body": "For resources that have their life time effectively defined by the search\ncontext they are attached to, it is convenient to use the search context to\nschedule the release of such resources.\n\nThis commit changes aggregations to use this mechanism and also introduces\na `Lifetime` object that can be used to define how long the object should\nlive:\n- COLLECTION: if the object only needs to live during collection time and is\n what SearchContext.addReleasable would have chosen before this change\n (used for p/c queries),\n- PHASE for resources that only need to live during the current search\n phase (DFS, QUERY or FETCH),\n- CONTEXT for resources that need to live until the context is\n destroyed.\n\nAggregators are currently registed with CONTEXT. The reason is that when\nusing the DFS_QUERY_THEN_FETCH search type, they are allocated during the DFS\nphase but only used during the QUERY phase. However we should fix it in order\nto only allocate them during the QUERY phase and use PHASE as a life\ntime.\n\nClose #5703\n",
"number": 5799,
"review_comments": [
{
"body": "maybe call them `COLLECTION, PHASE, CONTEXT` since they are true subsets?\n",
"created_at": "2014-04-14T14:21:37Z"
},
{
"body": "for style can we maybe invert the if statements here ie `if (!masterCopy.isEmpty)` and `if (!success)`? I like to have only one return statement at the end of the method\n",
"created_at": "2014-04-14T14:22:58Z"
}
],
"title": "Improved SearchContext.addReleasable."
} | {
"commits": [
{
"message": "Improved SearchContext.addReleasable.\n\nFor resources that have their life time effectively defined by the search\ncontext they are attached to, it is convenient to use the search context to\nschedule the release of such resources.\n\nThis commit changes aggregations to use this mechanism and also introduces\na `Lifetime` object that can be used to define how long the object should\nlive:\n - COLLECTION: if the object only needs to live during collection time and is\n what SearchContext.addReleasable would have chosen before this change\n (used for p/c queries),\n - SEARCH_PHASE for resources that only need to live during the current search\n phase (DFS, QUERY or FETCH),\n - SEARCH_CONTEXT for resources that need to live until the context is\n destroyed.\n\nAggregators are currently registed with SEARCH_CONTEXT. The reason is that when\nusing the DFS_QUERY_THEN_FETCH search type, they are allocated during the DFS\nphase but only used during the QUERY phase. However we should fix it in order\nto only allocate them during the QUERY phase and use SEARCH_PHASE as a life\ntime.\n\nClose #5703"
},
{
"message": "Rename SEARCH_PHASE to PHASE and SEARCH_CONTEXT to CONTEXT."
},
{
"message": "Inverse conditions."
}
],
"files": [
{
"diff": "@@ -125,7 +125,7 @@ protected PrimaryResponse<ShardDeleteByQueryResponse, ShardDeleteByQueryRequest>\n indexShard.deleteByQuery(deleteByQuery);\n } finally {\n SearchContext searchContext = SearchContext.current();\n- searchContext.clearAndRelease();\n+ searchContext.release();\n SearchContext.removeCurrent();\n }\n return new PrimaryResponse<>(shardRequest.request, new ShardDeleteByQueryResponse(), null);\n@@ -148,7 +148,7 @@ protected void shardOperationOnReplica(ReplicaOperationRequest shardRequest) {\n indexShard.deleteByQuery(deleteByQuery);\n } finally {\n SearchContext searchContext = SearchContext.current();\n- searchContext.clearAndRelease();\n+ searchContext.release();\n SearchContext.removeCurrent();\n }\n }",
"filename": "src/main/java/org/elasticsearch/action/deletebyquery/TransportShardDeleteByQueryAction.java",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.Set;\n@@ -123,7 +124,7 @@ public Weight createWeight(IndexSearcher searcher) throws IOException {\n shortCircuitFilter = new ParentIdsFilter(parentType, nonNestedDocsFilter, parentIds);\n }\n final ParentWeight parentWeight = new ParentWeight(parentFilter, shortCircuitFilter, parentIds);\n- searchContext.addReleasable(parentWeight);\n+ searchContext.addReleasable(parentWeight, Lifetime.COLLECTION);\n releaseParentIds = false;\n return parentWeight;\n } finally {",
"filename": "src/main/java/org/elasticsearch/index/search/child/ChildrenConstantScoreQuery.java",
"status": "modified"
},
{
"diff": "@@ -41,6 +41,7 @@\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.Arrays;\n@@ -219,7 +220,7 @@ public Weight createWeight(IndexSearcher searcher) throws IOException {\n parentFilter = new ApplyAcceptedDocsFilter(this.parentFilter);\n }\n ParentWeight parentWeight = new ParentWeight(rewrittenChildQuery.createWeight(searcher), parentFilter, size, parentIds, scores, occurrences);\n- searchContext.addReleasable(parentWeight);\n+ searchContext.addReleasable(parentWeight, Lifetime.COLLECTION);\n return parentWeight;\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/search/child/ChildrenQuery.java",
"status": "modified"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.lucene.docset.DocIdSets;\n import org.elasticsearch.common.lucene.search.NoCacheFilter;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.IdentityHashMap;\n@@ -66,7 +67,7 @@ public DocIdSet getDocIdSet(final AtomicReaderContext context, final Bits accept\n IndexSearcher searcher = searchContext.searcher();\n docIdSets = new IdentityHashMap<>();\n this.searcher = searcher;\n- searchContext.addReleasable(this);\n+ searchContext.addReleasable(this, Lifetime.COLLECTION);\n \n final Weight weight = searcher.createNormalizedWeight(query);\n for (final AtomicReaderContext leaf : searcher.getTopReaderContext().leaves()) {",
"filename": "src/main/java/org/elasticsearch/index/search/child/CustomQueryWrappingFilter.java",
"status": "modified"
},
{
"diff": "@@ -36,6 +36,7 @@\n import org.elasticsearch.index.fielddata.ordinals.Ordinals;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.Set;\n@@ -104,7 +105,7 @@ public Weight createWeight(IndexSearcher searcher) throws IOException {\n }\n \n final ChildrenWeight childrenWeight = new ChildrenWeight(childrenFilter, parentIds);\n- searchContext.addReleasable(childrenWeight);\n+ searchContext.addReleasable(childrenWeight, Lifetime.COLLECTION);\n releaseParentIds = false;\n return childrenWeight;\n } finally {",
"filename": "src/main/java/org/elasticsearch/index/search/child/ParentConstantScoreQuery.java",
"status": "modified"
},
{
"diff": "@@ -40,6 +40,7 @@\n import org.elasticsearch.index.fielddata.ordinals.Ordinals;\n import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.Set;\n@@ -156,7 +157,7 @@ public Weight createWeight(IndexSearcher searcher) throws IOException {\n Releasables.release(collector.parentIds, collector.scores);\n }\n }\n- searchContext.addReleasable(childWeight);\n+ searchContext.addReleasable(childWeight, Lifetime.COLLECTION);\n return childWeight;\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/search/child/ParentQuery.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.Arrays;\n@@ -168,7 +169,7 @@ public Weight createWeight(IndexSearcher searcher) throws IOException {\n }\n \n ParentWeight parentWeight = new ParentWeight(rewrittenChildQuery.createWeight(searcher), parentDocs);\n- searchContext.addReleasable(parentWeight);\n+ searchContext.addReleasable(parentWeight, Lifetime.COLLECTION);\n return parentWeight;\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/search/child/TopChildrenQuery.java",
"status": "modified"
},
{
"diff": "@@ -23,12 +23,11 @@\n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.IndexableField;\n import org.apache.lucene.search.*;\n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.percolate.PercolateShardRequest;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.CacheRecycler;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n-import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.HashedBytesRef;\n import org.elasticsearch.common.text.StringText;\n import org.elasticsearch.common.util.BigArrays;\n@@ -207,18 +206,15 @@ public SearchLookup lookup() {\n }\n \n @Override\n- public boolean release() throws ElasticsearchException {\n+ protected void doRelease() {\n try {\n if (docSearcher != null) {\n IndexReader indexReader = docSearcher.reader();\n fieldDataService.clear(indexReader);\n indexService.cache().clear(indexReader);\n- return docSearcher.release();\n- } else {\n- return false;\n }\n } finally {\n- engineSearcher.release();\n+ Releasables.release(docSearcher, engineSearcher);\n }\n }\n \n@@ -295,11 +291,6 @@ public SearchContext facets(SearchContextFacets facets) {\n }\n \n // Unused:\n- @Override\n- public boolean clearAndRelease() {\n- throw new UnsupportedOperationException();\n- }\n-\n @Override\n public void preProcess() {\n throw new UnsupportedOperationException();\n@@ -679,16 +670,6 @@ public FetchSearchResult fetchResult() {\n throw new UnsupportedOperationException();\n }\n \n- @Override\n- public void addReleasable(Releasable releasable) {\n- throw new UnsupportedOperationException();\n- }\n-\n- @Override\n- public void clearReleasables() {\n- throw new UnsupportedOperationException();\n- }\n-\n @Override\n public ScanContext scanContext() {\n throw new UnsupportedOperationException();",
"filename": "src/main/java/org/elasticsearch/percolator/PercolateContext.java",
"status": "modified"
},
{
"diff": "@@ -69,10 +69,8 @@\n import org.elasticsearch.search.dfs.DfsPhase;\n import org.elasticsearch.search.dfs.DfsSearchResult;\n import org.elasticsearch.search.fetch.*;\n-import org.elasticsearch.search.internal.DefaultSearchContext;\n-import org.elasticsearch.search.internal.InternalScrollSearchRequest;\n-import org.elasticsearch.search.internal.SearchContext;\n-import org.elasticsearch.search.internal.ShardSearchRequest;\n+import org.elasticsearch.search.internal.*;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n import org.elasticsearch.search.query.*;\n import org.elasticsearch.search.warmer.IndexWarmersMetaData;\n import org.elasticsearch.threadpool.ThreadPool;\n@@ -575,6 +573,8 @@ private void contextProcessedSuccessfully(SearchContext context) {\n }\n \n private void cleanContext(SearchContext context) {\n+ assert context == SearchContext.current();\n+ context.clearReleasables(Lifetime.PHASE);\n SearchContext.removeCurrent();\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/SearchService.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,6 @@\n import org.apache.lucene.search.Scorer;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.lucene.search.XCollector;\n import org.elasticsearch.common.lucene.search.XConstantScoreQuery;\n@@ -106,41 +105,34 @@ public void execute(SearchContext context) throws ElasticsearchException {\n }\n \n Aggregator[] aggregators = context.aggregations().aggregators();\n- boolean success = false;\n- try {\n- List<Aggregator> globals = new ArrayList<>();\n- for (int i = 0; i < aggregators.length; i++) {\n- if (aggregators[i] instanceof GlobalAggregator) {\n- globals.add(aggregators[i]);\n- }\n+ List<Aggregator> globals = new ArrayList<>();\n+ for (int i = 0; i < aggregators.length; i++) {\n+ if (aggregators[i] instanceof GlobalAggregator) {\n+ globals.add(aggregators[i]);\n }\n+ }\n \n- // optimize the global collector based execution\n- if (!globals.isEmpty()) {\n- AggregationsCollector collector = new AggregationsCollector(globals, context.aggregations().aggregationContext());\n- Query query = new XConstantScoreQuery(Queries.MATCH_ALL_FILTER);\n- Filter searchFilter = context.searchFilter(context.types());\n- if (searchFilter != null) {\n- query = new XFilteredQuery(query, searchFilter);\n- }\n- try {\n- context.searcher().search(query, collector);\n- } catch (Exception e) {\n- throw new QueryPhaseExecutionException(context, \"Failed to execute global aggregators\", e);\n- }\n- collector.postCollection();\n+ // optimize the global collector based execution\n+ if (!globals.isEmpty()) {\n+ AggregationsCollector collector = new AggregationsCollector(globals, context.aggregations().aggregationContext());\n+ Query query = new XConstantScoreQuery(Queries.MATCH_ALL_FILTER);\n+ Filter searchFilter = context.searchFilter(context.types());\n+ if (searchFilter != null) {\n+ query = new XFilteredQuery(query, searchFilter);\n }\n-\n- List<InternalAggregation> aggregations = new ArrayList<>(aggregators.length);\n- for (Aggregator aggregator : context.aggregations().aggregators()) {\n- aggregations.add(aggregator.buildAggregation(0));\n+ try {\n+ context.searcher().search(query, collector);\n+ } catch (Exception e) {\n+ throw new QueryPhaseExecutionException(context, \"Failed to execute global aggregators\", e);\n }\n- context.queryResult().aggregations(new InternalAggregations(aggregations));\n- success = true;\n- } finally {\n- Releasables.release(success, aggregators);\n+ collector.postCollection();\n }\n \n+ List<InternalAggregation> aggregations = new ArrayList<>(aggregators.length);\n+ for (Aggregator aggregator : context.aggregations().aggregators()) {\n+ aggregations.add(aggregator.buildAggregation(0));\n+ }\n+ context.queryResult().aggregations(new InternalAggregations(aggregations));\n }\n \n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/AggregationPhase.java",
"status": "modified"
},
{
"diff": "@@ -19,12 +19,12 @@\n package org.elasticsearch.search.aggregations;\n \n import org.elasticsearch.common.lease.Releasable;\n-import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.ReaderContextAware;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -84,6 +84,9 @@ protected Aggregator(String name, BucketAggregationMode bucketAggregationMode, A\n assert factories != null : \"sub-factories provided to BucketAggregator must not be null, use AggragatorFactories.EMPTY instead\";\n this.factories = factories;\n this.subAggregators = factories.createSubAggregators(this, estimatedBucketsCount);\n+ // TODO: change it to SEARCH_PHASE, but this would imply allocating the aggregators in the QUERY\n+ // phase instead of DFS like it is done today\n+ context.searchContext().addReleasable(this, Lifetime.CONTEXT);\n }\n \n /**\n@@ -175,13 +178,7 @@ public final void postCollection() {\n /** Called upon release of the aggregator. */\n @Override\n public boolean release() {\n- boolean success = false;\n- try {\n- doRelease();\n- success = true;\n- } finally {\n- Releasables.release(success, subAggregators);\n- }\n+ doRelease();\n return true;\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/Aggregator.java",
"status": "modified"
},
{
"diff": "@@ -18,8 +18,6 @@\n */\n package org.elasticsearch.search.aggregations;\n \n-import com.google.common.collect.Iterables;\n-import com.google.common.collect.UnmodifiableIterator;\n import org.apache.lucene.index.AtomicReaderContext;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.util.ObjectArray;\n@@ -28,8 +26,6 @@\n \n import java.io.IOException;\n import java.util.ArrayList;\n-import java.util.Collections;\n-import java.util.Iterator;\n import java.util.List;\n \n /**\n@@ -82,9 +78,6 @@ public Aggregator[] createSubAggregators(Aggregator parent, final long estimated\n long arraySize = estimatedBucketsCount > 0 ? estimatedBucketsCount : 1;\n aggregators = bigArrays.newObjectArray(arraySize);\n aggregators.set(0, first);\n- for (long i = 1; i < arraySize; ++i) {\n- aggregators.set(i, createAndRegisterContextAware(parent.context(), factory, parent, estimatedBucketsCount));\n- }\n }\n \n @Override\n@@ -135,29 +128,7 @@ public InternalAggregation buildEmptyAggregation() {\n \n @Override\n public void doRelease() {\n- final Iterable<Aggregator> aggregatorsIter = new Iterable<Aggregator>() {\n-\n- @Override\n- public Iterator<Aggregator> iterator() {\n- return new UnmodifiableIterator<Aggregator>() {\n-\n- long i = 0;\n-\n- @Override\n- public boolean hasNext() {\n- return i < aggregators.size();\n- }\n-\n- @Override\n- public Aggregator next() {\n- return aggregators.get(i++);\n- }\n-\n- };\n- }\n-\n- };\n- Releasables.release(Iterables.concat(aggregatorsIter, Collections.singleton(aggregators)));\n+ Releasables.release(aggregators);\n }\n };\n }",
"filename": "src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.search.fetch.FetchSubPhase;\n import org.elasticsearch.search.internal.InternalSearchHit;\n import org.elasticsearch.search.internal.SearchContext;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.List;\n@@ -97,7 +98,7 @@ private void addMatchedQueries(HitContext hitContext, ImmutableMap<String, Filte\n } catch (IOException e) {\n // ignore\n } finally {\n- SearchContext.current().clearReleasables();\n+ SearchContext.current().clearReleasables(Lifetime.COLLECTION);\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/search/fetch/matchedqueries/MatchedQueriesFetchSubPhase.java",
"status": "modified"
},
{
"diff": "@@ -21,13 +21,16 @@\n \n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.search.*;\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.MinimumScoreCollector;\n import org.elasticsearch.common.lucene.MultiCollector;\n import org.elasticsearch.common.lucene.search.FilteredCollector;\n import org.elasticsearch.common.lucene.search.XCollector;\n import org.elasticsearch.common.lucene.search.XFilteredQuery;\n import org.elasticsearch.index.engine.Engine;\n import org.elasticsearch.search.dfs.CachedDfSource;\n+import org.elasticsearch.search.internal.SearchContext.Lifetime;\n \n import java.io.IOException;\n import java.util.ArrayList;\n@@ -36,7 +39,7 @@\n /**\n * Context-aware extension of {@link IndexSearcher}.\n */\n-public class ContextIndexSearcher extends IndexSearcher {\n+public class ContextIndexSearcher extends IndexSearcher implements Releasable {\n \n public static enum Stage {\n NA,\n@@ -66,10 +69,10 @@ public ContextIndexSearcher(SearchContext searchContext, Engine.Searcher searche\n setSimilarity(searcher.searcher().getSimilarity());\n }\n \n- public void release() {\n- if (mainDocIdSetCollector != null) {\n- mainDocIdSetCollector.release();\n- }\n+ @Override\n+ public boolean release() {\n+ Releasables.release(mainDocIdSetCollector);\n+ return true;\n }\n \n public void dfSource(CachedDfSource dfSource) {\n@@ -129,7 +132,7 @@ public Weight createNormalizedWeight(Query query) throws IOException {\n }\n return in.createNormalizedWeight(query);\n } catch (Throwable t) {\n- searchContext.clearReleasables();\n+ searchContext.clearReleasables(Lifetime.COLLECTION);\n throw new RuntimeException(t);\n }\n }\n@@ -187,7 +190,7 @@ public void search(List<AtomicReaderContext> leaves, Weight weight, Collector co\n }\n }\n } finally {\n- searchContext.clearReleasables();\n+ searchContext.clearReleasables(Lifetime.COLLECTION);\n }\n }\n \n@@ -200,7 +203,7 @@ public Explanation explain(Query query, int doc) throws IOException {\n XFilteredQuery filteredQuery = new XFilteredQuery(query, searchContext.aliasFilter());\n return super.explain(filteredQuery, doc);\n } finally {\n- searchContext.clearReleasables();\n+ searchContext.clearReleasables(Lifetime.COLLECTION);\n }\n }\n }\n\\ No newline at end of file",
"filename": "src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java",
"status": "modified"
},
{
"diff": "@@ -30,7 +30,6 @@\n import org.elasticsearch.cache.recycler.CacheRecycler;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n import org.elasticsearch.common.Nullable;\n-import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.search.AndFilter;\n import org.elasticsearch.common.lucene.search.Queries;\n@@ -177,8 +176,6 @@ public class DefaultSearchContext extends SearchContext {\n \n private volatile long lastAccessTime = -1;\n \n- private List<Releasable> clearables = null;\n-\n private volatile boolean useSlowScroll;\n \n public DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget,\n@@ -207,19 +204,12 @@ public DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarg\n }\n \n @Override\n- public boolean release() throws ElasticsearchException {\n+ public void doRelease() throws ElasticsearchException {\n if (scanContext != null) {\n scanContext.clear();\n }\n // clear and scope phase we have\n- searcher.release();\n- engineSearcher.release();\n- return true;\n- }\n-\n- public boolean clearAndRelease() {\n- clearReleasables();\n- return release();\n+ Releasables.release(searcher, engineSearcher);\n }\n \n /**\n@@ -678,25 +668,6 @@ public FetchSearchResult fetchResult() {\n return fetchResult;\n }\n \n- @Override\n- public void addReleasable(Releasable releasable) {\n- if (clearables == null) {\n- clearables = new ArrayList<>();\n- }\n- clearables.add(releasable);\n- }\n-\n- @Override\n- public void clearReleasables() {\n- if (clearables != null) {\n- try {\n- Releasables.release(clearables);\n- } finally {\n- clearables.clear();\n- }\n- }\n- }\n-\n public ScanContext scanContext() {\n if (scanContext == null) {\n scanContext = new ScanContext();",
"filename": "src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.search.Collector;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.util.FixedBitSet;\n+import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lucene.docset.ContextDocIdSet;\n import org.elasticsearch.common.lucene.search.XCollector;\n import org.elasticsearch.index.cache.docset.DocSetCache;\n@@ -33,7 +34,7 @@\n \n /**\n */\n-public class DocIdSetCollector extends XCollector {\n+public class DocIdSetCollector extends XCollector implements Releasable {\n \n private final DocSetCache docSetCache;\n private final Collector collector;\n@@ -53,10 +54,11 @@ public List<ContextDocIdSet> docSets() {\n return docSets;\n }\n \n- public void release() {\n+ public boolean release() {\n for (ContextDocIdSet docSet : docSets) {\n docSetCache.release(docSet);\n }\n+ return true;\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/search/internal/DocIdSetCollector.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,9 @@\n */\n package org.elasticsearch.search.internal;\n \n+import com.google.common.collect.Iterables;\n+import com.google.common.collect.Multimap;\n+import com.google.common.collect.MultimapBuilder;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.ScoreDoc;\n@@ -27,6 +30,7 @@\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.cache.docset.DocSetCache;\n@@ -59,6 +63,8 @@\n import org.elasticsearch.search.scan.ScanContext;\n import org.elasticsearch.search.suggest.SuggestionSearchContext;\n \n+import java.util.ArrayList;\n+import java.util.Collection;\n import java.util.List;\n \n /**\n@@ -81,7 +87,18 @@ public static SearchContext current() {\n return current.get();\n }\n \n- public abstract boolean clearAndRelease();\n+ private Multimap<Lifetime, Releasable> clearables = null;\n+\n+ public final boolean release() {\n+ try {\n+ clearReleasables(Lifetime.CONTEXT);\n+ return true;\n+ } finally {\n+ doRelease();\n+ }\n+ }\n+\n+ protected abstract void doRelease();\n \n /**\n * Should be called before executing the main query and after all other parameters have been set.\n@@ -288,9 +305,29 @@ public static SearchContext current() {\n \n public abstract FetchSearchResult fetchResult();\n \n- public abstract void addReleasable(Releasable releasable);\n+ /**\n+ * Schedule the release of a resource. The time when {@link Releasable#release()} will be called on this object\n+ * is function of the provided {@link Lifetime}.\n+ */\n+ public void addReleasable(Releasable releasable, Lifetime lifetime) {\n+ if (clearables == null) {\n+ clearables = MultimapBuilder.enumKeys(Lifetime.class).arrayListValues().build();\n+ }\n+ clearables.put(lifetime, releasable);\n+ }\n \n- public abstract void clearReleasables();\n+ public void clearReleasables(Lifetime lifetime) {\n+ if (clearables != null) {\n+ List<Collection<Releasable>> releasables = new ArrayList<>();\n+ for (Lifetime lc : Lifetime.values()) {\n+ if (lc.compareTo(lifetime) > 0) {\n+ break;\n+ }\n+ releasables.add(clearables.removeAll(lc));\n+ }\n+ Releasables.release(Iterables.concat(releasables));\n+ }\n+ }\n \n public abstract ScanContext scanContext();\n \n@@ -305,4 +342,22 @@ public static SearchContext current() {\n public abstract boolean useSlowScroll();\n \n public abstract SearchContext useSlowScroll(boolean useSlowScroll);\n+\n+ /**\n+ * The life time of an object that is used during search execution.\n+ */\n+ public enum Lifetime {\n+ /**\n+ * This life time is for objects that only live during collection time.\n+ */\n+ COLLECTION,\n+ /**\n+ * This life time is for objects that need to live until the end of the current search phase.\n+ */\n+ PHASE,\n+ /**\n+ * This life time is for objects that need to live until the search context they are attached to is destroyed.\n+ */\n+ CONTEXT;\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/search/internal/SearchContext.java",
"status": "modified"
},
{
"diff": "@@ -26,7 +26,6 @@\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.CacheRecycler;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n-import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.cache.docset.DocSetCache;\n@@ -94,11 +93,6 @@ public TestSearchContext() {\n this.indexFieldDataService = null;\n }\n \n- @Override\n- public boolean clearAndRelease() {\n- return false;\n- }\n-\n @Override\n public void preProcess() {\n }\n@@ -556,14 +550,6 @@ public FetchSearchResult fetchResult() {\n return null;\n }\n \n- @Override\n- public void addReleasable(Releasable releasable) {\n- }\n-\n- @Override\n- public void clearReleasables() {\n- }\n-\n @Override\n public ScanContext scanContext() {\n return null;\n@@ -590,8 +576,8 @@ public MapperService.SmartNameObjectMapper smartNameObjectMapper(String name) {\n }\n \n @Override\n- public boolean release() throws ElasticsearchException {\n- return false;\n+ public void doRelease() throws ElasticsearchException {\n+ // no-op\n }\n \n @Override",
"filename": "src/test/java/org/elasticsearch/index/search/child/TestSearchContext.java",
"status": "modified"
},
{
"diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.search.aggregations.bucket;\n \n-import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -33,7 +32,6 @@\n import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;\n import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n-import org.elasticsearch.test.cache.recycler.MockBigArrays;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n \n@@ -217,7 +215,6 @@ public void singleValueField_OrderedByTermDesc() throws Exception {\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_WithSubAggregation() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -433,7 +430,6 @@ public void multiValuedField_WithValueScript_WithInheritedSubAggregator() throws\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void script_SingleValue() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -615,7 +611,6 @@ public void partiallyUnmapped() throws Exception {\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void emptyAggregation() throws Exception {\n SearchResponse searchResponse = client().prepareSearch(\"empty_bucket_idx\")\n@@ -756,11 +751,8 @@ public void singleValuedField_OrderedBySubAggregationAsc_MultiHierarchyLevels()\n assertThat(max.getValue(), equalTo(asc ? 
4.0 : 2.0));\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMissingSubAggregation() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -776,11 +768,8 @@ public void singleValuedField_OrderedByMissingSubAggregation() throws Exception\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByNonMetricsOrMultiBucketSubAggregation() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -797,11 +786,8 @@ public void singleValuedField_OrderedByNonMetricsOrMultiBucketSubAggregation() t\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMultiValuedSubAggregation_WithUknownMetric() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -819,11 +805,8 @@ public void singleValuedField_OrderedByMultiValuedSubAggregation_WithUknownMetri\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMultiValuedSubAggregation_WithoutMetric() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -841,7 +824,6 @@ public void singleValuedField_OrderedByMultiValuedSubAggregation_WithoutMetric()\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedBySingleValueSubAggregationDesc() throws Exception {\n boolean asc = false;",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DoubleTermsTests.java",
"status": "modified"
},
{
"diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.search.aggregations.bucket;\n \n-import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -32,7 +31,6 @@\n import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;\n import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n-import org.elasticsearch.test.cache.recycler.MockBigArrays;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n \n@@ -513,7 +511,6 @@ public void script_MultiValued() throws Exception {\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void script_MultiValued_WithAggregatorInherited_NoExplicitType() throws Exception {\n \n@@ -753,11 +750,8 @@ public void singleValuedField_OrderedBySubAggregationAsc_MultiHierarchyLevels()\n assertThat(max.getValue(), equalTo(asc ? 4.0 : 2.0));\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMissingSubAggregation() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -773,11 +767,8 @@ public void singleValuedField_OrderedByMissingSubAggregation() throws Exception\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByNonMetricsOrMultiBucketSubAggregation() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -794,11 +785,8 @@ public void singleValuedField_OrderedByNonMetricsOrMultiBucketSubAggregation() t\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMultiValuedSubAggregation_WithUknownMetric() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -816,11 +804,8 @@ public void singleValuedField_OrderedByMultiValuedSubAggregation_WithUknownMetri\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMultiValuedSubAggregation_WithoutMetric() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/LongTermsTests.java",
"status": "modified"
},
{
"diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.search.aggregations.bucket;\n \n-import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n@@ -33,7 +32,6 @@\n import org.elasticsearch.search.aggregations.metrics.stats.Stats;\n import org.elasticsearch.search.aggregations.metrics.sum.Sum;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n-import org.elasticsearch.test.cache.recycler.MockBigArrays;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n \n@@ -186,10 +184,8 @@ public void simple() throws Exception {\n assertThat(stats.getAvg(), equalTo((double) sum / count));\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void onNonNestedField() throws Exception {\n- MockBigArrays.discardNextCheck();\n try {\n client().prepareSearch(\"idx\")\n .addAggregation(nested(\"nested\").path(\"value\")",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/NestedTests.java",
"status": "modified"
},
{
"diff": "@@ -19,20 +19,19 @@\n package org.elasticsearch.search.aggregations.bucket;\n \n import com.google.common.base.Strings;\n-import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.index.query.FilterBuilders;\n import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory.ExecutionMode;\n import org.elasticsearch.search.aggregations.metrics.avg.Avg;\n import org.elasticsearch.search.aggregations.metrics.stats.Stats;\n import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;\n import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCount;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n-import org.elasticsearch.test.cache.recycler.MockBigArrays;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n \n@@ -45,7 +44,6 @@\n import static org.elasticsearch.index.query.FilterBuilders.termFilter;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n-import static org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory.ExecutionMode;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n@@ -943,11 +941,8 @@ public void singleValuedField_OrderedBySubAggregationAsc_MultiHierarchyLevels()\n }\n \n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMissingSubAggregation() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -964,11 +959,8 @@ public void singleValuedField_OrderedByMissingSubAggregation() throws Exception\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByNonMetricsOrMultiBucketSubAggregation() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")\n@@ -986,11 +978,8 @@ public void singleValuedField_OrderedByNonMetricsOrMultiBucketSubAggregation() t\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMultiValuedSubAggregation_WithUknownMetric() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n client().prepareSearch(\"idx\").setTypes(\"type\")\n .addAggregation(terms(\"terms\")\n@@ -1008,11 +997,8 @@ public void singleValuedField_OrderedByMultiValuedSubAggregation_WithUknownMetri\n }\n }\n \n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/5703\")\n @Test\n public void singleValuedField_OrderedByMultiValuedSubAggregation_WithoutMetric() throws Exception {\n-\n- MockBigArrays.discardNextCheck();\n try {\n \n client().prepareSearch(\"idx\").setTypes(\"type\")",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/StringTermsTests.java",
"status": "modified"
},
{
"diff": "@@ -45,26 +45,11 @@ public class MockBigArrays extends BigArrays {\n */\n private static final boolean TRACK_ALLOCATIONS = false;\n \n- private static boolean DISCARD = false;\n-\n private static ConcurrentMap<Object, Object> ACQUIRED_ARRAYS = new ConcurrentHashMap<>();\n \n- /**\n- * Discard the next check that all arrays should be released. This can be useful if for a specific test, the cost to make\n- * sure the array is released is higher than the cost the user would experience if the array would not be released.\n- */\n- public static void discardNextCheck() {\n- DISCARD = true;\n- }\n-\n public static void ensureAllArraysAreReleased() throws Exception {\n- if (DISCARD) {\n- DISCARD = false;\n- } else {\n- final Map<Object, Object> masterCopy = Maps.newHashMap(ACQUIRED_ARRAYS);\n- if (masterCopy.isEmpty()) {\n- return;\n- }\n+ final Map<Object, Object> masterCopy = Maps.newHashMap(ACQUIRED_ARRAYS);\n+ if (!masterCopy.isEmpty()) {\n // not empty, we might be executing on a shared cluster that keeps on obtaining\n // and releasing arrays, lets make sure that after a reasonable timeout, all master\n // copy (snapshot) have been released\n@@ -74,14 +59,13 @@ public boolean apply(Object input) {\n return Sets.intersection(masterCopy.keySet(), ACQUIRED_ARRAYS.keySet()).isEmpty();\n }\n });\n- if (success) {\n- return;\n- }\n- masterCopy.keySet().retainAll(ACQUIRED_ARRAYS.keySet());\n- ACQUIRED_ARRAYS.keySet().removeAll(masterCopy.keySet()); // remove all existing master copy we will report on\n- if (!masterCopy.isEmpty()) {\n- final Object cause = masterCopy.entrySet().iterator().next().getValue();\n- throw new RuntimeException(masterCopy.size() + \" arrays have not been released\", cause instanceof Throwable ? (Throwable) cause : null);\n+ if (!success) {\n+ masterCopy.keySet().retainAll(ACQUIRED_ARRAYS.keySet());\n+ ACQUIRED_ARRAYS.keySet().removeAll(masterCopy.keySet()); // remove all existing master copy we will report on\n+ if (!masterCopy.isEmpty()) {\n+ final Object cause = masterCopy.entrySet().iterator().next().getValue();\n+ throw new RuntimeException(masterCopy.size() + \" arrays have not been released\", cause instanceof Throwable ? (Throwable) cause : null);\n+ }\n }\n }\n }",
"filename": "src/test/java/org/elasticsearch/test/cache/recycler/MockBigArrays.java",
"status": "modified"
},
{
"diff": "@@ -43,26 +43,24 @@ public class MockPageCacheRecycler extends PageCacheRecycler {\n \n public static void ensureAllPagesAreReleased() throws Exception {\n final Map<Object, Throwable> masterCopy = Maps.newHashMap(ACQUIRED_PAGES);\n- if (masterCopy.isEmpty()) {\n- return;\n- }\n- // not empty, we might be executing on a shared cluster that keeps on obtaining\n- // and releasing pages, lets make sure that after a reasonable timeout, all master\n- // copy (snapshot) have been released\n- boolean success = ElasticsearchTestCase.awaitBusy(new Predicate<Object>() {\n- @Override\n- public boolean apply(Object input) {\n- return Sets.intersection(masterCopy.keySet(), ACQUIRED_PAGES.keySet()).isEmpty();\n- }\n- });\n- if (success) {\n- return;\n- }\n- masterCopy.keySet().retainAll(ACQUIRED_PAGES.keySet());\n- ACQUIRED_PAGES.keySet().removeAll(masterCopy.keySet()); // remove all existing master copy we will report on\n if (!masterCopy.isEmpty()) {\n- final Throwable t = masterCopy.entrySet().iterator().next().getValue();\n- throw new RuntimeException(masterCopy.size() + \" pages have not been released\", t);\n+ // not empty, we might be executing on a shared cluster that keeps on obtaining\n+ // and releasing pages, lets make sure that after a reasonable timeout, all master\n+ // copy (snapshot) have been released\n+ boolean success = ElasticsearchTestCase.awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object input) {\n+ return Sets.intersection(masterCopy.keySet(), ACQUIRED_PAGES.keySet()).isEmpty();\n+ }\n+ });\n+ if (!success) {\n+ masterCopy.keySet().retainAll(ACQUIRED_PAGES.keySet());\n+ ACQUIRED_PAGES.keySet().removeAll(masterCopy.keySet()); // remove all existing master copy we will report on\n+ if (!masterCopy.isEmpty()) {\n+ final Throwable t = masterCopy.entrySet().iterator().next().getValue();\n+ throw new RuntimeException(masterCopy.size() + \" pages have not been released\", t);\n+ }\n+ }\n }\n }\n ",
"filename": "src/test/java/org/elasticsearch/test/cache/recycler/MockPageCacheRecycler.java",
"status": "modified"
}
]
} |
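Taken together, the AggregatorFactories and SearchContext diffs above replace the ad-hoc release bookkeeping with a single registry on the search context: every releasable is filed under a `Lifetime` (COLLECTION, PHASE or CONTEXT), and clearing a lifetime also clears every shorter one. The sketch below shows that pattern in isolation under some assumptions: it uses a plain `EnumMap` instead of the Guava `Multimap` the change uses, and the `Releasable` interface and class names are illustrative stand-ins, not the actual Elasticsearch types.

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for org.elasticsearch.common.lease.Releasable, not the real interface.
interface Releasable {
    boolean release();
}

// Minimal sketch of a lifetime-scoped releasable registry.
class ReleasableRegistry {

    // Ordinal order encodes nesting: COLLECTION ends first, CONTEXT last.
    enum Lifetime { COLLECTION, PHASE, CONTEXT }

    private final Map<Lifetime, List<Releasable>> clearables = new EnumMap<>(Lifetime.class);

    // Schedule a resource to be released when the given lifetime ends.
    void addReleasable(Releasable releasable, Lifetime lifetime) {
        clearables.computeIfAbsent(lifetime, k -> new ArrayList<>()).add(releasable);
    }

    // Release every resource whose lifetime is at most the one that just ended,
    // e.g. clearReleasables(PHASE) releases COLLECTION- and PHASE-scoped resources.
    void clearReleasables(Lifetime lifetime) {
        for (Lifetime scope : Lifetime.values()) {
            if (scope.compareTo(lifetime) > 0) {
                break;
            }
            List<Releasable> scoped = clearables.remove(scope);
            if (scoped != null) {
                for (Releasable releasable : scoped) {
                    releasable.release();
                }
            }
        }
    }
}
```

With this shape, releasing the context itself only needs to call `clearReleasables` for the CONTEXT lifetime before doing its own cleanup, which is what the final `SearchContext.release()` in the diff does.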
{
"body": "posting certain valid geojson polygons results in the following exception:\n\norg.elasticsearch.index.mapper.MapperParsingException: failed to parse [geometry] at org.elasticsearch.index.mapper.geo.GeoShapeFieldMapper.parse(GeoShapeFieldMapper.java:249)\n...\n\ncurl -XDELETE 'http://localhost:9200/test'\n\ncurl -XPOST 'http://localhost:9200/test' -d '{\n \"mappings\":{\n \"test\":{\n \"properties\":{\n \"geometry\":{\n \"type\":\"geo_shape\",\n \"tree\":\"quadtree\",\n \"tree_levels\":14,\n \"distance_error_pct\":0.0\n }\n }\n }\n }\n}'\n\ncurl -XPOST 'http://localhost:9200/test/test/1' -d '{\n \"geometry\":{\n \"type\":\"Polygon\",\n \"coordinates\":[\n [[-85.0018514,37.1311314],\n [-85.0016645,37.1315293],\n [-85.0016246,37.1317069],\n [-85.0016526,37.1318183],\n [-85.0017119,37.1319196],\n [-85.0019371,37.1321182],\n [-85.0019972,37.1322115],\n [-85.0019942,37.1323234],\n [-85.0019543,37.1324336],\n [-85.001906,37.1324985],\n [-85.001834,37.1325497],\n [-85.0016965,37.1325907],\n [-85.0016011,37.1325873],\n [-85.0014816,37.1325353],\n [-85.0011755,37.1323509],\n [-85.000955,37.1322802],\n [-85.0006241,37.1322529],\n [-85.0000002,37.1322307],\n [-84.9994,37.1323001],\n [-84.999109,37.1322864],\n [-84.998934,37.1322415],\n [-84.9988639,37.1321888],\n [-84.9987841,37.1320944],\n [-84.9987208,37.131954],\n [-84.998736,37.1316611],\n [-84.9988091,37.131334],\n [-84.9989283,37.1311337],\n [-84.9991943,37.1309198],\n [-84.9993573,37.1308459],\n [-84.9995888,37.1307924],\n [-84.9998746,37.130806],\n [-85.0000002,37.1308358],\n [-85.0004984,37.1310658],\n [-85.0008008,37.1311625],\n [-85.0009461,37.1311684],\n [-85.0011373,37.1311515],\n [-85.0016455,37.1310491],\n [-85.0018514,37.1311314]],\n [[-85.0000002,37.1317672],\n [-85.0001983,37.1317538],\n [-85.0003378,37.1317582],\n [-85.0004697,37.131792],\n [-85.0008048,37.1319439],\n [-85.0009342,37.1319838],\n [-85.0010184,37.1319463],\n [-85.0010618,37.13184],\n [-85.0010057,37.1315102],\n [-85.000977,37.1314403],\n [-85.0009182,37.1313793],\n [-85.0005366,37.1312209],\n [-85.000224,37.1311466],\n [-85.000087,37.1311356],\n [-85.0000002,37.1311433],\n [-84.9995021,37.1312336],\n [-84.9993308,37.1312859],\n [-84.9992567,37.1313252],\n [-84.9991868,37.1314277],\n [-84.9991593,37.1315381],\n [-84.9991841,37.1316527],\n [-84.9992329,37.1317117],\n [-84.9993527,37.1317788],\n [-84.9994931,37.1318061],\n [-84.9996815,37.1317979],\n [-85.0000002,37.1317672]]]\n }\n}'\n\nExpected:\n {\"ok\":true,\"_index\":\"test\",\"_type\":\"test\",\"_id\":\"1\",\"_version\":1}\nActual:\n {\"error\":\"MapperParsingException[failed to parse [geometry]]; nested: ArrayIndexOutOfBoundsException[-1]; \",\"status\":400}\n\nThis is an issue with es-1.1.0. The same requests execute successfully against es-0.2.4.\n\nIt is possible to view and validate the data in qgis.\n\n\n",
"comments": [
{
"body": "Checked additional versions of elastic search:\nelasticsearch-0.20.4 PASSED\nelasticsearch-0.90.13 PASSED\nelasticsearch-1.0.0.RC1 FAILED\nelasticsearch-1.0.2 FAILED\nelasticsearch-1.1.0 FAILED\n",
"created_at": "2014-04-11T18:02:27Z"
},
{
"body": "Here is a small test case which triggers the issue in v1.1.0\n\nimport org.elasticsearch.common.geo.builders.ShapeBuilder;\nimport org.elasticsearch.common.xcontent.XContentParser;\nimport org.elasticsearch.common.xcontent.json.JsonXContent;\n\npublic class Test {\n public static void main(String[] args) throws Exception {\n\n```\n String geoJson = \"{ \\\"type\\\": \\\"Polygon\\\",\\\"coordinates\\\": [[[-85.0018514,37.1311314],[-85.0016645,37.1315293],[-85.0016246,37.1317069],[-85.0016526,37.1318183],[-85.0017119,37.1319196],[-85.0019371,37.1321182],[-85.0019972,37.1322115],[-85.0019942,37.1323234],[-85.0019543,37.1324336],[-85.001906,37.1324985],[-85.001834,37.1325497],[-85.0016965,37.1325907],[-85.0016011,37.1325873],[-85.0014816,37.1325353],[-85.0011755,37.1323509],[-85.000955,37.1322802],[-85.0006241,37.1322529],[-85.0000002,37.1322307],[-84.9994,37.1323001],[-84.999109,37.1322864],[-84.998934,37.1322415],[-84.9988639,37.1321888],[-84.9987841,37.1320944],[-84.9987208,37.131954],[-84.998736,37.1316611],[-84.9988091,37.131334],[-84.9989283,37.1311337],[-84.9991943,37.1309198],[-84.9993573,37.1308459],[-84.9995888,37.1307924],[-84.9998746,37.130806],[-85.0000002,37.1308358],[-85.0004984,37.1310658],[-85.0008008,37.1311625],[-85.0009461,37.1311684],[-85.0011373,37.1311515],[-85.0016455,37.1310491],[-85.0018514,37.1311314]],[[-85.0000002,37.1317672],[-85.0001983,37.1317538],[-85.0003378,37.1317582],[-85.0004697,37.131792],[-85.0008048,37.1319439],[-85.0009342,37.1319838],[-85.0010184,37.1319463],[-85.0010618,37.13184],[-85.0010057,37.1315102],[-85.000977,37.1314403],[-85.0009182,37.1313793],[-85.0005366,37.1312209],[-85.000224,37.1311466],[-85.000087,37.1311356],[-85.0000002,37.1311433],[-84.9995021,37.1312336],[-84.9993308,37.1312859],[-84.9992567,37.1313252],[-84.9991868,37.1314277],[-84.9991593,37.1315381],[-84.9991841,37.1316527],[-84.9992329,37.1317117],[-84.9993527,37.1317788],[-84.9994931,37.1318061],[-84.9996815,37.1317979],[-85.0000002,37.1317672]]]}\";\n\n XContentParser parser = JsonXContent.jsonXContent.createParser(geoJson); \n parser.nextToken();\n ShapeBuilder.parse(parser).build();\n}\n```\n\n}\n\n// stack trace\nException in thread \"main\" java.lang.ArrayIndexOutOfBoundsException: -1\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.assign(BasePolygonBuilder.java:366)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.compose(BasePolygonBuilder.java:347)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.coordinates(BasePolygonBuilder.java:146)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.buildGeometry(BasePolygonBuilder.java:175)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.build(BasePolygonBuilder.java:151)\n",
"created_at": "2014-04-11T21:58:08Z"
},
{
"body": "Hey.\n\nyou can also use the github geo json feature as well to visualize this, see https://gist.github.com/spinscale/9cc6ba24bff03cca2be5\n\nthere has been a huge geo refactoring going on between those affected versions, will check for a regression there.\n\nDo you have any other data where this happens with, or is it just this single polygon?\n\nThanks a lot for all your input!\n",
"created_at": "2014-04-14T08:08:07Z"
},
{
"body": "I updated my geojson gist above and added a test with a simple polygon (rectangle with hole, which is a rectangle as well), which works... need to investigate\n",
"created_at": "2014-04-14T08:49:34Z"
},
{
"body": "I have several thousand polygons that result in this error. I have not\npulled them all out of the logs yet, I will hopefully have time to do that\ntoday.\n\nOn Mon, Apr 14, 2014 at 1:08 AM, Alexander Reelsen <notifications@github.com\n\n> wrote:\n> \n> Hey.\n> \n> you can also use the github geo json feature as well to visualize this,\n> see https://gist.github.com/spinscale/9cc6ba24bff03cca2be5\n> \n> there has been a huge geo refactoring going on between those affected\n> versions, will check for a regression there.\n> \n> Do you have any other data where this happens with, or is it just this\n> single polygon?\n> \n> Thanks a lot for all your input!\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/5773#issuecomment-40342030\n> .\n",
"created_at": "2014-04-14T14:38:49Z"
},
{
"body": "I'd highly appreciate it, if you could test with the PR referenced above... it solves this problem, but maybe you could check if I introduced side effects (one just came to mind, which i need to check)...\n",
"created_at": "2014-04-14T14:49:52Z"
},
{
"body": "I have put up a file containing 15k+ polygons which had the same\nArrayIndexOutOfBoundsException against ES1.0+. It is available at\nhttps://github.com/marcuswr/elasticsearch-polygon-data.git\n\nAdditionally, the gist at https://gist.github.com/marcuswr/493406918e0a9edeb509 contains a set of polygons which still fail against the patched version (same IndexOutOfBoundsException)\n\nOn Mon, Apr 14, 2014 at 7:50 AM, Alexander Reelsen <notifications@github.com\n\n> wrote:\n> \n> I'd highly appreciate it, if you could test with the PR referenced\n> above... it solves this problem, but maybe you could check if I introduced\n> side effects (one just came to mind, which i need to check)...\n> \n> —\n> Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/5773#issuecomment-40374176\n> .\n",
"created_at": "2014-04-14T20:03:46Z"
},
{
"body": "I have added another file to the repository, with the subset of polygons,\nwhich still failed to ingest, after the patch was applied.\nhttps://github.com/marcuswr/elasticsearch-polygon-data/blob/master/patch_errors.geojson\n\nOn Mon, Apr 14, 2014 at 1:03 PM, Marcus Richardson\nmrichardson@climate.comwrote:\n\n> I have put up a file containing 15k+ polygons which had the same\n> ArrayIndexOutOfBoundsException against ES1.0+. It is available at\n> https://github.com/marcuswr/elasticsearch-polygon-data.git\n> \n> Additionally, I have pulled down your fix locally (your branch of master,\n> and I used your patch against es 1.1). I am able to insert some polygons\n> but\n> not all (I have not tried all of them). I also cloned your gist to:\n> https://gist.github.com/marcuswr/493406918e0a9edeb509 The third\n> (test.geojson) renders correctly in the gist, however, does not work for me\n> against either patched version. The 4th file (test1.geojson) does work in\n> both versions.\n> \n> On Mon, Apr 14, 2014 at 7:50 AM, Alexander Reelsen <\n> notifications@github.com> wrote:\n> \n> > I'd highly appreciate it, if you could test with the PR referenced\n> > above... it solves this problem, but maybe you could check if I introduced\n> > side effects (one just came to mind, which i need to check)...\n> > \n> > —\n> > Reply to this email directly or view it on GitHubhttps://github.com/elasticsearch/elasticsearch/issues/5773#issuecomment-40374176\n> > .\n",
"created_at": "2014-04-14T22:31:07Z"
},
{
"body": "thanks a lot for all the data, I will test with the other polygons as soon as possible (travelling a bit the next days, but will try to check ASAP).\n",
"created_at": "2014-04-20T11:56:07Z"
},
{
"body": "@spinscale wondering if you had a chance to look at this? We are looking to do a major elastic search upgrade, but will not be able to without a fix. Please let me know if there is anything I can do to assist. \n",
"created_at": "2014-04-29T22:42:11Z"
},
{
"body": "sorry, did not yet have the time to check out all the other polygons you supplied due to traveling\n",
"created_at": "2014-04-30T14:40:23Z"
},
{
"body": "These polygons fail to ingest when the point in the hole (the first point in the LineString, or the leftmost point in the patched version https://github.com/elasticsearch/elasticsearch/pull/5796) has the same x coordinate (starting or ending) as 2 or more line segments of the shell.\nI have created another gist (https://gist.github.com/marcuswr/e0490b4f6e25b344e779) with simplified polygons which demonstrate this problem (hole_aligned.geojson, hole_aligned_simple.geojson). These will fail on the patched version. Changing the order of the coordinates in the hole so the leftmost coordinate is first and last (repeated), should result in it failing in both patched and non-patched). There are additional polygons shown which touch or cross the dateline (the dateline hole should be removed from these polygons -> convert to multi-polygon).\nThese failures only occur when fixdateline = true.\n\nAdditionally, could you tell me why there is special handling of the ear in ShapeBuilder.intersections?\n if (Double.compare(p1.x, dateline) == Double.compare(edges[i].next.next.coordinate.x, dateline)) {\n // Ignore the ear\n\nAlso Double.compare is not guaranteed to return -1, 0, 1 I'm not sure what the equality is testing for.\n",
"created_at": "2014-05-08T23:50:20Z"
},
{
"body": "thanks a lot for testing and debugging, your comments make a lot of sense, I will close the PR\n",
"created_at": "2014-05-09T08:42:43Z"
},
{
"body": "Hey @spinscale sorry to bug, but do you think you could glance at this PR and at least indicate if ES would move forward with it? If so, we (I work with Marcus) can proceed locally as the finer details of the PR are worked out.\n",
"created_at": "2014-05-20T15:02:54Z"
}
],
"number": 5773,
"title": "Geo: Valid complex polygons fail to parse"
} | {
"body": "A multi polygon with holes didnt correctly calculate possible intersections\nbecause it used the first edge instead of the edge with the min/max value of\nthe longitude (depending if positive/negative)\n\nCloses #5773\n",
"number": 5796,
"review_comments": [],
"title": "Geo: Find out min/max values for correct intersection handling"
} | {
"commits": [
{
"message": "Geo: Find out min/max values for correct intersection handling\n\nA multi polygon with holes didnt correctly calculate possible intersections\nbecause it used the first edge instead of the edge with the min/max value of\nthe longitude (depending if positive/negative)\n\nCloses #5773"
}
],
"files": [
{
"diff": "@@ -359,7 +359,8 @@ private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Ed\n }\n for (int i = 0; i < numHoles; i++) {\n final Edge current = holes[i];\n- final int intersections = intersections(current.coordinate.x, edges);\n+ double x = findMinimumXCoord(current);\n+ final int intersections = intersections(x, edges);\n final int pos = Arrays.binarySearch(edges, 0, intersections, current, INTERSECTION_ORDER);\n assert pos < 0 : \"illegal state: two edges cross the datum at the same position\";\n final int index = -(pos+2);\n@@ -375,6 +376,18 @@ private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Ed\n }\n }\n \n+ private static double findMinimumXCoord(Edge current) {\n+ double coordinate = current.coordinate.x;\n+\n+ Edge next = current.next;\n+ while (next != null && !current.equals(next)) {\n+ coordinate = coordinate >= 0 ? Math.max(next.coordinate.x, coordinate) : Math.min(next.coordinate.x, coordinate);\n+ next = next.next;\n+ }\n+\n+ return coordinate;\n+ }\n+\n private static int merge(Edge[] intersections, int offset, int length, Edge[] holes, int numHoles) {\n // Intersections appear pairwise. On the first edge the inner of\n // of the polygon is entered. On the second edge the outer face",
"filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java",
"status": "modified"
},
{
"diff": "@@ -38,6 +38,7 @@\n import java.util.List;\n \n import static org.elasticsearch.common.geo.builders.ShapeBuilder.SPATIAL_CONTEXT;\n+import static org.hamcrest.Matchers.instanceOf;\n \n \n /**\n@@ -145,6 +146,33 @@ public void testParse_polygonWithHole() throws IOException {\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n \n+ @Test\n+ public void testParsePolygonWithHolesBroken() throws Exception {\n+ String geoJson = \"{ \\\"type\\\": \\\"Polygon\\\",\\\"coordinates\\\": [[[-85.0018514,37.1311314],[-85.0016645,37.1315293],[-85.0016246,37.1317069],[-85.0016526,37.1318183],[-85.0017119,37.1319196],[-85.0019371,37.1321182],[-85.0019972,37.1322115],[-85.0019942,37.1323234],[-85.0019543,37.1324336],[-85.001906,37.1324985],[-85.001834,37.1325497],[-85.0016965,37.1325907],[-85.0016011,37.1325873],[-85.0014816,37.1325353],[-85.0011755,37.1323509],[-85.000955,37.1322802],[-85.0006241,37.1322529],[-85.0000002,37.1322307],[-84.9994,37.1323001],[-84.999109,37.1322864],[-84.998934,37.1322415],[-84.9988639,37.1321888],[-84.9987841,37.1320944],[-84.9987208,37.131954],[-84.998736,37.1316611],[-84.9988091,37.131334],[-84.9989283,37.1311337],[-84.9991943,37.1309198],[-84.9993573,37.1308459],[-84.9995888,37.1307924],[-84.9998746,37.130806],[-85.0000002,37.1308358],[-85.0004984,37.1310658],[-85.0008008,37.1311625],[-85.0009461,37.1311684],[-85.0011373,37.1311515],[-85.0016455,37.1310491],[-85.0018514,37.1311314]],[[-85.0000002,37.1317672],[-85.0001983,37.1317538],[-85.0003378,37.1317582],[-85.0004697,37.131792],[-85.0008048,37.1319439],[-85.0009342,37.1319838],[-85.0010184,37.1319463],[-85.0010618,37.13184],[-85.0010057,37.1315102],[-85.000977,37.1314403],[-85.0009182,37.1313793],[-85.0005366,37.1312209],[-85.000224,37.1311466],[-85.000087,37.1311356],[-85.0000002,37.1311433],[-84.9995021,37.1312336],[-84.9993308,37.1312859],[-84.9992567,37.1313252],[-84.9991868,37.1314277],[-84.9991593,37.1315381],[-84.9991841,37.1316527],[-84.9992329,37.1317117],[-84.9993527,37.1317788],[-84.9994931,37.1318061],[-84.9996815,37.1317979],[-85.0000002,37.1317672]]]}\";\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(geoJson);\n+ parser.nextToken();\n+ Shape shape = ShapeBuilder.parse(parser).build();\n+ assertThat(shape, instanceOf(JtsGeometry.class));\n+ JtsGeometry geometry = (JtsGeometry) shape;\n+ assertThat(geometry.getGeom(), instanceOf(Polygon.class));\n+ }\n+\n+ @Test\n+ public void testParsePolygonWithHolesWorks() throws Exception {\n+ String geoJson = \"{\\n\" +\n+ \"\\\"type\\\":\\\"Polygon\\\",\\n\" +\n+ \"\\\"coordinates\\\":[\\n\" +\n+ \"[[-85.0018514,37.1311314],[-84.998998, 37.1311314],[-84.998998, 37.129352],[-85.001851, 37.129352],[-85.0018514,37.1311314]],\\n\" +\n+ \"[[-85.0000, 37.1301],[-85.001, 37.1301],[-85.001, 37.1303],[-85.0000, 37.1303],[-85.0000, 37.1301]]]\\n\" +\n+ \"}\\n\";\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(geoJson);\n+ parser.nextToken();\n+ Shape shape = ShapeBuilder.parse(parser).build();\n+ assertThat(shape, instanceOf(JtsGeometry.class));\n+ JtsGeometry geometry = (JtsGeometry) shape;\n+ assertThat(geometry.getGeom(), instanceOf(Polygon.class));\n+ }\n+\n @Test\n public void testParse_multiPoint() throws IOException {\n String multiPointGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"MultiPoint\")",
"filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java",
"status": "modified"
}
]
} |
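The essence of the fix above is small: when assigning a hole to a shell edge, count intersections from the hole's extreme longitude rather than from whatever coordinate happens to come first in the ring, which is what drove rings like the one in this issue into the ArrayIndexOutOfBoundsException. Below is a self-contained sketch of just that selection step, assuming a plain array of x values instead of the PR's `Edge` linked list; the class and method names are made up for illustration.

```java
// Sketch of the coordinate-selection idea behind findMinimumXCoord in the PR,
// written against a plain double[] instead of the Edge linked list.
final class HoleXCoord {

    static double extremeX(double[] ringXs) {
        double coordinate = ringXs[0];
        for (int i = 1; i < ringXs.length; i++) {
            // Mirror the PR: take the maximum for positive longitudes, the minimum for negative ones.
            coordinate = coordinate >= 0 ? Math.max(ringXs[i], coordinate) : Math.min(ringXs[i], coordinate);
        }
        return coordinate;
    }

    public static void main(String[] args) {
        // A few x coordinates of the hole from the polygon in issue #5773.
        double[] hole = { -85.0000002, -85.0001983, -85.0010618, -84.9991593, -84.9996815 };
        System.out.println(extremeX(hole)); // westernmost point of the hole: -85.0010618
    }
}
```

As the later comments on the issue point out, this still leaves failures when the chosen hole coordinate lines up exactly with two or more shell segments, so treat the sketch as the idea behind this particular PR, not a complete solution.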
{
"body": "the node level TTL Purger thread fires up bulk delete request that might trigger `auto_create_index` if the purger thread runs a bulk after the index has been deleted.\n",
"comments": [
{
"body": "Possible solutions:\n- Add a bulk request option to never create new indices, even if `action.auto_create_index` is activated (however this is some sort of override-specialty and not too consistent) and always set this option in the purger\n- Duplicate the check in https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java#L95-L114 in the TTLPurger and ensure that the indices do exist. Biggest problem: This again leaves a small window open between the check and the execution, where an index might be deleted\n\nMaybe there is a cleaner solution, which I cant spot ATM\n",
"created_at": "2014-04-10T16:49:49Z"
},
{
"body": "I know its technically a breaking change, but technically you could also stop any sort of DELETE requests from auto creating the index.\n",
"created_at": "2014-04-10T16:51:53Z"
},
{
"body": "@nik9000 we could, and we should definitely start another issue to discuss if thats what we want to do or not (at least for master)\n",
"created_at": "2014-04-10T16:59:18Z"
},
{
"body": "@kimchy makes sense to me.\n",
"created_at": "2014-04-10T17:00:02Z"
},
{
"body": "@spinscale @kimchy @martijnvg I think I have a more elegant solution for this particular problem I just added a commit and will open a PR soon\n",
"created_at": "2014-04-14T10:08:17Z"
}
],
"number": 5766,
"title": "TTL Purge Thread might bring back already deleted index"
} | {
"body": "This prevents executing bulks internal autocreate indices logic\nand ensures that this internal request never creates an index\nautomaticall.\n\nThis fixes a bug where the TTL purger thread ran after the actual\nindex it was purging was already closed / deleted and that re-created\nthat index.\n\nCloses #5766\n",
"number": 5795,
"review_comments": [
{
"body": "Do we need this?\n",
"created_at": "2014-04-14T21:48:27Z"
},
{
"body": "we don't I will fix\n",
"created_at": "2014-04-15T09:52:49Z"
},
{
"body": "I think we might miss some responses in case of onFailure() because it will be using responses created [here](https://github.com/s1monw/elasticsearch/blob/issues/5766/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java#L94) \n",
"created_at": "2014-04-15T10:24:28Z"
},
{
"body": "yeah you are right test fail all over the place :) I will revert this in a sec\n",
"created_at": "2014-04-15T10:26:41Z"
}
],
"title": "Use TransportBulkAction for internal request from IndicesTTLService"
} | {
"commits": [
{
"message": "Use TransportBulkAction for internal request from IndicesTTLService\n\nThis prevents executing bulks internal autocreate indices logic\nand ensures that this internal request never creates an index\nautomaticall.\n\nThis fixes a bug where the TTL purger thread ran after the actual\nindex it was purging was already closed / deleted and that re-created\nthat index.\n\nCloses #5766"
}
],
"files": [
{
"diff": "@@ -1056,7 +1056,7 @@\n <exclude>org/elasticsearch/bootstrap/Bootstrap.class</exclude>\n <exclude>org/elasticsearch/Version.class</exclude>\n <exclude>org/elasticsearch/index/merge/Merges.class</exclude>\n-\t\t\t\t<exclude>org/elasticsearch/common/lucene/search/Queries$QueryWrapperFilterFactory.class</exclude>\n+ <exclude>org/elasticsearch/common/lucene/search/Queries$QueryWrapperFilterFactory.class</exclude>\n <!-- end excludes for valid system-out -->\n <!-- start excludes for Unsafe -->\n <exclude>org/elasticsearch/common/util/UnsafeUtils.class</exclude>",
"filename": "pom.xml",
"status": "modified"
},
{
"diff": "@@ -53,8 +53,10 @@\n import org.elasticsearch.transport.TransportChannel;\n import org.elasticsearch.transport.TransportService;\n \n-import java.util.*;\n-import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.List;\n+import java.util.Locale;\n+import java.util.Map;\n+import java.util.Set;\n import java.util.concurrent.atomic.AtomicInteger;\n \n /**\n@@ -178,7 +180,18 @@ private boolean setResponseFailureIfIndexMatches(AtomicArray<BulkItemResponse> r\n return false;\n }\n \n- private void executeBulk(final BulkRequest bulkRequest, final long startTime, final ActionListener<BulkResponse> listener, final AtomicArray<BulkItemResponse> responses) {\n+ /**\n+ * This method executes the {@link BulkRequest} and calls the given listener once the request returns.\n+ * This method will not create any indices even if auto-create indices is enabled.\n+ *\n+ * @see #doExecute(BulkRequest, org.elasticsearch.action.ActionListener)\n+ */\n+ public void executeBulk(final BulkRequest bulkRequest, final ActionListener<BulkResponse> listener) {\n+ final long startTime = System.currentTimeMillis();\n+ executeBulk(bulkRequest, startTime, listener, new AtomicArray<BulkItemResponse>(bulkRequest.requests.size()));\n+ }\n+\n+ private void executeBulk(final BulkRequest bulkRequest, final long startTime, final ActionListener<BulkResponse> listener, final AtomicArray<BulkItemResponse> responses ) {\n ClusterState clusterState = clusterService.state();\n // TODO use timeout to wait here if its blocked...\n clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.WRITE);",
"filename": "src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java",
"status": "modified"
},
{
"diff": "@@ -27,10 +27,8 @@\n import org.apache.lucene.search.Scorer;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.bulk.BulkRequestBuilder;\n-import org.elasticsearch.action.bulk.BulkResponse;\n+import org.elasticsearch.action.bulk.*;\n import org.elasticsearch.action.delete.DeleteRequest;\n-import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -44,7 +42,6 @@\n import org.elasticsearch.index.fieldvisitor.UidAndRoutingFieldsVisitor;\n import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.FieldMappers;\n-import org.elasticsearch.index.mapper.MapperService;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.internal.TTLFieldMapper;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n@@ -57,6 +54,11 @@\n import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+import java.util.concurrent.locks.Condition;\n+import java.util.concurrent.locks.ReentrantLock;\n \n \n /**\n@@ -69,68 +71,83 @@ public class IndicesTTLService extends AbstractLifecycleComponent<IndicesTTLServ\n \n private final ClusterService clusterService;\n private final IndicesService indicesService;\n- private final Client client;\n+ private final TransportBulkAction bulkAction;\n \n- private volatile TimeValue interval;\n private final int bulkSize;\n private PurgerThread purgerThread;\n \n @Inject\n- public IndicesTTLService(Settings settings, ClusterService clusterService, IndicesService indicesService, NodeSettingsService nodeSettingsService, Client client) {\n+ public IndicesTTLService(Settings settings, ClusterService clusterService, IndicesService indicesService, NodeSettingsService nodeSettingsService, TransportBulkAction bulkAction) {\n super(settings);\n this.clusterService = clusterService;\n this.indicesService = indicesService;\n- this.client = client;\n- this.interval = componentSettings.getAsTime(\"interval\", TimeValue.timeValueSeconds(60));\n+ TimeValue interval = componentSettings.getAsTime(\"interval\", TimeValue.timeValueSeconds(60));\n+ this.bulkAction = bulkAction;\n this.bulkSize = componentSettings.getAsInt(\"bulk_size\", 10000);\n+ this.purgerThread = new PurgerThread(EsExecutors.threadName(settings, \"[ttl_expire]\"), interval);\n \n nodeSettingsService.addListener(new ApplySettings());\n }\n \n @Override\n protected void doStart() throws ElasticsearchException {\n- this.purgerThread = new PurgerThread(EsExecutors.threadName(settings, \"[ttl_expire]\"));\n this.purgerThread.start();\n }\n \n @Override\n protected void doStop() throws ElasticsearchException {\n- this.purgerThread.doStop();\n- this.purgerThread.interrupt();\n+ try {\n+ this.purgerThread.shutdown();\n+ } catch (InterruptedException e) {\n+ Thread.interrupted();\n+ }\n }\n \n @Override\n protected void doClose() throws ElasticsearchException {\n }\n \n private class PurgerThread extends Thread {\n- volatile boolean running = true;\n+ private final AtomicBoolean running = new AtomicBoolean(true);\n+ private final Notifier notifier;\n+ private final CountDownLatch shutdownLatch = new CountDownLatch(1);\n+\n \n- public PurgerThread(String name) {\n+ public 
PurgerThread(String name, TimeValue interval) {\n super(name);\n setDaemon(true);\n+ this.notifier = new Notifier(interval);\n+ }\n+\n+ public void shutdown() throws InterruptedException {\n+ if (running.compareAndSet(true, false)) {\n+ notifier.doNotify();\n+ shutdownLatch.await();\n+ }\n+\n }\n \n- public void doStop() {\n- running = false;\n+ public void resetInterval(TimeValue interval) {\n+ notifier.setTimeout(interval);\n }\n \n public void run() {\n- while (running) {\n- try {\n- List<IndexShard> shardsToPurge = getShardsToPurge();\n- purgeShards(shardsToPurge);\n- } catch (Throwable e) {\n- if (running) {\n- logger.warn(\"failed to execute ttl purge\", e);\n+ try {\n+ while (running.get()) {\n+ try {\n+ List<IndexShard> shardsToPurge = getShardsToPurge();\n+ purgeShards(shardsToPurge);\n+ } catch (Throwable e) {\n+ if (running.get()) {\n+ logger.warn(\"failed to execute ttl purge\", e);\n+ }\n+ }\n+ if (running.get()) {\n+ notifier.await();\n }\n }\n- try {\n- Thread.sleep(interval.millis());\n- } catch (InterruptedException e) {\n- // ignore, if we are interrupted because we are shutting down, running will be false\n- }\n-\n+ } finally {\n+ shutdownLatch.countDown();\n }\n }\n \n@@ -174,6 +191,10 @@ private List<IndexShard> getShardsToPurge() {\n }\n return shardsToPurge;\n }\n+\n+ public TimeValue getInterval() {\n+ return notifier.getTimeout();\n+ }\n }\n \n private void purgeShards(List<IndexShard> shardsToPurge) {\n@@ -182,11 +203,13 @@ private void purgeShards(List<IndexShard> shardsToPurge) {\n Engine.Searcher searcher = shardToPurge.acquireSearcher(\"indices_ttl\");\n try {\n logger.debug(\"[{}][{}] purging shard\", shardToPurge.routingEntry().index(), shardToPurge.routingEntry().id());\n- ExpiredDocsCollector expiredDocsCollector = new ExpiredDocsCollector(shardToPurge.routingEntry().index());\n+ ExpiredDocsCollector expiredDocsCollector = new ExpiredDocsCollector();\n searcher.searcher().search(query, expiredDocsCollector);\n List<DocToPurge> docsToPurge = expiredDocsCollector.getDocsToPurge();\n- BulkRequestBuilder bulkRequest = client.prepareBulk();\n+\n+ BulkRequest bulkRequest = new BulkRequest();\n for (DocToPurge docToPurge : docsToPurge) {\n+\n bulkRequest.add(new DeleteRequest().index(shardToPurge.routingEntry().index()).type(docToPurge.type).id(docToPurge.id).version(docToPurge.version).routing(docToPurge.routing));\n bulkRequest = processBulkIfNeeded(bulkRequest, false);\n }\n@@ -214,12 +237,10 @@ public DocToPurge(String type, String id, long version, String routing) {\n }\n \n private class ExpiredDocsCollector extends Collector {\n- private final MapperService mapperService;\n private AtomicReaderContext context;\n private List<DocToPurge> docsToPurge = new ArrayList<>();\n \n- public ExpiredDocsCollector(String index) {\n- mapperService = indicesService.indexService(index).mapperService();\n+ public ExpiredDocsCollector() {\n }\n \n public void setScorer(Scorer scorer) {\n@@ -250,10 +271,10 @@ public List<DocToPurge> getDocsToPurge() {\n }\n }\n \n- private BulkRequestBuilder processBulkIfNeeded(BulkRequestBuilder bulkRequest, boolean force) {\n+ private BulkRequest processBulkIfNeeded(BulkRequest bulkRequest, boolean force) {\n if ((force && bulkRequest.numberOfActions() > 0) || bulkRequest.numberOfActions() >= bulkSize) {\n try {\n- bulkRequest.execute(new ActionListener<BulkResponse>() {\n+ bulkAction.executeBulk(bulkRequest, new ActionListener<BulkResponse>() {\n @Override\n public void onResponse(BulkResponse bulkResponse) {\n logger.trace(\"bulk took 
\" + bulkResponse.getTookInMillis() + \"ms\");\n@@ -267,18 +288,64 @@ public void onFailure(Throwable e) {\n } catch (Exception e) {\n logger.warn(\"failed to process bulk\", e);\n }\n- bulkRequest = client.prepareBulk();\n+ bulkRequest = new BulkRequest();\n }\n return bulkRequest;\n }\n \n class ApplySettings implements NodeSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n- TimeValue interval = settings.getAsTime(INDICES_TTL_INTERVAL, IndicesTTLService.this.interval);\n- if (!interval.equals(IndicesTTLService.this.interval)) {\n- logger.info(\"updating indices.ttl.interval from [{}] to [{}]\", IndicesTTLService.this.interval, interval);\n- IndicesTTLService.this.interval = interval;\n+ final TimeValue currentInterval = IndicesTTLService.this.purgerThread.getInterval();\n+ final TimeValue interval = settings.getAsTime(INDICES_TTL_INTERVAL, currentInterval);\n+ if (!interval.equals(currentInterval)) {\n+ logger.info(\"updating indices.ttl.interval from [{}] to [{}]\",currentInterval, interval);\n+ IndicesTTLService.this.purgerThread.resetInterval(interval);\n+\n+ }\n+ }\n+ }\n+\n+\n+ private static final class Notifier {\n+\n+ private final ReentrantLock lock = new ReentrantLock();\n+ private final Condition condition = lock.newCondition();\n+ private volatile TimeValue timeout;\n+\n+ public Notifier(TimeValue timeout) {\n+ assert timeout != null;\n+ this.timeout = timeout;\n+ }\n+\n+ public void await() {\n+ lock.lock();\n+ try {\n+ condition.await(timeout.millis(), TimeUnit.MILLISECONDS);\n+ } catch (InterruptedException e) {\n+ Thread.interrupted();\n+ } finally {\n+ lock.unlock();\n+ }\n+\n+ }\n+\n+ public void setTimeout(TimeValue timeout) {\n+ assert timeout != null;\n+ this.timeout = timeout;\n+ doNotify();\n+ }\n+\n+ public TimeValue getTimeout() {\n+ return timeout;\n+ }\n+\n+ public void doNotify() {\n+ lock.lock();\n+ try {\n+ condition.signalAll();\n+ } finally {\n+ lock.unlock();\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/indices/ttl/IndicesTTLService.java",
"status": "modified"
},
{
"diff": "@@ -30,6 +30,7 @@\n import org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n@@ -51,14 +52,12 @@ protected Settings nodeSettings(int nodeOrdinal) {\n return settingsBuilder()\n .put(super.nodeSettings(nodeOrdinal))\n .put(\"indices.ttl.interval\", PURGE_INTERVAL)\n- .put(\"action.auto_create_index\", false) // see #5766\n .build();\n }\n \n @Test\n public void testPercolatingWithTimeToLive() throws Exception {\n final Client client = client();\n- client.admin().indices().prepareDelete(\"_all\").execute().actionGet();\n ensureGreen();\n \n String percolatorMapping = XContentFactory.jsonBuilder().startObject().startObject(PercolatorService.TYPE_NAME)\n@@ -150,4 +149,56 @@ public boolean apply(Object input) {\n assertThat(percolateResponse.getMatches(), emptyArray());\n }\n \n+\n+ @Test\n+ public void testEnsureTTLDoesNotCreateIndex() throws IOException, InterruptedException {\n+ final Client client = client();\n+ ensureGreen();\n+ client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder()\n+ .put(\"indices.ttl.interval\", 60) // 60 sec\n+ .build()).get();\n+\n+ String typeMapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"_ttl\").field(\"enabled\", true).endObject()\n+ .endObject().endObject().string();\n+\n+ client.admin().indices().prepareCreate(\"test\")\n+ .setSettings(settingsBuilder().put(\"index.number_of_shards\", 1))\n+ .addMapping(\"type1\", typeMapping)\n+ .execute().actionGet();\n+ ensureGreen();\n+ client().admin().cluster().prepareUpdateSettings().setTransientSettings(settingsBuilder()\n+ .put(\"indices.ttl.interval\", 1) // 60 sec\n+ .build()).get();\n+\n+ for (int i = 0; i < 100; i++) {\n+ logger.info(\"index: \" + i);\n+ client.prepareIndex(\"test\", \"type1\", \"\" + i).setSource(jsonBuilder()\n+ .startObject()\n+ .startObject(\"query\")\n+ .startObject(\"term\")\n+ .field(\"field1\", \"value1\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ ).setTTL(randomIntBetween(10, 500)).execute().actionGet();\n+ }\n+ refresh();\n+ assertThat(awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object input) {\n+ IndicesStatsResponse indicesStatsResponse = client.admin().indices().prepareStats(\"test\").clear().setIndexing(true).get();\n+ logger.info(\"delete count [{}]\", indicesStatsResponse.getIndices().get(\"test\").getTotal().getIndexing().getTotal().getDeleteCount());\n+ // TTL deletes one doc, but it is indexed in the primary shard and replica shards\n+ return indicesStatsResponse.getIndices().get(\"test\").getTotal().getIndexing().getTotal().getDeleteCount() != 0;\n+ }\n+ }, 5, TimeUnit.SECONDS), equalTo(true));\n+ cluster().wipeIndices(\"test\");\n+ client.admin().indices().prepareCreate(\"test\")\n+ .addMapping(\"type1\", typeMapping)\n+ .execute().actionGet();\n+\n+\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/percolator/TTLPercolatorTests.java",
"status": "modified"
}
]
} |
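The TTL record above ends with the new `Notifier`-based purger thread and a settings listener that resets its interval at runtime. As a rough usage sketch (the `Client` handle is an assumption; the setting key and builder calls mirror the test changes visible in the record), updating `indices.ttl.interval` through the cluster settings API is what drives that listener:

```java
import org.elasticsearch.client.Client;

import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;

public class TtlIntervalUpdate {
    // "client" is an assumed, already-connected Client; the setting key and the
    // transient-settings call mirror the TTLPercolatorTests changes above.
    public static void setPurgeInterval(Client client, String interval) {
        // The ApplySettings listener in IndicesTTLService picks this up and calls
        // purgerThread.resetInterval(...), so the new value takes effect without a restart.
        client.admin().cluster().prepareUpdateSettings()
                .setTransientSettings(settingsBuilder()
                        .put("indices.ttl.interval", interval) // e.g. "1s" or "60s"
                        .build())
                .get();
    }
}
```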
{
"body": "The snapshot status request \n\n```\n curl -XGET \"localhost:9200/_snapshot/_status\"\n```\n\nreturns an error\n\n```\n{\"error\":\"ActionRequestValidationException[Validation Failed: 1: repository is missing;]\",\"status\":500}\n```\n\nIt should return all a list of currently running snapshots in all repositories instead.\n",
"comments": [],
"number": 5790,
"title": "Snapshot Status failing without repository"
} | {
"body": "The snapshot status command with empty repository should return current status of currently running snapshots in all repositories.\n\nFixes #5790\n",
"number": 5791,
"review_comments": [],
"title": "Fix snapshot status with empty repository"
} | {
"commits": [
{
"message": "Fix snapshot status with empty repository\n\nThe snapshot status command with empty repository should return current status of currently running snapshots in all repositories.\n\nFixes #5790"
}
],
"files": [
{
"diff": "@@ -34,11 +34,11 @@\n */\n public class SnapshotsStatusRequest extends MasterNodeOperationRequest<SnapshotsStatusRequest> {\n \n- private String repository;\n+ private String repository = \"_all\";\n \n private String[] snapshots = Strings.EMPTY_ARRAY;\n \n- SnapshotsStatusRequest() {\n+ public SnapshotsStatusRequest() {\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequest.java",
"status": "modified"
},
{
"diff": "@@ -87,7 +87,8 @@ protected void masterOperation(final SnapshotsStatusRequest request,\n ImmutableList<SnapshotMetaData.Entry> currentSnapshots = snapshotsService.currentSnapshots(request.repository(), request.snapshots());\n \n if (currentSnapshots.isEmpty()) {\n- buildResponse(request, currentSnapshots, null);\n+ listener.onResponse(buildResponse(request, currentSnapshots, null));\n+ return;\n }\n \n Set<String> nodesIds = newHashSet();",
"filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java",
"status": "modified"
},
{
"diff": "@@ -447,4 +447,9 @@ public interface ClusterAdminClient {\n */\n SnapshotsStatusRequestBuilder prepareSnapshotStatus(String repository);\n \n+ /**\n+ * Get snapshot status.\n+ */\n+ SnapshotsStatusRequestBuilder prepareSnapshotStatus();\n+\n }",
"filename": "src/main/java/org/elasticsearch/client/ClusterAdminClient.java",
"status": "modified"
},
{
"diff": "@@ -419,4 +419,9 @@ public void snapshotsStatus(SnapshotsStatusRequest request, ActionListener<Snaps\n public SnapshotsStatusRequestBuilder prepareSnapshotStatus(String repository) {\n return new SnapshotsStatusRequestBuilder(this, repository);\n }\n+\n+ @Override\n+ public SnapshotsStatusRequestBuilder prepareSnapshotStatus() {\n+ return new SnapshotsStatusRequestBuilder(this);\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/client/support/AbstractClusterAdminClient.java",
"status": "modified"
},
{
"diff": "@@ -49,7 +49,7 @@ public RestSnapshotsStatusAction(Settings settings, Client client, RestControlle\n \n @Override\n public void handleRequest(final RestRequest request, final RestChannel channel) {\n- String repository = request.param(\"repository\");\n+ String repository = request.param(\"repository\", \"_all\");\n String[] snapshots = request.paramAsStringArray(\"snapshot\", Strings.EMPTY_ARRAY);\n if (snapshots.length == 1 && \"_all\".equalsIgnoreCase(snapshots[0])) {\n snapshots = Strings.EMPTY_ARRAY;",
"filename": "src/main/java/org/elasticsearch/rest/action/admin/cluster/snapshots/status/RestSnapshotsStatusAction.java",
"status": "modified"
},
{
"diff": "@@ -382,7 +382,7 @@ public ImmutableList<SnapshotMetaData.Entry> currentSnapshots(String repository,\n if (snapshotMetaData == null || snapshotMetaData.entries().isEmpty()) {\n return ImmutableList.of();\n }\n- if (repository == null) {\n+ if (\"_all\".equals(repository)) {\n return snapshotMetaData.entries();\n }\n if (snapshotMetaData.entries().size() == 1) {",
"filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java",
"status": "modified"
},
{
"diff": "@@ -1036,12 +1036,8 @@ public void snapshotStatusTest() throws Exception {\n logger.info(\"--> waiting for block to kick in\");\n waitForBlock(blockedNode, \"test-repo\", TimeValue.timeValueSeconds(60));\n \n- logger.info(\"--> execution was blocked on node [{}], checking snapshot status\", blockedNode);\n+ logger.info(\"--> execution was blocked on node [{}], checking snapshot status with specified repository and snapshot\", blockedNode);\n SnapshotsStatusResponse response = client.admin().cluster().prepareSnapshotStatus(\"test-repo\").execute().actionGet();\n-\n- logger.info(\"--> unblocking blocked node\");\n- unblockNode(blockedNode);\n-\n assertThat(response.getSnapshots().size(), equalTo(1));\n SnapshotStatus snapshotStatus = response.getSnapshots().get(0);\n assertThat(snapshotStatus.getState(), equalTo(SnapshotMetaData.State.STARTED));\n@@ -1053,6 +1049,22 @@ public void snapshotStatusTest() throws Exception {\n }\n }\n \n+ logger.info(\"--> checking snapshot status for all currently running and snapshot with empty repository\", blockedNode);\n+ response = client.admin().cluster().prepareSnapshotStatus().execute().actionGet();\n+ assertThat(response.getSnapshots().size(), equalTo(1));\n+ snapshotStatus = response.getSnapshots().get(0);\n+ assertThat(snapshotStatus.getState(), equalTo(SnapshotMetaData.State.STARTED));\n+ // We blocked the node during data write operation, so at least one shard snapshot should be in STARTED stage\n+ assertThat(snapshotStatus.getShardsStats().getStartedShards(), greaterThan(0));\n+ for( SnapshotIndexShardStatus shardStatus : snapshotStatus.getIndices().get(\"test-idx\")) {\n+ if (shardStatus.getStage() == SnapshotIndexShardStage.STARTED) {\n+ assertThat(shardStatus.getNodeId(), notNullValue());\n+ }\n+ }\n+\n+ logger.info(\"--> unblocking blocked node\");\n+ unblockNode(blockedNode);\n+\n SnapshotInfo snapshotInfo = waitForCompletion(\"test-repo\", \"test-snap\", TimeValue.timeValueSeconds(600));\n logger.info(\"Number of failed shards [{}]\", snapshotInfo.shardFailures().size());\n logger.info(\"--> done\");\n@@ -1069,6 +1081,10 @@ public void snapshotStatusTest() throws Exception {\n assertThat(indexStatus.getShardsStats().getDoneShards(), equalTo(snapshotInfo.successfulShards()));\n assertThat(indexStatus.getShards().size(), equalTo(snapshotInfo.totalShards()));\n \n+ logger.info(\"--> checking snapshot status after it is done with empty repository\", blockedNode);\n+ response = client.admin().cluster().prepareSnapshotStatus().execute().actionGet();\n+ assertThat(response.getSnapshots().size(), equalTo(0));\n+\n try {\n client.admin().cluster().prepareSnapshotStatus(\"test-repo\").addSnapshots(\"test-snap-doesnt-exist\").execute().actionGet();\n fail();",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
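As a usage-level sketch of the change in the record above (not code from the PR itself): the new no-argument `prepareSnapshotStatus()` builder lets a Java client ask for the status of every currently running snapshot without naming a repository. The `client` handle and the printing are assumptions; the builder and response accessors come from the diff and test above.

```java
import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotStatus;
import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse;
import org.elasticsearch.client.Client;

public class RunningSnapshotsStatus {
    // "client" is an assumed Client; prepareSnapshotStatus() with no repository
    // is the builder added by this PR, which defaults the repository to "_all".
    public static void printRunningSnapshots(Client client) {
        SnapshotsStatusResponse response = client.admin().cluster()
                .prepareSnapshotStatus()
                .get();
        for (SnapshotStatus status : response.getSnapshots()) {
            // each entry describes one snapshot that is currently in progress
            System.out.println(status.getState());
        }
    }
}
```

On the REST side the same `_all` default is applied in `RestSnapshotsStatusAction`, which is why the `GET /_snapshot/_status` call from the original report no longer fails validation.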
{
"body": "We currently play some tricks to make sure merges are not picked up on indexing threads such that the threadpool that is used for indexing is not consumed by merges. Yet, there are a couple of problems that prevent merges to be picked up since we are checking if the IW has pending merges but that datastructure might not be picked up due to a problem in the way we try to prevent the merges to happen. There is also a problem where we don't refresh a reader due to merges where we could / should do that. This does NOT have an impact on correctness but it can have an impact on the amount of segments that are used for searching and for resolving versions to. Both have performance impacts if the number of segments is large.\n",
"comments": [],
"number": 5779,
"title": "Merges might not be picked up when they are ready"
} | {
"body": "```\nDue to the default of `async_merge` to `true` we never run\nthe merge policy on a segment flush which prevented the\npending merges from being updated and that caused actual\npending merges not to contribute to the merge decision.\n\nThis commit removes the `index.async.merge` setting. The setting is \nactually misleading since we take care of this on a different level \n(the merge scheduler) since 1.1.\n\nThis commit also adds an additional check when to run a refresh\nsince solely relying on the dirty flag might leave merges un-refreshed\nwhich can cause search slowdowns and higher memory consumption.\n```\n\nCloses #5779\n",
"number": 5780,
"review_comments": [],
"title": "Ensure pending merges are updated on segment flushes"
} | {
"commits": [
{
"message": "Ensure pending merges are updated on segment flushes\n\nDue to the default of `async_merge` to `true` we never run\nthe merge policy on a segment flush which prevented the\npending merges from being updated and that caused actual\npending merges not to contribute to the merge decision.\n\nThis commit also removes the `index.async.merge` setting is actually\nmisleading since we take care of merges not being excecuted on the\nindexing threads on a different level (the merge scheduler) since 1.1.\n\nThis commit also adds an additional check when to run a refresh\nsince soely relying on the dirty flag might leave merges un-refreshed\nwhich can cause search slowdowns and higher memory consumption.\n\nCloses #5779"
},
{
"message": "[TEST] Improve performance of MockBigArray MockPageRecycler"
}
],
"files": [
{
"diff": "@@ -674,7 +674,15 @@ protected Searcher newSearcher(String source, IndexSearcher searcher, SearcherMa\n \n @Override\n public boolean refreshNeeded() {\n- return dirty;\n+ try {\n+ // we are either dirty due to a document added or due to a\n+ // finished merge - either way we should refresh\n+ return dirty || !searcherManager.isSearcherCurrent();\n+ } catch (IOException e) {\n+ logger.error(\"failed to access searcher manager\", e);\n+ failEngine(e);\n+ throw new EngineException(shardId, \"failed to access searcher manager\",e);\n+ }\n }\n \n @Override\n@@ -706,7 +714,7 @@ public void refresh(Refresh refresh) throws EngineException {\n // maybeRefresh will only allow one refresh to execute, and the rest will \"pass through\",\n // but, we want to make sure not to loose ant refresh calls, if one is taking time\n synchronized (refreshMutex) {\n- if (dirty || refresh.force()) {\n+ if (refreshNeeded() || refresh.force()) {\n // we set dirty to false, even though the refresh hasn't happened yet\n // as the refresh only holds for data indexed before it. Any data indexed during\n // the refresh will not be part of it and will set the dirty flag back to true\n@@ -926,7 +934,7 @@ private void refreshVersioningTable(long time) {\n \n @Override\n public void maybeMerge() throws EngineException {\n- if (!possibleMergeNeeded) {\n+ if (!possibleMergeNeeded()) {\n return;\n }\n possibleMergeNeeded = false;",
"filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,6 @@\n \n import org.apache.lucene.index.LogByteSizeMergePolicy;\n import org.apache.lucene.index.MergePolicy;\n-import org.apache.lucene.index.SegmentInfos;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.Preconditions;\n import org.elasticsearch.common.inject.Inject;\n@@ -31,7 +30,6 @@\n import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.index.store.Store;\n \n-import java.io.IOException;\n import java.util.Set;\n import java.util.concurrent.CopyOnWriteArraySet;\n \n@@ -41,13 +39,14 @@\n public class LogByteSizeMergePolicyProvider extends AbstractMergePolicyProvider<LogByteSizeMergePolicy> {\n \n private final IndexSettingsService indexSettingsService;\n-\n+ public static final String MAX_MERGE_BYTE_SIZE_KEY = \"index.merge.policy.max_merge_sizes\";\n+ public static final String MIN_MERGE_BYTE_SIZE_KEY = \"index.merge.policy.min_merge_size\";\n+ public static final String MERGE_FACTORY_KEY = \"index.merge.policy.merge_factor\";\n private volatile ByteSizeValue minMergeSize;\n private volatile ByteSizeValue maxMergeSize;\n private volatile int mergeFactor;\n private volatile int maxMergeDocs;\n private final boolean calibrateSizeByDeletes;\n- private boolean asyncMerge;\n \n private final Set<CustomLogByteSizeMergePolicy> policies = new CopyOnWriteArraySet<>();\n \n@@ -63,21 +62,15 @@ public LogByteSizeMergePolicyProvider(Store store, IndexSettingsService indexSet\n this.mergeFactor = componentSettings.getAsInt(\"merge_factor\", LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR);\n this.maxMergeDocs = componentSettings.getAsInt(\"max_merge_docs\", LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS);\n this.calibrateSizeByDeletes = componentSettings.getAsBoolean(\"calibrate_size_by_deletes\", true);\n- this.asyncMerge = indexSettings.getAsBoolean(\"index.merge.async\", true);\n- logger.debug(\"using [log_bytes_size] merge policy with merge_factor[{}], min_merge_size[{}], max_merge_size[{}], max_merge_docs[{}], calibrate_size_by_deletes[{}], async_merge[{}]\",\n- mergeFactor, minMergeSize, maxMergeSize, maxMergeDocs, calibrateSizeByDeletes, asyncMerge);\n+ logger.debug(\"using [log_bytes_size] merge policy with merge_factor[{}], min_merge_size[{}], max_merge_size[{}], max_merge_docs[{}], calibrate_size_by_deletes[{}]\",\n+ mergeFactor, minMergeSize, maxMergeSize, maxMergeDocs, calibrateSizeByDeletes);\n \n indexSettingsService.addListener(applySettings);\n }\n \n @Override\n public LogByteSizeMergePolicy newMergePolicy() {\n- CustomLogByteSizeMergePolicy mergePolicy;\n- if (asyncMerge) {\n- mergePolicy = new EnableMergeLogByteSizeMergePolicy(this);\n- } else {\n- mergePolicy = new CustomLogByteSizeMergePolicy(this);\n- }\n+ final CustomLogByteSizeMergePolicy mergePolicy = new CustomLogByteSizeMergePolicy(this);\n mergePolicy.setMinMergeMB(minMergeSize.mbFrac());\n mergePolicy.setMaxMergeMB(maxMergeSize.mbFrac());\n mergePolicy.setMergeFactor(mergeFactor);\n@@ -173,19 +166,4 @@ public MergePolicy clone() {\n }\n }\n \n- public static class EnableMergeLogByteSizeMergePolicy extends CustomLogByteSizeMergePolicy {\n-\n- public EnableMergeLogByteSizeMergePolicy(LogByteSizeMergePolicyProvider provider) {\n- super(provider);\n- }\n-\n- @Override\n- public MergeSpecification findMerges(MergeTrigger trigger, SegmentInfos infos) throws IOException {\n- // we don't enable merges while indexing documents, we do them in the background\n- if (trigger == MergeTrigger.SEGMENT_FLUSH) {\n- return null;\n- }\n- return 
super.findMerges(trigger, infos);\n- }\n- }\n }",
"filename": "src/main/java/org/elasticsearch/index/merge/policy/LogByteSizeMergePolicyProvider.java",
"status": "modified"
},
{
"diff": "@@ -20,16 +20,13 @@\n package org.elasticsearch.index.merge.policy;\n \n import org.apache.lucene.index.LogDocMergePolicy;\n-import org.apache.lucene.index.MergePolicy;\n-import org.apache.lucene.index.SegmentInfos;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.Preconditions;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.index.store.Store;\n \n-import java.io.IOException;\n import java.util.Set;\n import java.util.concurrent.CopyOnWriteArraySet;\n \n@@ -39,12 +36,13 @@\n public class LogDocMergePolicyProvider extends AbstractMergePolicyProvider<LogDocMergePolicy> {\n \n private final IndexSettingsService indexSettingsService;\n-\n+ public static final String MAX_MERGE_DOCS_KEY = \"index.merge.policy.max_merge_docs\";\n+ public static final String MIN_MERGE_DOCS_KEY = \"index.merge.policy.min_merge_docs\";\n+ public static final String MERGE_FACTORY_KEY = \"index.merge.policy.merge_factor\";\n private volatile int minMergeDocs;\n private volatile int maxMergeDocs;\n private volatile int mergeFactor;\n private final boolean calibrateSizeByDeletes;\n- private boolean asyncMerge;\n \n private final Set<CustomLogDocMergePolicy> policies = new CopyOnWriteArraySet<>();\n \n@@ -60,9 +58,8 @@ public LogDocMergePolicyProvider(Store store, IndexSettingsService indexSettings\n this.maxMergeDocs = componentSettings.getAsInt(\"max_merge_docs\", LogDocMergePolicy.DEFAULT_MAX_MERGE_DOCS);\n this.mergeFactor = componentSettings.getAsInt(\"merge_factor\", LogDocMergePolicy.DEFAULT_MERGE_FACTOR);\n this.calibrateSizeByDeletes = componentSettings.getAsBoolean(\"calibrate_size_by_deletes\", true);\n- this.asyncMerge = indexSettings.getAsBoolean(\"index.merge.async\", true);\n- logger.debug(\"using [log_doc] merge policy with merge_factor[{}], min_merge_docs[{}], max_merge_docs[{}], calibrate_size_by_deletes[{}], async_merge[{}]\",\n- mergeFactor, minMergeDocs, maxMergeDocs, calibrateSizeByDeletes, asyncMerge);\n+ logger.debug(\"using [log_doc] merge policy with merge_factor[{}], min_merge_docs[{}], max_merge_docs[{}], calibrate_size_by_deletes[{}]\",\n+ mergeFactor, minMergeDocs, maxMergeDocs, calibrateSizeByDeletes);\n \n indexSettingsService.addListener(applySettings);\n }\n@@ -74,12 +71,7 @@ public void close() throws ElasticsearchException {\n \n @Override\n public LogDocMergePolicy newMergePolicy() {\n- CustomLogDocMergePolicy mergePolicy;\n- if (asyncMerge) {\n- mergePolicy = new EnableMergeLogDocMergePolicy(this);\n- } else {\n- mergePolicy = new CustomLogDocMergePolicy(this);\n- }\n+ final CustomLogDocMergePolicy mergePolicy = new CustomLogDocMergePolicy(this);\n mergePolicy.setMinMergeDocs(minMergeDocs);\n mergePolicy.setMaxMergeDocs(maxMergeDocs);\n mergePolicy.setMergeFactor(mergeFactor);\n@@ -150,27 +142,4 @@ public void close() {\n provider.policies.remove(this);\n }\n }\n-\n- public static class EnableMergeLogDocMergePolicy extends CustomLogDocMergePolicy {\n-\n- public EnableMergeLogDocMergePolicy(LogDocMergePolicyProvider provider) {\n- super(provider);\n- }\n-\n- @Override\n- public MergeSpecification findMerges(MergeTrigger trigger, SegmentInfos infos) throws IOException {\n- // we don't enable merges while indexing documents, we do them in the background\n- if (trigger == MergeTrigger.SEGMENT_FLUSH) {\n- return null;\n- }\n- return super.findMerges(trigger, infos);\n- }\n- \n- @Override\n- public 
MergePolicy clone() {\n- // Lucene IW makes a clone internally but since we hold on to this instance \n- // the clone will just be the identity.\n- return this;\n- }\n- }\n }",
"filename": "src/main/java/org/elasticsearch/index/merge/policy/LogDocMergePolicyProvider.java",
"status": "modified"
},
{
"diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.index.merge.policy;\n \n import org.apache.lucene.index.MergePolicy;\n-import org.apache.lucene.index.SegmentInfos;\n import org.apache.lucene.index.TieredMergePolicy;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.inject.Inject;\n@@ -30,7 +29,6 @@\n import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.index.store.Store;\n \n-import java.io.IOException;\n import java.util.Set;\n import java.util.concurrent.CopyOnWriteArraySet;\n \n@@ -47,7 +45,6 @@ public class TieredMergePolicyProvider extends AbstractMergePolicyProvider<Tiere\n private volatile ByteSizeValue maxMergedSegment;\n private volatile double segmentsPerTier;\n private volatile double reclaimDeletesWeight;\n- private boolean asyncMerge;\n \n private final ApplySettings applySettings = new ApplySettings();\n \n@@ -57,7 +54,6 @@ public class TieredMergePolicyProvider extends AbstractMergePolicyProvider<Tiere\n public TieredMergePolicyProvider(Store store, IndexSettingsService indexSettingsService) {\n super(store);\n this.indexSettingsService = indexSettingsService;\n- this.asyncMerge = indexSettings.getAsBoolean(\"index.merge.async\", true);\n this.forceMergeDeletesPctAllowed = componentSettings.getAsDouble(\"expunge_deletes_allowed\", 10d); // percentage\n this.floorSegment = componentSettings.getAsBytesSize(\"floor_segment\", new ByteSizeValue(2, ByteSizeUnit.MB));\n this.maxMergeAtOnce = componentSettings.getAsInt(\"max_merge_at_once\", 10);\n@@ -69,8 +65,8 @@ public TieredMergePolicyProvider(Store store, IndexSettingsService indexSettings\n \n fixSettingsIfNeeded();\n \n- logger.debug(\"using [tiered] merge policy with expunge_deletes_allowed[{}], floor_segment[{}], max_merge_at_once[{}], max_merge_at_once_explicit[{}], max_merged_segment[{}], segments_per_tier[{}], reclaim_deletes_weight[{}], async_merge[{}]\",\n- forceMergeDeletesPctAllowed, floorSegment, maxMergeAtOnce, maxMergeAtOnceExplicit, maxMergedSegment, segmentsPerTier, reclaimDeletesWeight, asyncMerge);\n+ logger.debug(\"using [tiered] merge policy with expunge_deletes_allowed[{}], floor_segment[{}], max_merge_at_once[{}], max_merge_at_once_explicit[{}], max_merged_segment[{}], segments_per_tier[{}], reclaim_deletes_weight[{}]\",\n+ forceMergeDeletesPctAllowed, floorSegment, maxMergeAtOnce, maxMergeAtOnceExplicit, maxMergedSegment, segmentsPerTier, reclaimDeletesWeight);\n \n indexSettingsService.addListener(applySettings);\n }\n@@ -91,12 +87,7 @@ private void fixSettingsIfNeeded() {\n \n @Override\n public TieredMergePolicy newMergePolicy() {\n- CustomTieredMergePolicyProvider mergePolicy;\n- if (asyncMerge) {\n- mergePolicy = new EnableMergeTieredMergePolicyProvider(this);\n- } else {\n- mergePolicy = new CustomTieredMergePolicyProvider(this);\n- }\n+ final CustomTieredMergePolicyProvider mergePolicy = new CustomTieredMergePolicyProvider(this);\n mergePolicy.setNoCFSRatio(noCFSRatio);\n mergePolicy.setForceMergeDeletesPctAllowed(forceMergeDeletesPctAllowed);\n mergePolicy.setFloorSegmentMB(floorSegment.mbFrac());\n@@ -222,20 +213,4 @@ public MergePolicy clone() {\n return this;\n }\n }\n-\n- public static class EnableMergeTieredMergePolicyProvider extends CustomTieredMergePolicyProvider {\n-\n- public EnableMergeTieredMergePolicyProvider(TieredMergePolicyProvider provider) {\n- super(provider);\n- }\n-\n- @Override\n- public MergePolicy.MergeSpecification findMerges(MergeTrigger trigger, SegmentInfos infos) throws IOException {\n- // 
we don't enable merges while indexing documents, we do them in the background\n- if (trigger == MergeTrigger.SEGMENT_FLUSH) {\n- return null;\n- }\n- return super.findMerges(trigger, infos);\n- }\n- }\n }\n\\ No newline at end of file",
"filename": "src/main/java/org/elasticsearch/index/merge/policy/TieredMergePolicyProvider.java",
"status": "modified"
},
{
"diff": "@@ -152,4 +152,5 @@ private void assertTotalCompoundSegments(int i, int t, String index) {\n assertThat(total, Matchers.equalTo(t));\n \n }\n+\n }",
"filename": "src/test/java/org/elasticsearch/index/engine/internal/InternalEngineIntegrationTest.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,94 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.engine.internal;\n+\n+import com.carrotsearch.randomizedtesting.annotations.Nightly;\n+import com.carrotsearch.randomizedtesting.annotations.Seed;\n+import com.google.common.base.Predicate;\n+import org.apache.lucene.index.LogByteSizeMergePolicy;\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n+import org.elasticsearch.action.bulk.BulkRequestBuilder;\n+import org.elasticsearch.action.bulk.BulkResponse;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.client.Requests;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.index.merge.policy.LogDocMergePolicyProvider;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.hamcrest.Matchers;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n+\n+/**\n+ */\n+@ElasticsearchIntegrationTest.ClusterScope(numNodes = 1, scope = ElasticsearchIntegrationTest.Scope.SUITE)\n+public class InternalEngineMergeTests extends ElasticsearchIntegrationTest {\n+\n+ @Test\n+ @LuceneTestCase.Slow\n+ public void testMergesHappening() throws InterruptedException, IOException, ExecutionException {\n+ final int numOfShards = 5;\n+ // some settings to keep num segments low\n+ assertAcked(prepareCreate(\"test\").setSettings(ImmutableSettings.builder()\n+ .put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, numOfShards)\n+ .put(LogDocMergePolicyProvider.MIN_MERGE_DOCS_KEY, 10)\n+ .put(LogDocMergePolicyProvider.MERGE_FACTORY_KEY, 5)\n+ .put(LogByteSizeMergePolicy.DEFAULT_MIN_MERGE_MB, 0.5)\n+ .build()));\n+ long id = 0;\n+ final int rounds = scaledRandomIntBetween(50, 300);\n+ logger.info(\"Starting rounds [{}] \", rounds);\n+ for (int i = 0; i < rounds; ++i) {\n+ final int numDocs = scaledRandomIntBetween(100, 1000);\n+ BulkRequestBuilder request = client().prepareBulk();\n+ for (int j = 0; j < numDocs; ++j) {\n+ request.add(Requests.indexRequest(\"test\").type(\"type1\").id(Long.toString(id++)).source(jsonBuilder().startObject().field(\"l\", randomLong()).endObject()));\n+ }\n+ BulkResponse response = request.execute().actionGet();\n+ refresh();\n+ assertNoFailures(response);\n+ IndicesStatsResponse stats = 
client().admin().indices().prepareStats(\"test\").setSegments(true).setMerge(true).get();\n+ logger.info(\"index round [{}] - segments {}, total merges {}, current merge {}\", i, stats.getPrimaries().getSegments().getCount(), stats.getPrimaries().getMerge().getTotal(), stats.getPrimaries().getMerge().getCurrent());\n+ }\n+ awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(Object input) {\n+ IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).setMerge(true).get();\n+ logger.info(\"numshards {}, segments {}, total merges {}, current merge {}\", numOfShards, stats.getPrimaries().getSegments().getCount(), stats.getPrimaries().getMerge().getTotal(), stats.getPrimaries().getMerge().getCurrent());\n+ long current = stats.getPrimaries().getMerge().getCurrent();\n+ long count = stats.getPrimaries().getSegments().getCount();\n+ return count < 50 && current == 0;\n+ }\n+ });\n+ IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).setMerge(true).get();\n+ logger.info(\"numshards {}, segments {}, total merges {}, current merge {}\", numOfShards, stats.getPrimaries().getSegments().getCount(), stats.getPrimaries().getMerge().getTotal(), stats.getPrimaries().getMerge().getCurrent());\n+ long count = stats.getPrimaries().getSegments().getCount();\n+ assertThat(count, Matchers.lessThanOrEqualTo(50l));\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/index/engine/internal/InternalEngineMergeTests.java",
"status": "added"
},
{
"diff": "@@ -294,9 +294,7 @@ protected BigArray getDelegate() {\n \n @Override\n protected void randomizeContent(long from, long to) {\n- for (long i = from; i < to; ++i) {\n- set(i, (byte) random.nextInt(1 << 8));\n- }\n+ fill(from, to, (byte) random.nextInt(1 << 8));\n }\n \n @Override\n@@ -342,9 +340,7 @@ protected BigArray getDelegate() {\n \n @Override\n protected void randomizeContent(long from, long to) {\n- for (long i = from; i < to; ++i) {\n- set(i, random.nextInt());\n- }\n+ fill(from, to, random.nextInt());\n }\n \n @Override\n@@ -385,9 +381,7 @@ protected BigArray getDelegate() {\n \n @Override\n protected void randomizeContent(long from, long to) {\n- for (long i = from; i < to; ++i) {\n- set(i, random.nextLong());\n- }\n+ fill(from, to, random.nextLong());\n }\n \n @Override\n@@ -428,9 +422,7 @@ protected BigArray getDelegate() {\n \n @Override\n protected void randomizeContent(long from, long to) {\n- for (long i = from; i < to; ++i) {\n- set(i, (random.nextFloat() - 0.5f) * 1000);\n- }\n+ fill(from, to, (random.nextFloat() - 0.5f) * 1000);\n }\n \n @Override\n@@ -471,9 +463,7 @@ protected BigArray getDelegate() {\n \n @Override\n protected void randomizeContent(long from, long to) {\n- for (long i = from; i < to; ++i) {\n- set(i, (random.nextDouble() - 0.5) * 1000);\n- }\n+ fill(from, to, (random.nextDouble() - 0.5) * 1000);\n }\n \n @Override",
"filename": "src/test/java/org/elasticsearch/test/cache/recycler/MockBigArrays.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.threadpool.ThreadPool;\n \n import java.lang.reflect.Array;\n+import java.util.Arrays;\n import java.util.Map;\n import java.util.Random;\n import java.util.concurrent.ConcurrentMap;\n@@ -85,11 +86,21 @@ public boolean release() throws ElasticsearchException {\n throw new IllegalStateException(\"Releasing a page that has not been acquired\");\n }\n final T ref = v();\n- for (int i = 0; i < Array.getLength(ref); ++i) {\n- if (ref instanceof Object[]) {\n- Array.set(ref, i, null);\n- } else {\n- Array.set(ref, i, (byte) random.nextInt(256));\n+ if (ref instanceof Object[]) {\n+ Arrays.fill((Object[])ref, 0, Array.getLength(ref), null);\n+ } else if (ref instanceof byte[]) {\n+ Arrays.fill((byte[])ref, 0, Array.getLength(ref), (byte) random.nextInt(256));\n+ } else if (ref instanceof long[]) {\n+ Arrays.fill((long[])ref, 0, Array.getLength(ref), random.nextLong());\n+ } else if (ref instanceof int[]) {\n+ Arrays.fill((int[])ref, 0, Array.getLength(ref), random.nextInt());\n+ } else if (ref instanceof double[]) {\n+ Arrays.fill((double[])ref, 0, Array.getLength(ref), random.nextDouble() - 0.5);\n+ } else if (ref instanceof float[]) {\n+ Arrays.fill((float[])ref, 0, Array.getLength(ref), random.nextFloat() - 0.5f);\n+ } else {\n+ for (int i = 0; i < Array.getLength(ref); ++i) {\n+ Array.set(ref, i, (byte) random.nextInt(256));\n }\n }\n return v.release();\n@@ -112,7 +123,7 @@ public boolean isRecycled() {\n public V<byte[]> bytePage(boolean clear) {\n final V<byte[]> page = super.bytePage(clear);\n if (!clear) {\n- random.nextBytes(page.v());\n+ Arrays.fill(page.v(), 0, page.v().length, (byte)random.nextInt(1<<8));\n }\n return wrap(page);\n }\n@@ -121,9 +132,7 @@ public V<byte[]> bytePage(boolean clear) {\n public V<int[]> intPage(boolean clear) {\n final V<int[]> page = super.intPage(clear);\n if (!clear) {\n- for (int i = 0; i < page.v().length; ++i) {\n- page.v()[i] = random.nextInt();\n- }\n+ Arrays.fill(page.v(), 0, page.v().length, random.nextInt());\n }\n return wrap(page);\n }\n@@ -132,9 +141,7 @@ public V<int[]> intPage(boolean clear) {\n public V<long[]> longPage(boolean clear) {\n final V<long[]> page = super.longPage(clear);\n if (!clear) {\n- for (int i = 0; i < page.v().length; ++i) {\n- page.v()[i] = random.nextLong();\n- }\n+ Arrays.fill(page.v(), 0, page.v().length, random.nextLong());\n }\n return wrap(page);\n }\n@@ -143,9 +150,7 @@ public V<long[]> longPage(boolean clear) {\n public V<double[]> doublePage(boolean clear) {\n final V<double[]> page = super.doublePage(clear);\n if (!clear) {\n- for (int i = 0; i < page.v().length; ++i) {\n- page.v()[i] = random.nextDouble() - 0.5;\n- }\n+ Arrays.fill(page.v(), 0, page.v().length, random.nextDouble() - 0.5);\n }\n return wrap(page);\n }",
"filename": "src/test/java/org/elasticsearch/test/cache/recycler/MockPageCacheRecycler.java",
"status": "modified"
},
{
"diff": "@@ -32,6 +32,7 @@\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequestBuilder;\n import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;\n+import org.elasticsearch.action.bulk.BulkResponse;\n import org.elasticsearch.action.count.CountResponse;\n import org.elasticsearch.action.get.GetResponse;\n import org.elasticsearch.action.percolate.PercolateResponse;\n@@ -202,6 +203,12 @@ public static void assertFailures(SearchResponse searchResponse) {\n assertVersionSerializable(searchResponse);\n }\n \n+ public static void assertNoFailures(BulkResponse response) {\n+ assertThat(\"Unexpected ShardFailures: \" + response.buildFailureMessage(),\n+ response.hasFailures(), is(false));\n+ assertVersionSerializable(response);\n+ }\n+\n public static void assertFailures(SearchRequestBuilder searchRequestBuilder, RestStatus restStatus, Matcher<String> reasonMatcher) {\n //when the number for shards is randomized and we expect failures\n //we can either run into partial or total failures depending on the current number of shards\n@@ -513,4 +520,5 @@ public boolean apply(Object input) {\n MockDirectoryHelper.wrappers.clear();\n }\n }\n+\n }",
"filename": "src/test/java/org/elasticsearch/test/hamcrest/ElasticsearchAssertions.java",
"status": "modified"
}
]
} |
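For the merge record above, here is a small observational sketch of how merge progress can be watched from a client, modelled on the stats calls the new `InternalEngineMergeTests` uses; the `client` handle and the index name `test` are assumptions, the accessors follow the test.

```java
import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;
import org.elasticsearch.client.Client;

public class MergeActivityCheck {
    // "client" and the index name "test" are assumptions; the stats calls are the
    // same ones the new InternalEngineMergeTests uses to watch merge progress.
    public static void logMergeActivity(Client client) {
        IndicesStatsResponse stats = client.admin().indices()
                .prepareStats("test")
                .setSegments(true) // per-shard segment counts
                .setMerge(true)    // total and currently running merges
                .get();
        long segments = stats.getPrimaries().getSegments().getCount();
        long totalMerges = stats.getPrimaries().getMerge().getTotal();
        long currentMerges = stats.getPrimaries().getMerge().getCurrent();
        System.out.println("segments=" + segments
                + ", total merges=" + totalMerges
                + ", running merges=" + currentMerges);
    }
}
```

Before the fix, the segment count kept growing under bulk indexing because flush-triggered merges were suppressed; after it, the total merge count should climb and the segment count should stay bounded, which is exactly what the test asserts.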
{
"body": "When the `_recovery` API is executed against a cluster with no indices, an empty HTTP body is returned, causing errors in clients expecting a valid JSON response:\n\n```\n$ elasticsearch --cluster.name=recovery_response_error --http.port=9280\n# ...\n$ curl -i localhost:9280/_recovery\nHTTP/1.1 200 OK\nContent-Type: application/json; charset=UTF-8\nContent-Length: 0\n```\n\nAn empty JSON object (`{}`) is expected here.\n",
"comments": [],
"number": 5743,
"title": "Empty HTTP body returned from _recovery API on empty cluster"
} | {
"body": "This is a fix to send back to the client a valid empty JSON response in\nthe case when we have no recovery information.\n\nCloses #5743\n",
"number": 5756,
"review_comments": [],
"title": "Return valid empty JSON response when no recovery information"
} | {
"commits": [
{
"message": "Return valid empty JSON response when no recovery information\n\nThis is a fix to send back to the client a valid empty JSON response in\nthe case when we have no recovery information.\n\nCloses #5743"
}
],
"files": [
{
"diff": "@@ -89,22 +89,23 @@ public Map<String, List<ShardRecoveryResponse>> shardResponses() {\n \n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n-\n- for (String index : shardResponses.keySet()) {\n- List<ShardRecoveryResponse> responses = shardResponses.get(index);\n- if (responses == null || responses.size() == 0) {\n- continue;\n- }\n- builder.startObject(index);\n- builder.startArray(\"shards\");\n- for (ShardRecoveryResponse recoveryResponse : responses) {\n- builder.startObject();\n- recoveryResponse.detailed(this.detailed);\n- recoveryResponse.toXContent(builder, params);\n+ if (hasRecoveries()) {\n+ for (String index : shardResponses.keySet()) {\n+ List<ShardRecoveryResponse> responses = shardResponses.get(index);\n+ if (responses == null || responses.size() == 0) {\n+ continue;\n+ }\n+ builder.startObject(index);\n+ builder.startArray(\"shards\");\n+ for (ShardRecoveryResponse recoveryResponse : responses) {\n+ builder.startObject();\n+ recoveryResponse.detailed(this.detailed);\n+ recoveryResponse.toXContent(builder, params);\n+ builder.endObject();\n+ }\n+ builder.endArray();\n builder.endObject();\n }\n- builder.endArray();\n- builder.endObject();\n }\n return builder;\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/recovery/RecoveryResponse.java",
"status": "modified"
},
{
"diff": "@@ -57,12 +57,10 @@ public void handleRequest(final RestRequest request, final RestChannel channel)\n client.admin().indices().recoveries(recoveryRequest, new RestBuilderListener<RecoveryResponse>(channel) {\n @Override\n public RestResponse buildResponse(RecoveryResponse response, XContentBuilder builder) throws Exception {\n- if (response.hasRecoveries()) {\n- response.detailed(recoveryRequest.detailed());\n- builder.startObject();\n- response.toXContent(builder, request);\n- builder.endObject();\n- }\n+ response.detailed(recoveryRequest.detailed());\n+ builder.startObject();\n+ response.toXContent(builder, request);\n+ builder.endObject();\n return new BytesRestResponse(OK, builder);\n }\n });",
"filename": "src/main/java/org/elasticsearch/rest/action/admin/indices/recovery/RestRecoveryAction.java",
"status": "modified"
}
]
} |
{
"body": "```\nDELETE http://127.0.0.1:9200/_search/scroll/asdasdadasdasd HTTP/1.1\n```\n\nReturns the following response\n\n```\nHTTP/1.1 500 Internal Server Error\nContent-Type: application/json; charset=UTF-8\nContent-Length: 73\n\n{\n \"error\" : \"ArrayIndexOutOfBoundsException[1]\",\n \"status\" : 500\n}\n```\n\nWhile a 404 is expected. \n\nIt would also be nice if we can allow the scroll id to be posted. I've had people hit problems with scroll ids that are too big in the past:\n\nhttps://github.com/elasticsearch/elasticsearch-net/issues/318\n",
"comments": [
{
"body": "@Mpdreamz Actually that specific scroll id is malformed and that is where the ArrayIndexOutOfBoundsException comes from, so I think a 400 should be returned?\n\nIf a non existent scroll_id is used, it will just return and act like everything is fine. I agree a 404 would be nice.\n",
"created_at": "2014-04-08T14:31:56Z"
},
{
"body": "++404\n",
"created_at": "2014-04-08T16:25:19Z"
},
{
"body": "++404 and +1 on implementing #5726 @martijnvg ! \n",
"created_at": "2014-04-08T17:31:49Z"
},
{
"body": "PR #5738 only addresses invalid scroll ids. Returning a 404 for a valid, but non existing scroll id requires more work than just validation. The clear scoll api uses an internal free search context api, which for example the search api relies on. This internal api just always returns an empty response. I can change that, so that it includes whether it actually has removed a search context, but that requires a change in the transport layer, so I like to do separate that in a different PR.\n",
"created_at": "2014-04-09T11:29:03Z"
},
{
"body": "LGTM\n",
"created_at": "2014-04-14T20:47:59Z"
},
{
"body": "@martijnvg can you assign the fix version here please\n",
"created_at": "2014-04-14T20:48:38Z"
},
{
"body": "@s1monw PR #5738 only handles invalid scroll ids, but this issue is also about returning a 404 when a valid scroll id doesn't exist. I will assign the proper versions in PR and leave this issue open, once the missing scroll id has been addressed this issue can be closed.\n",
"created_at": "2014-04-16T02:34:38Z"
}
],
"number": 5730,
"title": "clear scroll throws 500 on array out of bounds exception"
} | {
"body": "PR for #5730 but only dealing with invalid scroll_ids and not non existing ones.\n",
"number": 5738,
"review_comments": [
{
"body": "I don't think we should catch assertions errors anywhere in our code..., what happens when they are disabled? \n",
"created_at": "2014-04-09T09:18:38Z"
},
{
"body": "agreed we should not catch those\n",
"created_at": "2014-04-09T10:24:31Z"
},
{
"body": "I don't like it as well... But a test (SearchScrollTests line 325) makes an assertion fail in UnicodeUtil#UTF8toUTF16 and this was my attempt to make it fail consistently with the other check I added in this method. \n\nI can catch AssertionError in the test instead? Also the when the scroll id in test is used outside the test framework it will fail on the check I below here.\n",
"created_at": "2014-04-09T10:42:50Z"
}
],
"title": "Throw better error if invalid scroll id is used"
} | {
"commits": [
{
"message": "Better deal with invalid scroll ids"
},
{
"message": "Removed AssertionError check and check for AssertionError in the test"
}
],
"files": [
{
"diff": "@@ -96,13 +96,21 @@ public static ParsedScrollId parseScrollId(String scrollId) {\n try {\n byte[] decode = Base64.decode(scrollId, Base64.URL_SAFE);\n UnicodeUtil.UTF8toUTF16(decode, 0, decode.length, spare);\n- } catch (IOException e) {\n+ } catch (Exception e) {\n throw new ElasticsearchIllegalArgumentException(\"Failed to decode scrollId\", e);\n }\n String[] elements = Strings.splitStringToArray(spare, ';');\n+ if (elements.length < 2) {\n+ throw new ElasticsearchIllegalArgumentException(\"Malformed scrollId [\" + scrollId + \"]\");\n+ }\n+\n int index = 0;\n String type = elements[index++];\n int contextSize = Integer.parseInt(elements[index++]);\n+ if (elements.length < contextSize + 2) {\n+ throw new ElasticsearchIllegalArgumentException(\"Malformed scrollId [\" + scrollId + \"]\");\n+ }\n+\n @SuppressWarnings({\"unchecked\"}) Tuple<String, Long>[] context = new Tuple[contextSize];\n for (int i = 0; i < contextSize; i++) {\n String element = elements[index++];",
"filename": "src/main/java/org/elasticsearch/action/search/type/TransportSearchHelper.java",
"status": "modified"
},
{
"diff": "@@ -19,13 +19,15 @@\n \n package org.elasticsearch.search.scroll;\n \n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.search.ClearScrollResponse;\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.unit.TimeValue;\n+import org.elasticsearch.common.util.concurrent.UncategorizedExecutionException;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.search.sort.SortOrder;\n@@ -37,7 +39,7 @@\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.*;\n-import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.*;\n \n /**\n *\n@@ -300,6 +302,40 @@ public void testSimpleScrollQueryThenFetch_clearScrollIds() throws Exception {\n assertThat(searchResponse2.getHits().hits().length, equalTo(0));\n }\n \n+ @Test\n+ public void testClearNonExistentScrollId() throws Exception {\n+ createIndex(\"idx\");\n+ ClearScrollResponse response = client().prepareClearScroll()\n+ .addScrollId(\"cXVlcnlUaGVuRmV0Y2g7MzsyOlpBRC1qOUhrUjhhZ0NtQWUxU2FuWlE7MjpRcjRaNEJ2R1JZV1VEMW02ZGF1LW5ROzI6S0xUal9lZDRTd3lWNUhUU2VSb01CQTswOw==\")\n+ .get();\n+ // Whether we actually clear a scroll, we can't know, since that information isn't serialized in the\n+ // free search context response, which is returned from each node we want to clear a particular scroll.\n+ assertThat(response.isSucceeded(), is(true));\n+ }\n+\n+ @Test\n+ public void testClearIllegalScrollId() throws Exception {\n+ createIndex(\"idx\");\n+ try {\n+ client().prepareClearScroll().addScrollId(\"c2Nhbjs2OzM0NDg1ODpzRlBLc0FXNlNyNm5JWUc1\").get();\n+ fail();\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ }\n+ try {\n+ // Fails during base64 decoding (Base64-encoded string must have at least four characters)\n+ client().prepareClearScroll().addScrollId(\"a\").get();\n+ fail();\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ }\n+ try {\n+ client().prepareClearScroll().addScrollId(\"abcabc\").get();\n+ fail();\n+ // if running without -ea this will also throw ElasticsearchIllegalArgumentException\n+ } catch (UncategorizedExecutionException e) {\n+ assertThat(e.getRootCause(), instanceOf(AssertionError.class));\n+ }\n+ }\n+\n @Test\n public void testSimpleScrollQueryThenFetch_clearAllScrollIds() throws Exception {\n client().admin().indices().prepareCreate(\"test\").setSettings(ImmutableSettings.settingsBuilder().put(\"index.number_of_shards\", 3)).execute().actionGet();",
"filename": "src/test/java/org/elasticsearch/search/scroll/SearchScrollTests.java",
"status": "modified"
}
]
} |
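As an illustration of the behavioral change in the record above (a sketch, not code from the PR): after the fix, a malformed scroll id surfaces as an `ElasticsearchIllegalArgumentException` instead of an opaque `ArrayIndexOutOfBoundsException`, so callers can handle it explicitly. The `client` handle and the scroll id value are assumptions; the builder calls and the exception type appear in the tests above.

```java
import org.elasticsearch.ElasticsearchIllegalArgumentException;
import org.elasticsearch.action.search.ClearScrollResponse;
import org.elasticsearch.client.Client;

public class ClearScrollSafely {
    // "client" and scrollId are assumptions; prepareClearScroll(), addScrollId()
    // and isSucceeded() are the calls exercised by SearchScrollTests above.
    public static boolean tryClearScroll(Client client, String scrollId) {
        try {
            ClearScrollResponse response = client.prepareClearScroll()
                    .addScrollId(scrollId)
                    .get();
            return response.isSucceeded();
        } catch (ElasticsearchIllegalArgumentException e) {
            // Raised for ids that fail base64 decoding or that advertise more
            // search contexts than the decoded string actually contains.
            return false;
        }
    }
}
```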
{
"body": "In https://github.com/elasticsearch/elasticsearch-perl/issues/24 a user was generating scroll IDs which were too long for the intervening proxy to handle in the URL or query string. The same problem would apply to the clear-scroll API.\n\nThe clear-scroll API should optionally accept the scroll_id in the body as well as in the URL\n",
"comments": [],
"number": 5726,
"title": "Clear scroll should accept scroll_id in body"
} | {
"body": "PR ofr #5726\n",
"number": 5734,
"review_comments": [],
"title": "The clear scroll apis should optionally accepts a scroll_id in the request body."
} | {
"commits": [
{
"message": "The clear scroll apis now optionally accepts a scroll_id in it body.\n\nCloses #5726"
}
],
"files": [
{
"diff": "@@ -14,6 +14,8 @@\n },\n \"params\": {}\n },\n- \"body\": null\n+ \"body\": {\n+ \"description\": \"A comma-separated list of scroll IDs to clear if none was specified via the scroll_id parameter\"\n+ }\n }\n }",
"filename": "rest-api-spec/api/clear_scroll.json",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,35 @@\n+---\n+\"Clear scroll\":\n+ - do:\n+ indices.create:\n+ index: test_scroll\n+ - do:\n+ index:\n+ index: test_scroll\n+ type: test\n+ id: 42\n+ body: { foo: bar }\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ search:\n+ index: test_scroll\n+ search_type: scan\n+ scroll: 1m\n+ body:\n+ query:\n+ match_all: {}\n+\n+ - set: {_scroll_id: scroll_id1}\n+\n+ - do:\n+ clear_scroll:\n+ scroll_id: $scroll_id1\n+\n+ - do:\n+ scroll:\n+ scroll_id: $scroll_id1\n+\n+ - length: {hits.hits: 0}",
"filename": "rest-api-spec/test/scroll/11_clear.yaml",
"status": "added"
},
{
"diff": "@@ -27,6 +27,7 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.rest.*;\n+import org.elasticsearch.rest.action.support.RestActions;\n import org.elasticsearch.rest.action.support.RestBuilderListener;\n \n import java.util.Arrays;\n@@ -49,6 +50,9 @@ public RestClearScrollAction(Settings settings, Client client, RestController co\n @Override\n public void handleRequest(final RestRequest request, final RestChannel channel) {\n String scrollIds = request.param(\"scroll_id\");\n+ if (scrollIds == null) {\n+ scrollIds = RestActions.getRestContent(request).toUtf8();\n+ }\n \n ClearScrollRequest clearRequest = new ClearScrollRequest();\n clearRequest.setScrollIds(Arrays.asList(splitScrollIds(scrollIds)));",
"filename": "src/main/java/org/elasticsearch/rest/action/search/RestClearScrollAction.java",
"status": "modified"
}
]
} |
{
"body": "After upgrading to 1.1.0, I found that my backend processes that perform bulk inserts/updates to ElasticSearch started failing with the following error:\n\nMapperParsingException[failed to parse]; nested: ElasticsearchParseException[geo_point expected]; \n\nIt seems that geo_point fields now require a non-null value for every document? Is there a way to bypass this behavior without having to change my ETL process to create fake geo_point coordinates?\n",
"comments": [
{
"body": "Confirmed. I hit the same issue yesterday while moving some code from 1.0 to 1.1.\n",
"created_at": "2014-04-04T06:35:35Z"
},
{
"body": "Pushed in 1.1, 1.x (1.2) and master branches.\n1.0.2 is not affected by this issue.\n",
"created_at": "2014-04-04T09:54:06Z"
},
{
"body": "I had the same problem migrating from 1.0.1 to 1.1.0, but I noticed that you can have missing values, if you completely omit the geo_point field instead of having something like \"field:{}\" or \"field:null\". See this gist: https://gist.github.com/hkorte/9936192\n",
"created_at": "2014-04-04T14:02:52Z"
},
{
"body": "You are probably right, as we do not include null value fields in our ETL process. However, our bulk updates do fail when the geo_point fields are left out of the bulk request.\n",
"created_at": "2014-04-04T15:41:05Z"
},
{
"body": "I just hit the same issue. Today I upgraded to 1.1. Should I wait for the next release for this fix? @dadoonet I got the 1.1 dependency today but this change is not there, i don't see it in the source code.\n",
"created_at": "2014-04-05T14:06:58Z"
},
{
"body": "It will be in 1.1.1 which is not released yet. If you can't modify your injection, I guess you should wait for 1.1.1.\n",
"created_at": "2014-04-05T14:33:00Z"
},
{
"body": "+1 on the issue.\nWe've been able to change our code to work around the issue, but looking forward to having it fixed.\n",
"created_at": "2014-04-07T23:00:03Z"
},
{
"body": "This still seems to be an issue in ES `1.7.1`\n",
"created_at": "2015-09-24T15:02:36Z"
},
{
"body": "@Tonkpils this works just fine in 1.7.1:\n\n```\nPUT t\n{\n \"mappings\": {\n \"t\": {\n \"properties\": {\n \"loc\": {\n \"type\": \"geo_point\"\n }\n }\n }\n }\n}\n\nPUT t/t/1\n{\n \"loc\": null\n}\n```\n",
"created_at": "2015-09-25T12:24:28Z"
},
{
"body": "You're right, it was an issue with the incoming parameters where Lon and lat were null. Apologies. \n",
"created_at": "2015-09-25T15:16:48Z"
}
],
"number": 5680,
"title": "geo_point doesn't allow null values as of 1.1.0"
} | {
"body": "closes #5680\n",
"number": 5681,
"review_comments": [],
"title": "Fix geo_point accepting null values"
} | {
"commits": [
{
"message": "fix geo_point null value"
}
],
"files": [
{
"diff": "@@ -525,7 +525,7 @@ public void parse(ParseContext context) throws IOException {\n }\n } else if (token == XContentParser.Token.VALUE_STRING) {\n parsePointFromString(context, sparse, context.parser().text());\n- } else {\n+ } else if (token != XContentParser.Token.VALUE_NULL) {\n parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse), null);\n }\n }",
"filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -117,4 +117,21 @@ public void testGeoHashPrecisionAsLength() throws Exception {\n GeoPointFieldMapper geoPointFieldMapper = (GeoPointFieldMapper) mapper;\n assertThat(geoPointFieldMapper.geoHashPrecision(), is(10));\n }\n+\n+ @Test\n+ public void testNullValue() throws Exception {\n+ String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"properties\").startObject(\"point\").field(\"type\", \"geo_point\").endObject().endObject()\n+ .endObject().endObject().string();\n+\n+ DocumentMapper defaultMapper = MapperTestUtils.newParser().parse(mapping);\n+\n+ ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ .startObject()\n+ .field(\"point\", (Object) null)\n+ .endObject()\n+ .bytes());\n+\n+ assertThat(doc.rootDoc().get(\"point\"), nullValue());\n+ }\n }",
"filename": "src/test/java/org/elasticsearch/index/mapper/geo/GeohashMappingGeoPointTests.java",
"status": "modified"
}
]
} |
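A short sketch of the case the geo_point record above fixes: indexing a document whose mapped `geo_point` field is explicitly `null`. The index, type and id names are assumptions; the null-valued field and the builder calls mirror the new mapper test.

```java
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

public class NullGeoPointIndexing {
    // Index, type and id are assumptions; the null-valued "point" field is the
    // exact case the new GeohashMappingGeoPointTests case exercises.
    public static IndexResponse indexDocWithNullPoint(Client client) throws Exception {
        return client.prepareIndex("places", "type1", "1")
                .setSource(jsonBuilder()
                        .startObject()
                        .field("point", (Object) null) // skipped by the parser after this fix
                        .endObject())
                .get();
    }
}
```

Before the VALUE_NULL check, this request failed with "geo_point expected"; note that omitting the field entirely always worked, as pointed out in the issue comments above.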
{
"body": "I rebased #3278 to latest master and one of my benchmarks is throwing a NPE. Traced it down to ref.bytes being null for the array copy. See [PagedBytesReference.java#L448](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/common/bytes/PagedBytesReference.java#L448).\n\nNot sure why, but this is the chunk of code that triggers this in my PR is:\n\n[TermsFilterParser.java#L135](https://github.com/mattweber/elasticsearch/blob/terms_lookup_by_query/src/main/java/org/elasticsearch/index/query/TermsFilterParser.java#L135)\n\n```\n if (\"filter\".equals(currentFieldName)) {\n lookupFilter = XContentFactory.contentBuilder(parser.contentType());\n lookupFilter.copyCurrentStructure(parser);\n}\n```\n\nI can push up the rebased PR that is failing if you need it to test. Let me know.\n\n/cc @s1monw @hhoffstaette \n",
"comments": [
{
"body": "a test for this would be awesome!\n",
"created_at": "2014-04-02T21:39:51Z"
},
{
"body": "Let me see if I can get something specific. #3278 is pretty complex.\n",
"created_at": "2014-04-02T21:41:18Z"
},
{
"body": "Ironically, trying to write a test for this triggered another. \n\nDrop this into [XContentBuilderTests.java](https://github.com/elasticsearch/elasticsearch/blob/master/src/test/java/org/elasticsearch/common/xcontent/builder/XContentBuilderTests.java)\n\n```\n @Test\n public void testCopyCurrentStructure() throws Exception {\n XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);\n builder.startObject()\n .field(\"test\", \"test field\")\n .startObject(\"filter\")\n .startObject(\"terms\");\n\n // up to 20k random terms\n int numTerms = randomInt(20000) + 1;\n List<String> terms = new ArrayList<>(numTerms);\n for (int i = 0; i < numTerms; i++) {\n terms.add(\"test\" + i);\n }\n\n builder.field(\"fakefield\", terms).endObject().endObject().endObject();\n\n XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(builder.bytes());\n\n XContentBuilder filterBuilder = null;\n XContentParser.Token token;\n String currentFieldName = null;\n assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT));\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token.isValue()) {\n if (\"test\".equals(currentFieldName)) {\n assertThat(parser.text(), equalTo(\"test field\"));\n }\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"filter\".equals(currentFieldName)) {\n filterBuilder = XContentFactory.contentBuilder(parser.contentType());\n filterBuilder.copyCurrentStructure(parser);\n }\n }\n }\n\n assertNotNull(filterBuilder);\n parser = XContentFactory.xContent(XContentType.JSON).createParser(filterBuilder.bytes());\n assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT));\n assertThat(parser.nextToken(), equalTo(XContentParser.Token.FIELD_NAME));\n assertThat(parser.currentName(), equalTo(\"terms\"));\n assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT));\n assertThat(parser.nextToken(), equalTo(XContentParser.Token.FIELD_NAME));\n assertThat(parser.currentName(), equalTo(\"fakefield\"));\n assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_ARRAY));\n int i = 0;\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n assertThat(parser.text(), equalTo(terms.get(i++)));\n }\n\n assertThat(i, equalTo(terms.size()));\n }\n```\n\nRun test with:\n\n```\nmvn test -Dtests.seed=F330AACFAFD76C8B -Dtests.class=org.elasticsearch.common.xcontent.builder.XContentBuilderTests -Dtests.method=testCopyCurrentStructure -Dtests.prefix=tests -Dfile.encoding=UTF-8 -Duser.timezone=America/Los_Angeles\n```\n\nI am wondering if my original issue is something to do with threading. Multiple shards on the same node all parsing the terms filter and calling copyCurrentStructure. Not sure, still trying to reproduce in an actual test.\n",
"created_at": "2014-04-02T23:05:41Z"
},
{
"body": "Actually this test is for the error I reported, tests run with -ea enabled which triggers the assertion. In my benchmark I didn't have -ea enabled so it hits the NPE. Running my benchmark with -ea triggers this same assertion.\n",
"created_at": "2014-04-03T04:24:27Z"
},
{
"body": "Need a PR for the test or is the code above enough?\n",
"created_at": "2014-04-03T04:25:33Z"
},
{
"body": "It is not clear to me which assertion you are talking about in https://github.com/elasticsearch/elasticsearch/issues/5667#issuecomment-39411624 - are you referring to the assert right before the arraycopy or one in your tests? Regardless, it's not surprising that this would fail when the same PBR and/or stream is used & mutated by multiple threads. Hitting the NPE has btw nothing to do with the assertion, and would just be a side effect of corruption: the condition for the assert still implies that ref.bytes is not null and valid.\n",
"created_at": "2014-04-03T09:18:38Z"
},
{
"body": "Awesome @s1monw! I will give this a try shortly!\n",
"created_at": "2014-04-03T16:11:11Z"
}
],
"number": 5667,
"title": "NPE in PagedBytesReference"
} | {
"body": "Currently `PagedBytesReferenceStreamInput#read(byte[],int,int)` ignores\nthe current pos and that causes to read past EOF\n\nCloses #5667\n",
"number": 5677,
"review_comments": [],
"title": "Take stream position into account when calculating remaining length"
} | {
"commits": [
{
"message": "Take stream position into account when calculating remaining length\n\nCurrently `PagedBytesReferenceStreamInput#read(byte[],int,int)` ignores\nthe current pos and that causes to read past EOF\n\nCloses #5667"
}
],
"files": [
{
"diff": "@@ -431,27 +431,26 @@ public int read(final byte[] b, final int bOffset, final int len) throws IOExcep\n return -1;\n }\n \n- // we need to stop at the end\n- int todo = Math.min(len, length);\n+ final int numBytesToCopy = Math.min(len, length - pos); // copy the full lenth or the remaining part\n \n // current offset into the underlying ByteArray\n- long bytearrayOffset = offset + pos;\n+ long byteArrayOffset = offset + pos;\n \n // bytes already copied\n- int written = 0;\n+ int copiedBytes = 0;\n \n- while (written < todo) {\n- long pagefragment = PAGE_SIZE - (bytearrayOffset % PAGE_SIZE); // how much can we read until hitting N*PAGE_SIZE?\n- int bulksize = (int)Math.min(pagefragment, todo - written); // we cannot copy more than a page fragment\n- boolean copied = bytearray.get(bytearrayOffset, bulksize, ref); // get the fragment\n+ while (copiedBytes < numBytesToCopy) {\n+ long pageFragment = PAGE_SIZE - (byteArrayOffset % PAGE_SIZE); // how much can we read until hitting N*PAGE_SIZE?\n+ int bulkSize = (int)Math.min(pageFragment, numBytesToCopy - copiedBytes); // we cannot copy more than a page fragment\n+ boolean copied = bytearray.get(byteArrayOffset, bulkSize, ref); // get the fragment\n assert (copied == false); // we should never ever get back a materialized byte[]\n- System.arraycopy(ref.bytes, ref.offset, b, bOffset + written, bulksize); // copy fragment contents\n- written += bulksize; // count how much we copied\n- bytearrayOffset += bulksize; // advance ByteArray index\n+ System.arraycopy(ref.bytes, ref.offset, b, bOffset + copiedBytes, bulkSize); // copy fragment contents\n+ copiedBytes += bulkSize; // count how much we copied\n+ byteArrayOffset += bulkSize; // advance ByteArray index\n }\n \n- pos += written; // finally advance our stream position\n- return written;\n+ pos += copiedBytes; // finally advance our stream position\n+ return copiedBytes;\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/common/bytes/PagedBytesReference.java",
"status": "modified"
},
{
"diff": "@@ -178,6 +178,34 @@ public void testStreamInputBulkReadWithOffset() throws IOException {\n assertArrayEquals(pbrBytesWithOffset, targetBytes);\n }\n \n+ public void testRandomReads() throws IOException {\n+ int length = randomIntBetween(10, scaledRandomIntBetween(PAGE_SIZE * 2, PAGE_SIZE * 20));\n+ BytesReference pbr = getRandomizedPagedBytesReference(length);\n+ StreamInput streamInput = pbr.streamInput();\n+ BytesRef target = new BytesRef();\n+ while(target.length < pbr.length()) {\n+ switch (randomIntBetween(0, 10)) {\n+ case 6:\n+ case 5:\n+ target.append(new BytesRef(new byte[] {streamInput.readByte()}));\n+ break;\n+ case 4:\n+ case 3:\n+ BytesRef bytesRef = streamInput.readBytesRef(scaledRandomIntBetween(1, pbr.length() - target.length));\n+ target.append(bytesRef);\n+ break;\n+ default:\n+ byte[] buffer = new byte[scaledRandomIntBetween(1, pbr.length() - target.length)];\n+ int offset = scaledRandomIntBetween(0, buffer.length - 1);\n+ int read = streamInput.read(buffer, offset, buffer.length - offset);\n+ target.append(new BytesRef(buffer, offset, read));\n+ break;\n+ }\n+ }\n+ assertEquals(pbr.length(), target.length);\n+ assertArrayEquals(pbr.toBytes(), Arrays.copyOfRange(target.bytes, target.offset, target.length));\n+ }\n+\n public void testSliceStreamInput() throws IOException {\n int length = randomIntBetween(10, scaledRandomIntBetween(PAGE_SIZE * 2, PAGE_SIZE * 20));\n BytesReference pbr = getRandomizedPagedBytesReference(length);\n@@ -208,6 +236,13 @@ public void testSliceStreamInput() throws IOException {\n byte[] sliceToBytes = slice.toBytes();\n assertEquals(sliceBytes.length, sliceToBytes.length);\n assertArrayEquals(sliceBytes, sliceToBytes);\n+\n+ sliceInput.reset();\n+ byte[] buffer = new byte[sliceLength + scaledRandomIntBetween(1, 100)];\n+ int offset = scaledRandomIntBetween(0, Math.max(1, buffer.length - sliceLength - 1));\n+ int read = sliceInput.read(buffer, offset, sliceLength / 2);\n+ sliceInput.read(buffer, offset + read, sliceLength);\n+ assertArrayEquals(sliceBytes, Arrays.copyOfRange(buffer, offset, offset + sliceLength));\n }\n \n public void testWriteTo() throws IOException {",
"filename": "src/test/java/org/elasticsearch/common/bytes/PagedBytesReferenceTest.java",
"status": "modified"
},
{
"diff": "@@ -23,10 +23,7 @@\n import org.elasticsearch.common.bytes.BytesArray;\n import org.elasticsearch.common.io.FastCharArrayWriter;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentGenerator;\n-import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.common.xcontent.*;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n@@ -201,4 +198,59 @@ public void testDateTypesConversion() throws Exception {\n builder.map(map);\n assertThat(builder.string(), equalTo(\"{\\\"calendar\\\":\\\"\" + expectedCalendar + \"\\\"}\"));\n }\n+\n+ @Test\n+ public void testCopyCurrentStructure() throws Exception {\n+ XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);\n+ builder.startObject()\n+ .field(\"test\", \"test field\")\n+ .startObject(\"filter\")\n+ .startObject(\"terms\");\n+\n+ // up to 20k random terms\n+ int numTerms = randomInt(20000) + 1;\n+ List<String> terms = new ArrayList<>(numTerms);\n+ for (int i = 0; i < numTerms; i++) {\n+ terms.add(\"test\" + i);\n+ }\n+\n+ builder.field(\"fakefield\", terms).endObject().endObject().endObject();\n+\n+ XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(builder.bytes());\n+\n+ XContentBuilder filterBuilder = null;\n+ XContentParser.Token token;\n+ String currentFieldName = null;\n+ assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT));\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ currentFieldName = parser.currentName();\n+ } else if (token.isValue()) {\n+ if (\"test\".equals(currentFieldName)) {\n+ assertThat(parser.text(), equalTo(\"test field\"));\n+ }\n+ } else if (token == XContentParser.Token.START_OBJECT) {\n+ if (\"filter\".equals(currentFieldName)) {\n+ filterBuilder = XContentFactory.contentBuilder(parser.contentType());\n+ filterBuilder.copyCurrentStructure(parser);\n+ }\n+ }\n+ }\n+\n+ assertNotNull(filterBuilder);\n+ parser = XContentFactory.xContent(XContentType.JSON).createParser(filterBuilder.bytes());\n+ assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT));\n+ assertThat(parser.nextToken(), equalTo(XContentParser.Token.FIELD_NAME));\n+ assertThat(parser.currentName(), equalTo(\"terms\"));\n+ assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_OBJECT));\n+ assertThat(parser.nextToken(), equalTo(XContentParser.Token.FIELD_NAME));\n+ assertThat(parser.currentName(), equalTo(\"fakefield\"));\n+ assertThat(parser.nextToken(), equalTo(XContentParser.Token.START_ARRAY));\n+ int i = 0;\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n+ assertThat(parser.text(), equalTo(terms.get(i++)));\n+ }\n+\n+ assertThat(i, equalTo(terms.size()));\n+ }\n }\n\\ No newline at end of file",
"filename": "src/test/java/org/elasticsearch/common/xcontent/builder/XContentBuilderTests.java",
"status": "modified"
}
]
} |
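
The fix above boils down to a one-line change: compute the number of bytes still available as `length - pos` instead of `length`. The following standalone Java sketch (invented class and page layout, not the actual `PagedBytesReference` code) illustrates the corrected read loop and why a second `read()` call no longer runs past EOF.

```java
import java.util.Arrays;

/**
 * Standalone sketch of the corrected read loop: the remaining length is
 * computed from the current position, so a second read() never copies past
 * the logical end of the stream. Illustration only, not the real
 * Elasticsearch PagedBytesReference implementation.
 */
public class PagedReadSketch {
    static final int PAGE_SIZE = 16;   // tiny pages so reads cross page boundaries
    private final byte[][] pages;      // backing pages
    private final int length;          // logical stream length
    private int pos = 0;               // current stream position

    PagedReadSketch(byte[] data) {
        this.length = data.length;
        int numPages = (data.length + PAGE_SIZE - 1) / PAGE_SIZE;
        this.pages = new byte[numPages][PAGE_SIZE];
        for (int i = 0; i < data.length; i++) {
            pages[i / PAGE_SIZE][i % PAGE_SIZE] = data[i];
        }
    }

    public int read(byte[] b, int bOffset, int len) {
        if (pos >= length) {
            return -1;
        }
        // the essential fix: stop at the end of the *remaining* bytes, not the total length
        int numBytesToCopy = Math.min(len, length - pos);
        int copiedBytes = 0;
        while (copiedBytes < numBytesToCopy) {
            int pageOffset = pos % PAGE_SIZE;
            int pageFragment = PAGE_SIZE - pageOffset;                    // bytes left on this page
            int bulkSize = Math.min(pageFragment, numBytesToCopy - copiedBytes);
            System.arraycopy(pages[pos / PAGE_SIZE], pageOffset, b, bOffset + copiedBytes, bulkSize);
            copiedBytes += bulkSize;
            pos += bulkSize;
        }
        return copiedBytes;
    }

    public static void main(String[] args) {
        byte[] data = new byte[40];
        for (int i = 0; i < data.length; i++) {
            data[i] = (byte) i;
        }
        PagedReadSketch in = new PagedReadSketch(data);
        byte[] buffer = new byte[64];
        int first = in.read(buffer, 0, 25);        // reads 25 bytes, 15 remain
        int second = in.read(buffer, first, 25);   // must return 15, not 25
        System.out.println("first=" + first + ", second=" + second);
        System.out.println("round trip ok: " + Arrays.equals(data, Arrays.copyOf(buffer, first + second)));
    }
}
```

Running the sketch prints `first=25, second=15` and confirms the round trip; with the old `Math.min(len, length)` bound the second call would attempt to copy 25 bytes and read past the logical end.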
{
"body": "This might not look like an issue since getting a single value on something that is empty makes no sense but it introduces an inconsistency between documents that have no value, depending on whether they are on a segment that has no value at all (which will use `ScriptDocValues.EMPTY`), or on a segment where at list one document has a value. In the latter case, `ScriptDocValues.Longs` (or `Doubles`) will be used and these classes implement `getValue` and return a default value (`0`) for documents with no value.\n\nSee https://github.com/elasticsearch/elasticsearch/pull/4684#issuecomment-39167864 for the original bug report.\n",
"comments": [
{
"body": "+1 to just remove the empty one!\n",
"created_at": "2014-04-02T08:55:15Z"
}
],
"number": 5646,
"title": "ScriptDocValues.EMPTY doesn't implement `getValue`"
} | {
"body": "Instead the default implementation is used, but on top of empty\n(Bytes|Long|Double|GeoPoint)Values. This makes sure there is no\ninconsistency between documents depending on whether other documents in the\nsegment have values or not.\n\nClose #5646\n",
"number": 5650,
"review_comments": [],
"title": "Remove ScriptDocValues.EMPTY."
} | {
"commits": [
{
"message": "Remove ScriptDocValues.EMPTY.\n\nInstead the default implementation is used, but on top of empty\n(Bytes|Long|Double|GeoPoint)Values. This makes sure there is no\ninconsistency between documents depending on whether other documents in the\nsegment have values or not.\n\nClose #5646"
}
],
"files": [
{
"diff": "@@ -29,7 +29,6 @@\n import org.joda.time.DateTimeZone;\n import org.joda.time.MutableDateTime;\n \n-import java.util.Collections;\n import java.util.List;\n \n /**\n@@ -38,7 +37,9 @@\n */\n public abstract class ScriptDocValues {\n \n- public static final ScriptDocValues EMPTY = new Empty();\n+ public static final Longs EMPTY_LONGS = new Longs(LongValues.EMPTY);\n+ public static final Doubles EMPTY_DOUBLES = new Doubles(DoubleValues.EMPTY);\n+ public static final GeoPoints EMPTY_GEOPOINTS = new GeoPoints(GeoPointValues.EMPTY);\n public static final Strings EMPTY_STRINGS = new Strings(BytesValues.EMPTY);\n protected int docId;\n protected boolean listLoaded = false;\n@@ -52,23 +53,6 @@ public void setNextDocId(int docId) {\n \n public abstract List<?> getValues();\n \n- public static class Empty extends ScriptDocValues {\n- @Override\n- public void setNextDocId(int docId) {\n- }\n-\n- @Override\n- public boolean isEmpty() {\n- return true;\n- }\n-\n- @Override\n- public List<?> getValues() {\n- return Collections.emptyList();\n- }\n-\n- }\n-\n public final static class Strings extends ScriptDocValues {\n \n private final BytesValues values;",
"filename": "src/main/java/org/elasticsearch/index/fielddata/ScriptDocValues.java",
"status": "modified"
},
{
"diff": "@@ -76,7 +76,7 @@ public GeoPointValues getGeoPointValues() {\n \n @Override\n public ScriptDocValues getScriptValues() {\n- return ScriptDocValues.EMPTY;\n+ return ScriptDocValues.EMPTY_GEOPOINTS;\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/fielddata/plain/AbstractGeoPointIndexFieldData.java",
"status": "modified"
},
{
"diff": "@@ -94,7 +94,7 @@ public BytesValues getBytesValues(boolean needsHashes) {\n \n @Override\n public ScriptDocValues getScriptValues() {\n- return ScriptDocValues.EMPTY;\n+ return ScriptDocValues.EMPTY_DOUBLES;\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/fielddata/plain/DoubleArrayAtomicFieldData.java",
"status": "modified"
},
{
"diff": "@@ -93,7 +93,7 @@ public BytesValues getBytesValues(boolean needsHashes) {\n \n @Override\n public ScriptDocValues getScriptValues() {\n- return ScriptDocValues.EMPTY;\n+ return ScriptDocValues.EMPTY_DOUBLES;\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/fielddata/plain/FloatArrayAtomicFieldData.java",
"status": "modified"
},
{
"diff": "@@ -95,7 +95,7 @@ public BytesValues getBytesValues(boolean needsHashes) {\n \n @Override\n public ScriptDocValues getScriptValues() {\n- return ScriptDocValues.EMPTY;\n+ return ScriptDocValues.EMPTY_LONGS;\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/index/fielddata/plain/PackedArrayAtomicFieldData.java",
"status": "modified"
}
]
} |
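
The change above replaces the dedicated `ScriptDocValues.Empty` class with the regular typed implementations wired to empty backing values. The sketch below (simplified, hypothetical types rather than the real Elasticsearch classes) shows why this removes the inconsistency: `getValue` on a document with no value now returns the same default `0` whether or not other documents in the segment have values.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/**
 * Simplified sketch (hypothetical types, not the real Elasticsearch classes):
 * the typed script values are reused on top of an empty backing source, so a
 * document without a value always gets the same default from getValue().
 */
public class EmptyDocValuesSketch {

    /** Minimal stand-in for a per-segment numeric values source. */
    interface LongSource {
        int setDocument(int docId);  // number of values the document has
        long nextValue();            // next value for the current document
    }

    /** Plays the role of LongValues.EMPTY: no document has any value. */
    static final LongSource EMPTY_LONG_SOURCE = new LongSource() {
        @Override public int setDocument(int docId) { return 0; }
        @Override public long nextValue() { throw new IllegalStateException("no values"); }
    };

    /** Typed script values; behaves identically on empty and non-empty sources. */
    static final class Longs {
        private final LongSource source;
        private int count;

        Longs(LongSource source) { this.source = source; }

        void setNextDocId(int docId) { count = source.setDocument(docId); }

        /** Documents without a value get the default 0, never an exception. */
        long getValue() { return count == 0 ? 0L : source.nextValue(); }

        List<Long> getValues() {
            if (count == 0) {
                return Collections.emptyList();
            }
            List<Long> values = new ArrayList<>(count);
            for (int i = 0; i < count; i++) {
                values.add(source.nextValue());
            }
            return values;
        }
    }

    public static void main(String[] args) {
        Longs emptyLongs = new Longs(EMPTY_LONG_SOURCE);  // analogous to EMPTY_LONGS
        emptyLongs.setNextDocId(0);
        System.out.println(emptyLongs.getValue());   // 0
        System.out.println(emptyLongs.getValues());  // []
    }
}
```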
{
"body": "```\n[2014-04-01 08:00:24,976][WARN ][tribe ] [Keith Kilham] failed to process [cluster event from t2, local-disco-receive(from master)]\njava.lang.NullPointerException\n at org.elasticsearch.tribe.TribeService$TribeClusterStateListener$1.execute(TribeService.java:315)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:309)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:134)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:724)\n```\n",
"comments": [],
"number": 5643,
"title": "TribeNode throws NPE if index doesn't exist"
} | {
"body": "Closes #5643\n",
"number": 5645,
"review_comments": [],
"title": "Fix possible NPE in TribeNodeService"
} | {
"commits": [
{
"message": "[TEST] use assertAcked when creating indices"
},
{
"message": "ignore index if it's not in the cluster state and it's in the drop indices set\n\nCloses #5643"
}
],
"files": [
{
"diff": "@@ -22,7 +22,6 @@\n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Lists;\n import com.google.common.collect.Maps;\n-import com.google.common.collect.Sets;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.action.support.master.TransportMasterNodeReadOperationAction;\n@@ -307,12 +306,15 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n if (table == null) {\n continue;\n }\n- if (!currentState.metaData().hasIndex(tribeIndex.index()) && !droppedIndices.contains(tribeIndex.index())) {\n- // a new index, add it, and add the tribe name as a setting\n- logger.info(\"[{}] adding index [{}]\", tribeName, tribeIndex.index());\n- addNewIndex(tribeState, blocks, metaData, routingTable, tribeIndex);\n+ final IndexMetaData indexMetaData = currentState.metaData().index(tribeIndex.index());\n+ if (indexMetaData == null) {\n+ if (!droppedIndices.contains(tribeIndex.index())) {\n+ // a new index, add it, and add the tribe name as a setting\n+ logger.info(\"[{}] adding index [{}]\", tribeName, tribeIndex.index());\n+ addNewIndex(tribeState, blocks, metaData, routingTable, tribeIndex);\n+ }\n } else {\n- String existingFromTribe = currentState.metaData().index(tribeIndex.index()).getSettings().get(TRIBE_NAME);\n+ String existingFromTribe = indexMetaData.getSettings().get(TRIBE_NAME);\n if (!tribeName.equals(existingFromTribe)) {\n // we have a potential conflict on index names, decide what to do...\n if (ON_CONFLICT_ANY.equals(onConflict)) {",
"filename": "src/main/java/org/elasticsearch/tribe/TribeService.java",
"status": "modified"
},
{
"diff": "@@ -39,6 +39,7 @@\n import org.junit.BeforeClass;\n import org.junit.Test;\n \n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n \n@@ -171,10 +172,10 @@ public void testIndexWriteBlocks() throws Exception {\n @Test\n public void testOnConflictDrop() throws Exception {\n logger.info(\"create 2 indices, test1 on t1, and test2 on t2\");\n- cluster().client().admin().indices().prepareCreate(\"conflict\").get();\n- cluster2.client().admin().indices().prepareCreate(\"conflict\").get();\n- cluster().client().admin().indices().prepareCreate(\"test1\").get();\n- cluster2.client().admin().indices().prepareCreate(\"test2\").get();\n+ assertAcked(cluster().client().admin().indices().prepareCreate(\"conflict\"));\n+ assertAcked(cluster2.client().admin().indices().prepareCreate(\"conflict\"));\n+ assertAcked(cluster().client().admin().indices().prepareCreate(\"test1\"));\n+ assertAcked(cluster2.client().admin().indices().prepareCreate(\"test2\"));\n \n setupTribeNode(ImmutableSettings.builder()\n .put(\"tribe.on_conflict\", \"drop\")",
"filename": "src/test/java/org/elasticsearch/tribe/TribeTests.java",
"status": "modified"
}
]
} |
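
The NPE fix above is essentially a null-safe lookup: the index metadata is fetched once, and its settings are only read when the index is actually present in the local cluster state. A minimal standalone sketch of that pattern follows (hypothetical types, not the real `TribeService`).

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Standalone sketch (hypothetical types, not the real TribeService) of the
 * null-safe pattern in the fix: look the index metadata up once and only
 * dereference it when it is actually present in the local cluster state.
 */
public class TribeMergeSketch {

    static final class IndexMetaData {
        final String tribeName;
        IndexMetaData(String tribeName) { this.tribeName = tribeName; }
    }

    public static void main(String[] args) {
        // local cluster state: one index already merged from tribe t1
        Map<String, IndexMetaData> localIndices = new HashMap<>();
        localIndices.put("logs", new IndexMetaData("t1"));

        // an index previously dropped because of a name conflict
        Set<String> droppedIndices = new HashSet<>();
        droppedIndices.add("conflict");

        for (String tribeIndex : new String[] {"logs", "conflict", "metrics"}) {
            IndexMetaData indexMetaData = localIndices.get(tribeIndex);  // single lookup
            if (indexMetaData == null) {
                // unknown locally: add it unless it was dropped on conflict
                if (!droppedIndices.contains(tribeIndex)) {
                    System.out.println("adding index " + tribeIndex);
                }
            } else {
                // safe to dereference; the old code reached the equivalent of
                // index(name).getSettings() even when the index was absent but
                // listed in droppedIndices, which is where the NPE came from
                System.out.println("index " + tribeIndex + " already added by tribe " + indexMetaData.tribeName);
            }
        }
    }
}
```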
{
"body": "I've been debugging a system where I couldn't get memory locking working. The upshot is JNA extracts out the relevant native library to /tmp and trys to load it, however if you follow security best practice and disable execute in /tmp (noexec mount option), this fails silently. The JNA documentation recommends you extract out the libraries on systems with additional security constraints; in fact the JNA package that ships with RHEL puts the relevant library in /usr/lib64/jna.\n\nI see two potential solutions:\n- src/main/java/org/elasticsearch/common/jna/CLibrary.java - log with warning on UnsatisfiedLinkError (rather than debug), and hint at noexec being a potential issue\n- extract out the various libjnidispatch.so versions into the elasticsearch lib directory and get JNA to load from there\n",
"comments": [
{
"body": "I'm very sympathetic to not have JNA unpack, but unfortunately this has several implications for packaging and deployment. However I just looked into this and found that while our current version of JNA does not support a custom directoy, the latest version does (see https://github.com/twall/jna/blob/master/src/com/sun/jna/Native.java#L1014), so that might be a way forward.\n\nI'm not the biggest fan of simply logging the failure since that does not provide an actual solution to the problem and will only confuse too many people even more (who will then ask about how to turn off noexec).\n\nIn the meantime you can try:\n- set the java.io.tempdir system property to a user-writable directory (maybe ES_HOME/tmp or somesuch)\n- sysctl vm.swappiness=0 _with a recent kernel_. If you want to know why this has the same effect and is in fact less harmful to the system than mlockall, see:\n http://www.quora.com/Linux/Why-does-Linux-swap-out-pages-when-I-have-many-pages-cached-and-vm-swappiness-is-set-to-0-Shouldnt-cached-pages-get-resized-and-no-swapping-should-occur\n\nHope this helps for now.\n",
"created_at": "2014-03-31T15:10:34Z"
},
{
"body": "Hey, thanks for your response. I agree logging isn't ideal, but in its defence:\n- #1194 already converted one class of mlockall failing to warn, but not this case (not sure why)\n- Not many people have noexec /tmp, and those who do generally know they have, so I don't think it should confuse too many people\n",
"created_at": "2014-03-31T18:36:34Z"
},
{
"body": "Please have a look at https://github.com/hhoffstaette/elasticsearch/commit/3cb270cf4f7e2326eb7e924275b5187d9b140e09 and let me know if the message is clear enough.\n",
"created_at": "2014-04-01T08:04:15Z"
},
{
"body": "LGTM, thanks Holger!\n",
"created_at": "2014-04-01T18:47:39Z"
}
],
"number": 5493,
"title": "mlockall fails on noexec /tmp"
} | {
"body": "See issue #5493\n",
"number": 5636,
"review_comments": [
{
"body": "Would be nice to add delimiters (eg. quotes) around `java.io.tmpdir` and `jna.tmpdir`.\n",
"created_at": "2014-04-01T12:38:08Z"
},
{
"body": "While we are improving this documentation, I think it would be nice to make clearer what applies to all (or at least most) OSes (such as the paragraph above about filesystem caching and swapping) and what is specific to Linux (vm.swappiness, mlockall).\n",
"created_at": "2014-04-01T12:43:52Z"
},
{
"body": "I first had ${..} realized that might be confusing some people with environment variables. I can certainly add single quotes in the log.\n",
"created_at": "2014-04-01T12:44:45Z"
},
{
"body": "The paragraph about mlockall explicitly says it only works on Linux. There are similar ways to accomplish something like it on Windows, but unless we have code to actually do that or someone with Windows system tuning experience I'd rather not write things I don't know anything about.\n",
"created_at": "2014-04-01T12:49:46Z"
}
],
"title": "Update JNA to 4.1.0, properly warn on error, hint at noexec mount"
} | {
"commits": [
{
"message": "Bump JNA to latest 4.1.0."
},
{
"message": "Log UnsatisfiedLinkError with WARN instead of DEBUG, mention noexec\nmount. (#5493)"
},
{
"message": "Additional documentation for jna.tmpdir and setting vm.swappiness."
},
{
"message": "Small update to phrasing."
}
],
"files": [
{
"diff": "@@ -52,18 +52,37 @@ curl localhost:9200/_nodes/process?pretty\n [[setup-configuration-memory]]\n ==== Memory Settings\n \n-There is an option to use\n+The Linux kernel tries to use as much memory as possible for file system\n+caches, and eagerly swaps out unused application memory. This is harmful\n+especially for the JVM; fortunately there are steps that can be taken to\n+avoid performance degradation by swapping.\n+\n+The first (and generally recommended) option is to ensure that the `sysctl`\n+value `vm.swappiness` is set to `0`. This reduces the kernel's tendency to\n+swap and should not lead to swapping under normal circumstances, while still\n+allowing the whole system to swap in emergency conditions.\n+However, while this works correctly with recent Linux kernels, older\n+distributions may not behave correctly even with `vm.swappiness=0` and\n+still swap, especially when doing lots of file I/O.\n+\n+For these cases there is an option to use\n http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html[mlockall] to\n try to lock the process address space so it won't be swapped. For this\n to work, the `bootstrap.mlockall` should be set to `true` and it is\n recommended to set both the min and max memory allocation to be the\n-same. Note: This option is only available on Linux/Unix operating\n-systems.\n+same, which happens automatically when you set `ES_HEAP_SIZE`.\n+Note: This option is only available on Linux/Unix operating systems.\n \n In order to see if this works or not, set the `common.jna` logging to\n DEBUG level. A solution to \"Unknown mlockall error 0\" can be to set\n `ulimit -l unlimited`.\n \n+Another possible reason why `mlockall` can fail is when when the directory\n+pointed to by the `java.io.tmpdir` JVM system property - typically `/tmp` -\n+is mounted with the `noexec` option for security reasons. In this case you\n+can specify an additional directory `jna.tmpdir` to use for loading the\n+native library.\n+ \n Note, `mlockall` might cause the JVM or shell\n session to exit if it fails to allocate the memory (because not enough\n memory is available on the machine).",
"filename": "docs/reference/setup/configuration.asciidoc",
"status": "modified"
},
{
"diff": "@@ -273,7 +273,7 @@\n <dependency>\n <groupId>net.java.dev.jna</groupId>\n <artifactId>jna</artifactId>\n- <version>3.3.0</version>\n+ <version>4.1.0</version>\n <scope>compile</scope>\n <optional>true</optional>\n </dependency>",
"filename": "pom.xml",
"status": "modified"
},
{
"diff": "@@ -40,9 +40,10 @@ public class CLibrary {\n try {\n Native.register(\"c\");\n } catch (NoClassDefFoundError e) {\n- logger.warn(\"jna not found. native methods (mlockall) will be disabled.\");\n+ logger.warn(\"JNA not found. native methods (mlockall) will be disabled.\");\n } catch (UnsatisfiedLinkError e) {\n- logger.debug(\"unable to link C library. native methods (mlockall) will be disabled.\");\n+ logger.warn(\"unable to link C library. native methods (mlockall) will be disabled.\");\n+ logger.warn(\"if java.io.tmpdir is mounted noexec then try setting jna.tmpdir.\");\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/common/jna/CLibrary.java",
"status": "modified"
}
]
} |
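
For readers hitting the same noexec /tmp problem, the sketch below shows the workaround discussed above: point JNA's extraction directory at an executable location via the `jna.tmpdir` system property, then register libc and attempt `mlockall`. It assumes the JNA 4.x jar is on the classpath, and the directory path is a placeholder.

```java
import com.sun.jna.Native;

/**
 * Workaround sketch: requires the JNA 4.x jar on the classpath; the directory
 * below is a placeholder and must exist on a filesystem mounted without noexec.
 */
public class MlockallSketch {

    // Linux values for the mlockall flags
    public static final int MCL_CURRENT = 1;
    public static final int MCL_FUTURE = 2;

    // direct mapping of the libc call, same style as the CLibrary wrapper above
    public static native int mlockall(int flags);

    public static void main(String[] args) {
        // equivalent to passing -Djna.tmpdir=/usr/share/myapp/jna-tmp on the command line;
        // JNA extracts libjnidispatch there instead of java.io.tmpdir
        System.setProperty("jna.tmpdir", "/usr/share/myapp/jna-tmp");
        try {
            Native.register("c");
        } catch (UnsatisfiedLinkError e) {
            System.err.println("unable to link C library (is the extraction dir executable?): " + e.getMessage());
            return;
        }
        int rc = mlockall(MCL_CURRENT | MCL_FUTURE);
        System.out.println(rc == 0 ? "memory locked" : "mlockall failed, check `ulimit -l`");
    }
}
```

In practice the same effect can be had without code changes by launching the JVM with `-Djna.tmpdir=...` pointing at an executable directory.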
{
"body": "Is there any particular reason why `fromXContent` should raise an Exception, while `toXContent` serializes the information?\n\nSource at https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/cluster/metadata/SnapshotMetaData.java#L358\n\nThis seems to cause problems for certain gateways, such as https://github.com/elasticsearch/elasticsearch-cloud-aws/issues/68\n",
"comments": [
{
"body": "@nkvoll Thanks for the report. SnapshotMetaData doesn't implement `fromXContent` because it's not supposed to be serialized as part of persistent cluster state and therefore shouldn't be deserializable when cluster state is restored. I am working on a fix for this issue.\n",
"created_at": "2014-03-31T18:56:10Z"
},
{
"body": "LGTM\n",
"created_at": "2014-04-14T08:55:16Z"
},
{
"body": "Fixed by #5628 \n",
"created_at": "2014-04-14T22:43:33Z"
}
],
"number": 5615,
"title": "SnapshotMetaData.fromXContent does not match toXContent, but throws an Exception"
} | {
"body": "Fixes #5615\n\nI would like to merge the first commit b0172c0 to 1.1, 1.x and master since it looks like a good change anyway. The second commit will go only to 1.1 because 1.x and master no longer have blob store based gateways (S3, shared filesystem, etc). \n",
"number": 5628,
"review_comments": [],
"title": "Blob store-based gateways shouldn't serialize transient metadata"
} | {
"commits": [
{
"message": "Separate persistent and global metadata serialization settings"
},
{
"message": "Blob store-based gateways shouldn't serialize transient metadata\n\nFixes #5615"
}
],
"files": [
{
"diff": "@@ -114,7 +114,9 @@ public static <T extends Custom> Custom.Factory<T> lookupFactorySafe(String type\n \n public static final MetaData EMPTY_META_DATA = builder().build();\n \n- public static final String GLOBAL_PERSISTENT_ONLY_PARAM = \"global_persistent_only\";\n+ public static final String GLOBAL_ONLY_PARAM = \"global_only\";\n+\n+ public static final String PERSISTENT_ONLY_PARAM = \"persistent_only\";\n \n private final String uuid;\n private final long version;\n@@ -1230,7 +1232,8 @@ public static String toXContent(MetaData metaData) throws IOException {\n }\n \n public static void toXContent(MetaData metaData, XContentBuilder builder, ToXContent.Params params) throws IOException {\n- boolean globalPersistentOnly = params.paramAsBoolean(GLOBAL_PERSISTENT_ONLY_PARAM, false);\n+ boolean globalOnly = params.paramAsBoolean(GLOBAL_ONLY_PARAM, false);\n+ boolean persistentOnly = params.paramAsBoolean(PERSISTENT_ONLY_PARAM, false);\n builder.startObject(\"meta-data\");\n \n builder.field(\"version\", metaData.version());\n@@ -1244,7 +1247,7 @@ public static void toXContent(MetaData metaData, XContentBuilder builder, ToXCon\n builder.endObject();\n }\n \n- if (!globalPersistentOnly && !metaData.transientSettings().getAsMap().isEmpty()) {\n+ if (!persistentOnly && !metaData.transientSettings().getAsMap().isEmpty()) {\n builder.startObject(\"transient_settings\");\n for (Map.Entry<String, String> entry : metaData.transientSettings().getAsMap().entrySet()) {\n builder.field(entry.getKey(), entry.getValue());\n@@ -1258,7 +1261,7 @@ public static void toXContent(MetaData metaData, XContentBuilder builder, ToXCon\n }\n builder.endObject();\n \n- if (!globalPersistentOnly && !metaData.indices().isEmpty()) {\n+ if (!globalOnly && !metaData.indices().isEmpty()) {\n builder.startObject(\"indices\");\n for (IndexMetaData indexMetaData : metaData) {\n IndexMetaData.Builder.toXContent(indexMetaData, builder, params);\n@@ -1268,7 +1271,7 @@ public static void toXContent(MetaData metaData, XContentBuilder builder, ToXCon\n \n for (ObjectObjectCursor<String, Custom> cursor : metaData.customs()) {\n Custom.Factory factory = lookupFactorySafe(cursor.key);\n- if (!globalPersistentOnly || factory.isPersistent()) {\n+ if (!persistentOnly || factory.isPersistent()) {\n builder.startObject(cursor.key);\n factory.toXContent(cursor.value, builder, params);\n builder.endObject();",
"filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaData.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Lists;\n+import com.google.common.collect.Maps;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterService;\n@@ -43,6 +44,7 @@\n \n import java.io.IOException;\n import java.util.List;\n+import java.util.Map;\n \n /**\n *\n@@ -61,8 +63,13 @@ public abstract class BlobStoreGateway extends SharedStorageGateway {\n \n private volatile int currentIndex;\n \n+ private final ToXContent.Params persistentOnlyFormatParams;\n+\n protected BlobStoreGateway(Settings settings, ThreadPool threadPool, ClusterService clusterService, ClusterName clusterName) {\n super(settings, threadPool, clusterService, clusterName);\n+ Map<String, String> persistentOnlyParams = Maps.newHashMap();\n+ persistentOnlyParams.put(MetaData.PERSISTENT_ONLY_PARAM, \"true\");\n+ persistentOnlyFormatParams = new ToXContent.MapParams(persistentOnlyParams);\n }\n \n protected void initialize(BlobStore blobStore, ClusterName clusterName, @Nullable ByteSizeValue defaultChunkSize) throws IOException {\n@@ -158,7 +165,7 @@ public void write(MetaData metaData) throws GatewayException {\n }\n XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON, stream);\n builder.startObject();\n- MetaData.Builder.toXContent(metaData, builder, ToXContent.EMPTY_PARAMS);\n+ MetaData.Builder.toXContent(metaData, builder, persistentOnlyFormatParams);\n builder.endObject();\n builder.close();\n metaDataBlobContainer.writeBlob(newMetaData, bStream.bytes().streamInput(), bStream.bytes().length());",
"filename": "src/main/java/org/elasticsearch/gateway/blobstore/BlobStoreGateway.java",
"status": "modified"
},
{
"diff": "@@ -132,12 +132,14 @@ public LocalGatewayMetaState(Settings settings, ThreadPool threadPool, NodeEnvir\n formatParams = new ToXContent.MapParams(params);\n Map<String, String> globalOnlyParams = Maps.newHashMap();\n globalOnlyParams.put(\"binary\", \"true\");\n- globalOnlyParams.put(MetaData.GLOBAL_PERSISTENT_ONLY_PARAM, \"true\");\n+ globalOnlyParams.put(MetaData.PERSISTENT_ONLY_PARAM, \"true\");\n+ globalOnlyParams.put(MetaData.GLOBAL_ONLY_PARAM, \"true\");\n globalOnlyFormatParams = new ToXContent.MapParams(globalOnlyParams);\n } else {\n formatParams = ToXContent.EMPTY_PARAMS;\n Map<String, String> globalOnlyParams = Maps.newHashMap();\n- globalOnlyParams.put(MetaData.GLOBAL_PERSISTENT_ONLY_PARAM, \"true\");\n+ globalOnlyParams.put(MetaData.PERSISTENT_ONLY_PARAM, \"true\");\n+ globalOnlyParams.put(MetaData.GLOBAL_ONLY_PARAM, \"true\");\n globalOnlyFormatParams = new ToXContent.MapParams(globalOnlyParams);\n }\n ",
"filename": "src/main/java/org/elasticsearch/gateway/local/state/meta/LocalGatewayMetaState.java",
"status": "modified"
},
{
"diff": "@@ -137,7 +137,8 @@ protected BlobStoreRepository(String repositoryName, RepositorySettings reposito\n this.repositoryName = repositoryName;\n this.indexShardRepository = (BlobStoreIndexShardRepository) indexShardRepository;\n Map<String, String> globalOnlyParams = Maps.newHashMap();\n- globalOnlyParams.put(MetaData.GLOBAL_PERSISTENT_ONLY_PARAM, \"true\");\n+ globalOnlyParams.put(MetaData.PERSISTENT_ONLY_PARAM, \"true\");\n+ globalOnlyParams.put(MetaData.GLOBAL_ONLY_PARAM, \"true\");\n globalOnlyFormatParams = new ToXContent.MapParams(globalOnlyParams);\n snapshotRateLimiter = getRateLimiter(repositorySettings, \"max_snapshot_bytes_per_sec\", new ByteSizeValue(20, ByteSizeUnit.MB));\n restoreRateLimiter = getRateLimiter(repositorySettings, \"max_restore_bytes_per_sec\", new ByteSizeValue(20, ByteSizeUnit.MB));",
"filename": "src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java",
"status": "modified"
}
]
} |
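
The key idea in the change above is that "persistent only" and "global only" become two independent serialization flags, so a blob-store gateway can skip transient settings (and transient custom metadata such as `SnapshotMetaData`) while still writing per-index metadata. The following standalone sketch (a hypothetical writer, not the real `MetaData`/`ToXContent` code) mimics how such params steer what gets written.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Standalone sketch (hypothetical writer, not the real MetaData/XContent code)
 * of splitting the old combined flag into persistent_only and global_only:
 * a gateway can now drop transient settings while still writing index metadata.
 */
public class MetaDataWriteSketch {

    static String write(Map<String, Boolean> params,
                        Map<String, String> persistentSettings,
                        Map<String, String> transientSettings,
                        Map<String, String> indices) {
        boolean persistentOnly = params.getOrDefault("persistent_only", false);
        boolean globalOnly = params.getOrDefault("global_only", false);
        StringBuilder out = new StringBuilder("{");
        out.append("\"settings\":").append(persistentSettings);
        if (!persistentOnly) {                 // transient state is never persisted
            out.append(",\"transient_settings\":").append(transientSettings);
        }
        if (!globalOnly) {                     // per-index metadata only for non-global writers
            out.append(",\"indices\":").append(indices);
        }
        return out.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, String> persistent = new HashMap<>();
        persistent.put("cluster.routing.allocation.enable", "all");
        Map<String, String> transientSettings = new HashMap<>();
        transientSettings.put("indices.recovery.max_bytes_per_sec", "50mb");
        Map<String, String> indices = new HashMap<>();
        indices.put("test", "{...}");

        // blob-store gateway: keep indices, drop transient state
        Map<String, Boolean> gatewayParams = new HashMap<>();
        gatewayParams.put("persistent_only", true);
        System.out.println(write(gatewayParams, persistent, transientSettings, indices));

        // snapshot repository global metadata: drop transient state and indices
        gatewayParams.put("global_only", true);
        System.out.println(write(gatewayParams, persistent, transientSettings, indices));
    }
}
```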
{
"body": " Today the cluster_state is not associated with the cluster_name which is odd since it's pretty much it's only valid identifier. Any node can send a cluster state to another node today even if it's not the same cluster. These situations can happen rarely during tests or even in the wild by accident. The problem can occur if multiple nodes run on the same machine and they join multiple clusters. If node `A` from cluster `cluster_name=a` shuts down while the other node `B` starts up with `cluster_name=b` and this node happens to get the same port on that physical host a third node that still thinks `B` is listening on that port might send a new cluster state. This can cause weird problems and we should discard the cluster state if the cluster names don't match\n",
"comments": [],
"number": 5622,
"title": "Clusterstate misses the cluster name as it's identifier"
} | {
"body": "see #5622\n",
"number": 5624,
"review_comments": [
{
"body": "should we make this final then? Also, in the `Builder(ClusterState state)` constructor, we should copy over the cluster state from the provided state.\n",
"created_at": "2014-03-31T17:42:50Z"
}
],
"title": "Add cluster name to state"
} | {
"commits": [
{
"message": "Add cluster_name to cluster_state\n\nToday the clusterstate is not asssociated with the cluster_name which is odd\nsince it's pretty much it's only valid identifier. Any node can send a cluster\nstate to another node today even if it's not the same cluster. These situations\ncan happen rarely during tests or even in the wild by accident."
},
{
"message": "Discard new cluster state if the clustername doesn't match\n\nCloses #5622"
}
],
"files": [
{
"diff": "@@ -73,7 +73,7 @@ protected ClusterStateResponse newResponse() {\n protected void masterOperation(final ClusterStateRequest request, final ClusterState state, ActionListener<ClusterStateResponse> listener) throws ElasticsearchException {\n ClusterState currentState = clusterService.state();\n logger.trace(\"Serving cluster state request using version {}\", currentState.version());\n- ClusterState.Builder builder = ClusterState.builder();\n+ ClusterState.Builder builder = ClusterState.builder(currentState.getClusterName());\n builder.version(currentState.version());\n if (request.nodes()) {\n builder.nodes(currentState.nodes());",
"filename": "src/main/java/org/elasticsearch/action/admin/cluster/state/TransportClusterStateAction.java",
"status": "modified"
},
{
"diff": "@@ -110,17 +110,20 @@ public static <T extends Custom> Custom.Factory<T> lookupFactorySafe(String type\n \n private final ImmutableOpenMap<String, Custom> customs;\n \n+ private final ClusterName clusterName;\n+\n // built on demand\n private volatile RoutingNodes routingNodes;\n \n private SettingsFilter settingsFilter;\n \n public ClusterState(long version, ClusterState state) {\n- this(version, state.metaData(), state.routingTable(), state.nodes(), state.blocks(), state.customs());\n+ this(state.clusterName, version, state.metaData(), state.routingTable(), state.nodes(), state.blocks(), state.customs());\n }\n \n- public ClusterState(long version, MetaData metaData, RoutingTable routingTable, DiscoveryNodes nodes, ClusterBlocks blocks, ImmutableOpenMap<String, Custom> customs) {\n+ public ClusterState(ClusterName clusterName, long version, MetaData metaData, RoutingTable routingTable, DiscoveryNodes nodes, ClusterBlocks blocks, ImmutableOpenMap<String, Custom> customs) {\n this.version = version;\n+ this.clusterName = clusterName;\n this.metaData = metaData;\n this.routingTable = routingTable;\n this.nodes = nodes;\n@@ -184,6 +187,10 @@ public ImmutableOpenMap<String, Custom> getCustoms() {\n return this.customs;\n }\n \n+ public ClusterName getClusterName() {\n+ return this.clusterName;\n+ }\n+\n /**\n * Returns a built (on demand) routing nodes view of the routing table. <b>NOTE, the routing nodes\n * are mutable, use them just for read operations</b>\n@@ -420,8 +427,8 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n return builder;\n }\n \n- public static Builder builder() {\n- return new Builder();\n+ public static Builder builder(ClusterName clusterName) {\n+ return new Builder(clusterName);\n }\n \n public static Builder builder(ClusterState state) {\n@@ -430,18 +437,17 @@ public static Builder builder(ClusterState state) {\n \n public static class Builder {\n \n+ private final ClusterName clusterName;\n private long version = 0;\n private MetaData metaData = MetaData.EMPTY_META_DATA;\n private RoutingTable routingTable = RoutingTable.EMPTY_ROUTING_TABLE;\n private DiscoveryNodes nodes = DiscoveryNodes.EMPTY_NODES;\n private ClusterBlocks blocks = ClusterBlocks.EMPTY_CLUSTER_BLOCK;\n private final ImmutableOpenMap.Builder<String, Custom> customs;\n \n- public Builder() {\n- customs = ImmutableOpenMap.builder();\n- }\n \n public Builder(ClusterState state) {\n+ this.clusterName = state.clusterName;\n this.version = state.version();\n this.nodes = state.nodes();\n this.routingTable = state.routingTable();\n@@ -450,6 +456,11 @@ public Builder(ClusterState state) {\n this.customs = ImmutableOpenMap.builder(state.customs());\n }\n \n+ public Builder(ClusterName clusterName) {\n+ customs = ImmutableOpenMap.builder();\n+ this.clusterName = clusterName;\n+ }\n+\n public Builder nodes(DiscoveryNodes.Builder nodesBuilder) {\n return nodes(nodesBuilder.build());\n }\n@@ -511,7 +522,7 @@ public Builder removeCustom(String type) {\n }\n \n public ClusterState build() {\n- return new ClusterState(version, metaData, routingTable, nodes, blocks, customs.build());\n+ return new ClusterState(clusterName, version, metaData, routingTable, nodes, blocks, customs.build());\n }\n \n public static byte[] toBytes(ClusterState state) throws IOException {\n@@ -525,6 +536,12 @@ public static ClusterState fromBytes(byte[] data, DiscoveryNode localNode) throw\n }\n \n public static void writeTo(ClusterState state, StreamOutput out) throws IOException {\n+ if 
(out.getVersion().onOrAfter(Version.V_1_1_1)) {\n+ out.writeBoolean(state.clusterName != null);\n+ if (state.clusterName != null) {\n+ state.clusterName.writeTo(out);\n+ }\n+ }\n out.writeLong(state.version());\n MetaData.Builder.writeTo(state.metaData(), out);\n RoutingTable.Builder.writeTo(state.routingTable(), out);\n@@ -542,7 +559,14 @@ public static void writeTo(ClusterState state, StreamOutput out) throws IOExcept\n }\n \n public static ClusterState readFrom(StreamInput in, @Nullable DiscoveryNode localNode) throws IOException {\n- Builder builder = new Builder();\n+ ClusterName clusterName = null;\n+ if (in.getVersion().onOrAfter(Version.V_1_1_1)) {\n+ // it might be null even if it comes from a >= 1.1.1 node since it's origin might be an older node\n+ if (in.readBoolean()) {\n+ clusterName = ClusterName.readClusterName(in);\n+ }\n+ }\n+ Builder builder = new Builder(clusterName);\n builder.version = in.readLong();\n builder.metaData = MetaData.Builder.readFrom(in);\n builder.routingTable = RoutingTable.Builder.readFrom(in);",
"filename": "src/main/java/org/elasticsearch/cluster/ClusterState.java",
"status": "modified"
},
{
"diff": "@@ -81,21 +81,22 @@ public class InternalClusterService extends AbstractLifecycleComponent<ClusterSe\n \n private final Queue<NotifyTimeout> onGoingTimeouts = ConcurrentCollections.newQueue();\n \n- private volatile ClusterState clusterState = ClusterState.builder().build();\n+ private volatile ClusterState clusterState;\n \n private final ClusterBlocks.Builder initialBlocks = ClusterBlocks.builder().addGlobalBlock(Discovery.NO_MASTER_BLOCK);\n \n private volatile ScheduledFuture reconnectToNodes;\n \n @Inject\n public InternalClusterService(Settings settings, DiscoveryService discoveryService, OperationRouting operationRouting, TransportService transportService,\n- NodeSettingsService nodeSettingsService, ThreadPool threadPool) {\n+ NodeSettingsService nodeSettingsService, ThreadPool threadPool, ClusterName clusterName) {\n super(settings);\n this.operationRouting = operationRouting;\n this.transportService = transportService;\n this.discoveryService = discoveryService;\n this.threadPool = threadPool;\n this.nodeSettingsService = nodeSettingsService;\n+ this.clusterState = ClusterState.builder(clusterName).build();\n \n this.nodeSettingsService.setClusterService(this);\n \n@@ -126,7 +127,7 @@ public void removeInitialStateBlock(ClusterBlock block) throws ElasticsearchIlle\n @Override\n protected void doStart() throws ElasticsearchException {\n add(localNodeMasterListeners);\n- this.clusterState = ClusterState.builder().blocks(initialBlocks).build();\n+ this.clusterState = ClusterState.builder(clusterState).blocks(initialBlocks).build();\n this.updateTasksExecutor = EsExecutors.newSinglePrioritizing(daemonThreadFactory(settings, \"clusterService#updateTask\"));\n this.reconnectToNodes = threadPool.schedule(reconnectInterval, ThreadPool.Names.GENERIC, new ReconnectToNodes());\n }",
"filename": "src/main/java/org/elasticsearch/cluster/service/InternalClusterService.java",
"status": "modified"
},
{
"diff": "@@ -544,6 +544,14 @@ static class ProcessClusterState {\n private final BlockingQueue<ProcessClusterState> processNewClusterStates = ConcurrentCollections.newBlockingQueue();\n \n void handleNewClusterStateFromMaster(ClusterState newClusterState, final PublishClusterStateAction.NewClusterStateListener.NewStateProcessed newStateProcessed) {\n+ final ClusterName incomingClusterName = newClusterState.getClusterName();\n+ /* The cluster name can still be null if the state comes from a node that is prev 1.1.1*/\n+ if (incomingClusterName != null && !incomingClusterName.equals(this.clusterName)) {\n+ logger.warn(\"received cluster state from [{}] which is also master but with a different cluster name [{}]\", newClusterState.nodes().masterNode(), incomingClusterName);\n+ newStateProcessed.onNewClusterStateFailed(new ElasticsearchIllegalStateException(\"received state from a node that is not part of the cluster\"));\n+ return;\n+ }\n+ logger.debug(\"received cluster state from [{}] which is also master but with cluster name [{}]\", newClusterState.nodes().masterNode(), incomingClusterName);\n if (master) {\n final ClusterState newState = newClusterState;\n clusterService.submitStateUpdateTask(\"zen-disco-master_receive_cluster_state_from_another_master [\" + newState.nodes().masterNode() + \"]\", Priority.URGENT, new ProcessedClusterStateUpdateTask() {",
"filename": "src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java",
"status": "modified"
},
{
"diff": "@@ -170,6 +170,7 @@ public void messageReceived(BytesTransportRequest request, final TransportChanne\n }\n in.setVersion(request.version());\n ClusterState clusterState = ClusterState.Builder.readFrom(in, nodesProvider.nodes().localNode());\n+\n logger.debug(\"received cluster state version {}\", clusterState.version());\n listener.onNewClusterState(clusterState, new NewClusterStateListener.NewStateProcessed() {\n @Override",
"filename": "src/main/java/org/elasticsearch/discovery/zen/publish/PublishClusterStateAction.java",
"status": "modified"
},
{
"diff": "@@ -24,10 +24,7 @@\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.FailedNodeException;\n-import org.elasticsearch.cluster.ClusterChangedEvent;\n-import org.elasticsearch.cluster.ClusterService;\n-import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n import org.elasticsearch.common.component.AbstractLifecycleComponent;\n@@ -58,16 +55,18 @@ public class LocalGateway extends AbstractLifecycleComponent<Gateway> implements\n private final TransportNodesListGatewayMetaState listGatewayMetaState;\n \n private final String initialMeta;\n+ private final ClusterName clusterName;\n \n @Inject\n public LocalGateway(Settings settings, ClusterService clusterService, NodeEnvironment nodeEnv,\n LocalGatewayShardsState shardsState, LocalGatewayMetaState metaState,\n- TransportNodesListGatewayMetaState listGatewayMetaState) {\n+ TransportNodesListGatewayMetaState listGatewayMetaState, ClusterName clusterName) {\n super(settings);\n this.clusterService = clusterService;\n this.nodeEnv = nodeEnv;\n this.metaState = metaState;\n this.listGatewayMetaState = listGatewayMetaState;\n+ this.clusterName = clusterName;\n \n this.shardsState = shardsState;\n \n@@ -186,7 +185,7 @@ public void performStateRecovery(final GatewayStateRecoveredListener listener) t\n }\n }\n }\n- ClusterState.Builder builder = ClusterState.builder();\n+ ClusterState.Builder builder = ClusterState.builder(clusterName);\n builder.metaData(metaDataBuilder);\n listener.onSuccess(builder.build());\n }",
"filename": "src/main/java/org/elasticsearch/gateway/local/LocalGateway.java",
"status": "modified"
},
{
"diff": "@@ -20,10 +20,7 @@\n package org.elasticsearch.gateway.none;\n \n import org.elasticsearch.ElasticsearchException;\n-import org.elasticsearch.cluster.ClusterChangedEvent;\n-import org.elasticsearch.cluster.ClusterService;\n-import org.elasticsearch.cluster.ClusterState;\n-import org.elasticsearch.cluster.ClusterStateListener;\n+import org.elasticsearch.cluster.*;\n import org.elasticsearch.cluster.action.index.NodeIndexDeletedAction;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -49,16 +46,18 @@ public class NoneGateway extends AbstractLifecycleComponent<Gateway> implements\n private final ClusterService clusterService;\n private final NodeEnvironment nodeEnv;\n private final NodeIndexDeletedAction nodeIndexDeletedAction;\n+ private final ClusterName clusterName;\n \n @Nullable\n private volatile MetaData currentMetaData;\n \n @Inject\n- public NoneGateway(Settings settings, ClusterService clusterService, NodeEnvironment nodeEnv, NodeIndexDeletedAction nodeIndexDeletedAction) {\n+ public NoneGateway(Settings settings, ClusterService clusterService, NodeEnvironment nodeEnv, NodeIndexDeletedAction nodeIndexDeletedAction, ClusterName clusterName) {\n super(settings);\n this.clusterService = clusterService;\n this.nodeEnv = nodeEnv;\n this.nodeIndexDeletedAction = nodeIndexDeletedAction;\n+ this.clusterName = clusterName;\n \n clusterService.addLast(this);\n }\n@@ -88,7 +87,7 @@ protected void doClose() throws ElasticsearchException {\n @Override\n public void performStateRecovery(GatewayStateRecoveredListener listener) throws GatewayException {\n logger.debug(\"performing state recovery\");\n- listener.onSuccess(ClusterState.builder().build());\n+ listener.onSuccess(ClusterState.builder(clusterName).build());\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/gateway/none/NoneGateway.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.benchmark.cluster;\n \n import com.google.common.collect.ImmutableMap;\n+import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -65,7 +66,7 @@ public static void main(String[] args) {\n for (int i = 1; i <= numberOfNodes; i++) {\n nb.put(ElasticsearchAllocationTestCase.newNode(\"node\" + i, numberOfTags == 0 ? ImmutableMap.<String, String>of() : ImmutableMap.of(\"tag\", \"tag_\" + (i % numberOfTags))));\n }\n- ClusterState initialClusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).nodes(nb).build();\n+ ClusterState initialClusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).nodes(nb).build();\n \n long start = System.currentTimeMillis();\n for (int i = 0; i < numberOfRuns; i++) {",
"filename": "src/test/java/org/elasticsearch/benchmark/cluster/ClusterAllocationRerouteBenchmark.java",
"status": "modified"
},
{
"diff": "@@ -187,7 +187,7 @@ public void testClusterHealth() {\n metaData.put(indexMetaData, true);\n routingTable.add(indexRoutingTable);\n }\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n ClusterHealthResponse clusterHealth = new ClusterHealthResponse(\"bla\", clusterState.metaData().concreteIndices(null), clusterState);\n logger.info(\"cluster status: {}, expected {}\", clusterHealth.getStatus(), counter.status());\n \n@@ -208,7 +208,7 @@ public void testValidations() {\n MetaData.Builder metaData = MetaData.builder();\n metaData.put(indexMetaData, true);\n routingTable.add(indexRoutingTable);\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n ClusterHealthResponse clusterHealth = new ClusterHealthResponse(\"bla\", clusterState.metaData().concreteIndices(null), clusterState);\n // currently we have no cluster level validation failures as index validation issues are reported per index.\n assertThat(clusterHealth.getValidationFailures(), Matchers.hasSize(0));",
"filename": "src/test/java/org/elasticsearch/cluster/ClusterHealthResponsesTests.java",
"status": "modified"
},
{
"diff": "@@ -310,7 +310,7 @@ private ClusterState initCluster(AllocationService service, int numberOfNodes, i\n for (int i = 0; i < numberOfNodes; i++) {\n nodes.put(newNode(\"node\" + i));\n }\n- ClusterState clusterState = ClusterState.builder().nodes(nodes).metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).nodes(nodes).metaData(metaData).routingTable(routingTable).build();\n routingTable = service.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n RoutingNodes routingNodes = clusterState.routingNodes();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/AddIncrementallyTests.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.cluster.routing.allocation;\n \n+import org.elasticsearch.cluster.ClusterName;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -50,7 +51,7 @@ public void simpleFlagTests() {\n RoutingTable routingTable = RoutingTable.builder()\n .addAsNew(metaData.index(\"test\"))\n .build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n assertThat(clusterState.routingTable().index(\"test\").shard(0).primaryAllocatedPostApi(), equalTo(false));\n \n logger.info(\"adding two nodes and performing rerouting\");",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/AllocatePostApiFlagTests.java",
"status": "modified"
},
{
"diff": "@@ -64,7 +64,7 @@ public void moveShardCommand() {\n RoutingTable routingTable = RoutingTable.builder()\n .addAsNew(metaData.index(\"test\"))\n .build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -111,7 +111,7 @@ public void allocateCommand() {\n RoutingTable routingTable = RoutingTable.builder()\n .addAsNew(metaData.index(\"test\"))\n .build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding 3 nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -200,7 +200,7 @@ public void cancelCommand() {\n RoutingTable routingTable = RoutingTable.builder()\n .addAsNew(metaData.index(\"test\"))\n .build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding 3 nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/AllocationCommandsTests.java",
"status": "modified"
},
{
"diff": "@@ -60,7 +60,7 @@ public void moveShardOnceNewNodeWithAttributeAdded1() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -129,7 +129,7 @@ public void moveShardOnceNewNodeWithAttributeAdded2() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -204,7 +204,7 @@ public void moveShardOnceNewNodeWithAttributeAdded3() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -304,7 +304,7 @@ public void moveShardOnceNewNodeWithAttributeAdded4() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -396,7 +396,7 @@ public void moveShardOnceNewNodeWithAttributeAdded5() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -475,7 +475,7 @@ public void moveShardOnceNewNodeWithAttributeAdded6() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -557,7 +557,7 @@ public void fullAwareness1() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = 
ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -625,7 +625,7 @@ public void fullAwareness2() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -701,7 +701,7 @@ public void fullAwareness3() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -781,7 +781,7 @@ public void testUnbalancedZones() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/AwarenessAllocationTests.java",
"status": "modified"
},
{
"diff": "@@ -159,7 +159,7 @@ private ClusterState initCluster(AllocationService strategy) {\n for (int i = 0; i < numberOfNodes; i++) {\n nodes.put(newNode(\"node\" + i));\n }\n- ClusterState clusterState = ClusterState.builder().nodes(nodes).metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).nodes(nodes).metaData(metaData).routingTable(routingTable).build();\n routingTable = strategy.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n RoutingNodes routingNodes = clusterState.routingNodes();\n@@ -459,7 +459,7 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n nodes.put(node);\n }\n \n- ClusterState clusterState = ClusterState.builder().nodes(nodes).metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).nodes(nodes).metaData(metaData).routingTable(routingTable).build();\n routingTable = strategy.reroute(clusterState).routingTable();\n clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n RoutingNodes routingNodes = clusterState.routingNodes();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/BalanceConfigurationTests.java",
"status": "modified"
},
{
"diff": "@@ -54,7 +54,7 @@ public void testAlways() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"start two nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -140,7 +140,7 @@ public void testClusterPrimariesActive1() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"start two nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -244,7 +244,7 @@ public void testClusterPrimariesActive2() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"start two nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -328,7 +328,7 @@ public void testClusterAllActive1() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"start two nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -451,7 +451,7 @@ public void testClusterAllActive2() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"start two nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -535,7 +535,7 @@ public void testClusterAllActive3() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"start two nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/ClusterRebalanceRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -56,7 +56,7 @@ public void testClusterConcurrentRebalance() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n assertThat(routingTable.index(\"test\").shards().size(), equalTo(5));\n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/ConcurrentRebalanceRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -55,7 +55,7 @@ public void simpleDeadNodeOnStartedPrimaryShard() {\n RoutingTable routingTable = RoutingTable.builder()\n .addAsNew(metaData.index(\"test\"))\n .build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding 2 nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -107,7 +107,7 @@ public void deadNodeWhileRelocatingOnToNode() {\n RoutingTable routingTable = RoutingTable.builder()\n .addAsNew(metaData.index(\"test\"))\n .build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding 2 nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -182,7 +182,7 @@ public void deadNodeWhileRelocatingOnFromNode() {\n RoutingTable routingTable = RoutingTable.builder()\n .addAsNew(metaData.index(\"test\"))\n .build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding 2 nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/DeadNodesAllocationTests.java",
"status": "modified"
},
{
"diff": "@@ -59,7 +59,7 @@ public void testClusterDisableAllocation() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -88,7 +88,7 @@ public void testClusterDisableReplicaAllocation() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -121,7 +121,7 @@ public void testIndexDisableAllocation() {\n .addAsNew(metaData.index(\"enabled\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/DisableAllocationTests.java",
"status": "modified"
},
{
"diff": "@@ -57,7 +57,7 @@ public void testElectReplicaAsPrimaryDuringRelocation() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"Adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/ElectReplicaAsPrimaryDuringRelocationTests.java",
"status": "modified"
},
{
"diff": "@@ -55,7 +55,7 @@ public void simpleFailedNodeTest() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"start 4 nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).put(newNode(\"node4\"))).build();\n@@ -115,7 +115,7 @@ public void simpleFailedNodeTestNoReassign() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"start 4 nodes\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\")).put(newNode(\"node3\")).put(newNode(\"node4\"))).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/FailedNodeRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -59,7 +59,7 @@ public void testFailedShardPrimaryRelocatingToAndFrom() {\n RoutingTable routingTable = RoutingTable.builder()\n .addAsNew(metaData.index(\"test\"))\n .build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding 2 nodes on same rack and do rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -151,7 +151,7 @@ public void failPrimaryStartedCheckReplicaElected() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"Adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -233,7 +233,7 @@ public void firstAllocationFailureSingleNode() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"Adding single node and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\"))).build();\n@@ -290,7 +290,7 @@ public void firstAllocationFailureTwoNodes() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"Adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -348,7 +348,7 @@ public void rebalanceFailure() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"Adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/FailedShardsRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -62,7 +62,7 @@ public void testClusterFilters() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding four nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()\n@@ -111,7 +111,7 @@ public void testIndexFilters() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"--> adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/FilterRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -58,7 +58,7 @@ public void testBalanceAllNodesStarted() {\n \n RoutingTable routingTable = RoutingTable.builder().addAsNew(metaData.index(\"test\")).addAsNew(metaData.index(\"test1\")).build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n assertThat(routingTable.index(\"test\").shards().size(), equalTo(3));\n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n@@ -189,7 +189,7 @@ public void testBalanceIncrementallyStartNodes() {\n \n RoutingTable routingTable = RoutingTable.builder().addAsNew(metaData.index(\"test\")).addAsNew(metaData.index(\"test1\")).build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n assertThat(routingTable.index(\"test\").shards().size(), equalTo(3));\n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n@@ -351,7 +351,7 @@ public void testBalanceAllNodesStartedAddIndex() {\n \n RoutingTable routingTable = RoutingTable.builder().addAsNew(metaData.index(\"test\")).build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n assertThat(routingTable.index(\"test\").shards().size(), equalTo(3));\n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/IndexBalanceTests.java",
"status": "modified"
},
{
"diff": "@@ -67,7 +67,7 @@ public void testDoNotAllocateFromPrimary() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n assertThat(routingTable.index(\"test\").shards().size(), equalTo(5));\n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {\n@@ -187,7 +187,7 @@ public void testRandom() {\n }\n RoutingTable routingTable = rtBuilder.build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n assertThat(routingTable.shardsWithState(UNASSIGNED).size(), equalTo(routingTable.allShards().size()));\n List<DiscoveryNode> nodes = new ArrayList<>();\n int nodeIdx = 0;\n@@ -233,7 +233,7 @@ public void testRollingRestart() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n assertThat(routingTable.index(\"test\").shards().size(), equalTo(5));\n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/NodeVersionAllocationDeciderTests.java",
"status": "modified"
},
{
"diff": "@@ -60,7 +60,7 @@ public void testPreferLocalPrimaryAllocationOverFiltered() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"adding two nodes and performing rerouting till all are allocated\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder()",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/PreferLocalPrimariesToRelocatingPrimariesTests.java",
"status": "modified"
},
{
"diff": "@@ -59,7 +59,7 @@ public void testPreferPrimaryAllocationOverReplicas() {\n .addAsNew(metaData.index(\"test2\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"adding two nodes and performing rerouting till all are allocated\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/PreferPrimaryAllocationTests.java",
"status": "modified"
},
{
"diff": "@@ -57,7 +57,7 @@ public void testBackupElectionToPrimaryWhenPrimaryCanBeAllocatedToAnotherNode()\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"Adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();\n@@ -108,7 +108,7 @@ public void testRemovingInitializingReplicasIfPrimariesFails() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"Adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\")).put(newNode(\"node2\"))).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/PrimaryElectionRoutingTests.java",
"status": "modified"
},
{
"diff": "@@ -60,7 +60,7 @@ public void testPrimaryNotRelocatedWhileBeingRecoveredFrom() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n logger.info(\"Adding two nodes and performing rerouting\");\n clusterState = ClusterState.builder(clusterState).nodes(DiscoveryNodes.builder().put(newNode(\"node1\"))).build();",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/PrimaryNotRelocatedWhileBeingRecoveredTests.java",
"status": "modified"
},
{
"diff": "@@ -79,7 +79,7 @@ public void testRandomDecisions() {\n }\n \n RoutingTable routingTable = routingTableBuilder.build();\n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n int numIters = scaledRandomIntBetween(10, 30);\n int nodeIdCounter = 0;\n int atMostNodes = between(Math.max(1, maxNumReplicas), numIters);",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/RandomAllocationDeciderTests.java",
"status": "modified"
},
{
"diff": "@@ -61,7 +61,7 @@ public void testRebalanceOnlyAfterAllShardsAreActive() {\n .addAsNew(metaData.index(\"test\"))\n .build();\n \n- ClusterState clusterState = ClusterState.builder().metaData(metaData).routingTable(routingTable).build();\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).build();\n \n assertThat(routingTable.index(\"test\").shards().size(), equalTo(5));\n for (int i = 0; i < routingTable.index(\"test\").shards().size(); i++) {",
"filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/RebalanceAfterActiveTests.java",
"status": "modified"
}
]
} |
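The diffs in the row above repeat a single mechanical change: every allocation test that previously built a cluster state with the no-argument builder now passes an explicit cluster name. A minimal before/after sketch of that pattern, assuming only the classes that appear in the diffs (`ClusterState`, `ClusterName.DEFAULT`, `MetaData`, `RoutingTable`); the wrapper class and the import paths below are illustrative, not part of the patch:

```java
import org.elasticsearch.cluster.ClusterName;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.MetaData;
import org.elasticsearch.cluster.routing.RoutingTable;

// Sketch of the pattern applied throughout the allocation tests above:
// the ClusterState builder is now seeded with an explicit ClusterName.
class ClusterStateBuilderPattern {
    ClusterState build(MetaData metaData, RoutingTable routingTable) {
        // Before: ClusterState.builder().metaData(metaData).routingTable(routingTable).build()
        return ClusterState.builder(ClusterName.DEFAULT)
                .metaData(metaData)
                .routingTable(routingTable)
                .build();
    }
}
```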
{
"body": "According to the docs on http://www.elasticsearch.org/guide/reference/mapping/core-types.html a `boolean` field should accept `F` as false, but it doesn't:\n\n```\ncurl -XPUT 'http://127.0.0.1:9200/test/?pretty=1' -d '\n{\n \"mappings\" : {\n \"test\" : {\n \"properties\" : {\n \"x\" : {\n \"type\" : \"boolean\"\n }\n }\n }\n },\n \"settings\" : {\n \"number_of_shards\" : 1\n }\n}\n'\ncurl -XPUT 'http://127.0.0.1:9200/test/test/1?pretty=1&refresh=true' -d '\n{\n \"x\" : \"F\"\n}\n'\n\ncurl -XGET 'http://127.0.0.1:9200/test/test/_search?pretty=1' -d '\n{\n \"facets\" : {\n \"x\" : {\n \"terms\" : {\n \"field\" : \"x\"\n }\n }\n },\n \"size\" : 0\n}\n'\n\n# {\n# \"hits\" : {\n# \"hits\" : [],\n# \"max_score\" : 1,\n# \"total\" : 1\n# },\n# \"timed_out\" : false,\n# \"_shards\" : {\n# \"failed\" : 0,\n# \"successful\" : 1,\n# \"total\" : 1\n# },\n# \"facets\" : {\n# \"x\" : {\n# \"other\" : 0,\n# \"terms\" : [\n# {\n# \"count\" : 1,\n# \"term\" : \"T\"\n# }\n# ],\n# \"missing\" : 0,\n# \"_type\" : \"terms\",\n# \"total\" : 1\n# }\n# },\n# \"took\" : 2\n# }\n```\n",
"comments": [
{
"body": "Actually, I misread the docs. It says that it stores `T` and `F` internally but not that those values are accepted as `true` or `false`\n",
"created_at": "2014-07-08T15:24:25Z"
}
],
"number": 2075,
"title": "Boolean fields do not accept 'F' as false"
} | {
"body": "according to document `F` should be treat as `false`\nhttp://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-core-types.html#boolean\n\ncloses #2075\n",
"number": 5618,
"review_comments": [
{
"body": "Should probably create a table or list that maps values to `true` and `false`, and add `T` to it.\n\nIf going this route, then would it be worth it to add `f` and `t` as well? Personally, I think that it might be better to change the documentation than to add these options as they seem arbitrary, albeit nice and short.\n",
"created_at": "2014-03-31T23:56:14Z"
},
{
"body": "Consider `value.isEmpty()` instead of `value.equals(\"\")`. Also, would it be safer to do to avoid potential incosistencies (same for above):\n\n```\nreturn !(value.isEmpty() || isExplicitFalse(value));\n```\n",
"created_at": "2014-03-31T23:58:44Z"
},
{
"body": "can we maybe replace this with a `switch` statement?\n",
"created_at": "2014-04-02T09:02:52Z"
},
{
"body": "That's a good point @s1monw. Now that ES is using Java 7, the other checks that provide `String`s can also use a `switch`. Definitely suggest _not_ duplicating them all over the place if that happens (like the other one that I commented on).\n",
"created_at": "2014-04-02T16:12:22Z"
}
],
"title": "support `F` as false"
} | {
"commits": [
{
"message": "support `F` as false"
}
],
"files": [
{
"diff": "@@ -29,7 +29,7 @@ public static boolean parseBoolean(char[] text, int offset, int length, boolean\n return defaultValue;\n }\n if (length == 1) {\n- return text[offset] != '0';\n+ return text[offset] != '0' && text[offset] != 'F';\n }\n if (length == 2) {\n return !(text[offset] == 'n' && text[offset + 1] == 'o');\n@@ -55,7 +55,7 @@ public static boolean isBoolean(char[] text, int offset, int length) {\n return false;\n }\n if (length == 1) {\n- return text[offset] == '0' || text[offset] == '1';\n+ return text[offset] == '0' || text[offset] == '1' || text[offset] == 'F' || text[offset] == 'T';\n }\n if (length == 2) {\n return (text[offset] == 'n' && text[offset + 1] == 'o') || (text[offset] == 'o' && text[offset + 1] == 'n');\n@@ -78,22 +78,22 @@ public static boolean parseBoolean(String value, boolean defaultValue) {\n if (value == null) {\n return defaultValue;\n }\n- return !(value.equals(\"false\") || value.equals(\"0\") || value.equals(\"off\") || value.equals(\"no\"));\n+ return !(value.equals(\"false\") || value.equals(\"0\") || value.equals(\"off\") || value.equals(\"no\") || value.equals(\"F\") || value.equals(\"\"));\n }\n \n public static Boolean parseBoolean(String value, Boolean defaultValue) {\n if (value == null) {\n return defaultValue;\n }\n- return !(value.equals(\"false\") || value.equals(\"0\") || value.equals(\"off\") || value.equals(\"no\"));\n+ return !(value.equals(\"false\") || value.equals(\"0\") || value.equals(\"off\") || value.equals(\"no\") || value.equals(\"F\") || value.equals(\"\"));\n }\n \n public static boolean isExplicitFalse(String value) {\n- return (value.equals(\"false\") || value.equals(\"0\") || value.equals(\"off\") || value.equals(\"no\"));\n+ return (value.equals(\"false\") || value.equals(\"0\") || value.equals(\"off\") || value.equals(\"no\") || value.equals(\"F\"));\n }\n \n public static boolean isExplicitTrue(String value) {\n- return (value.equals(\"true\") || value.equals(\"1\") || value.equals(\"on\") || value.equals(\"yes\"));\n+ return (value.equals(\"true\") || value.equals(\"1\") || value.equals(\"on\") || value.equals(\"yes\") || value.equals(\"T\"));\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/common/Booleans.java",
"status": "modified"
},
{
"diff": "@@ -20,24 +20,49 @@\n package org.elasticsearch.common;\n \n import org.elasticsearch.test.ElasticsearchTestCase;\n-import org.hamcrest.Matchers;\n+import static org.hamcrest.Matchers.*;\n import org.junit.Test;\n \n public class BooleansTests extends ElasticsearchTestCase {\n \n @Test\n public void testIsBoolean() {\n- String[] booleans = new String[]{\"true\", \"false\", \"on\", \"off\", \"yes\", \"no\", \"0\", \"1\"};\n- String[] notBooleans = new String[]{\"11\", \"00\", \"sdfsdfsf\", \"F\", \"T\"};\n+ String[] booleans = new String[]{\"true\", \"false\", \"on\", \"off\", \"yes\", \"no\", \"0\", \"1\", \"F\", \"T\"};\n+ String[] notBooleans = new String[]{\"11\", \"00\", \"sdfsdfsf\"};\n \n for (String b : booleans) {\n String t = \"prefix\" + b + \"suffix\";\n- assertThat(\"failed to recognize [\" + b + \"] as boolean\", Booleans.isBoolean(t.toCharArray(), \"prefix\".length(), b.length()), Matchers.equalTo(true));\n+ assertThat(\"failed to recognize [\" + b + \"] as boolean\", Booleans.isBoolean(t.toCharArray(), \"prefix\".length(), b.length()), equalTo(true));\n }\n \n for (String nb : notBooleans) {\n String t = \"prefix\" + nb + \"suffix\";\n- assertThat(\"recognized [\" + nb + \"] as boolean\", Booleans.isBoolean(t.toCharArray(), \"prefix\".length(), nb.length()), Matchers.equalTo(false));\n+ assertThat(\"recognized [\" + nb + \"] as boolean\", Booleans.isBoolean(t.toCharArray(), \"prefix\".length(), nb.length()), equalTo(false));\n+ }\n+ }\n+\n+ @Test\n+ public void testParseBoolean() {\n+ String[] trueValues = new String[]{\"true\", \"on\", \"yes\", \"1\", \"T\"};\n+ String[] falseValues = new String[]{\"false\", \"off\", \"no\", \"0\", \"F\", \"\"};\n+ String[] unknownValues = new String[]{\"11\", \"00\", \"sdfsdfsf\"}; // unknown value should be treat as true\n+\n+ for (String s : trueValues) {\n+ assertThat(Booleans.parseBoolean(s, false), equalTo(true));\n+ assertThat(Booleans.parseBoolean(s, null), equalTo(true));\n+ assertThat(Booleans.parseBoolean(s.toCharArray(), 0, s.length(), false), equalTo(true));\n+ }\n+\n+ for (String s : falseValues) {\n+ assertThat(Booleans.parseBoolean(s, false), equalTo(false));\n+ assertThat(Booleans.parseBoolean(s, null), equalTo(false));\n+ assertThat(Booleans.parseBoolean(s.toCharArray(), 0, s.length(), false), equalTo(false));\n+ }\n+\n+ for (String s : unknownValues) {\n+ assertThat(Booleans.parseBoolean(s, false), equalTo(true));\n+ assertThat(Booleans.parseBoolean(s, null), equalTo(true));\n+ assertThat(Booleans.parseBoolean(s.toCharArray(), 0, s.length(), false), equalTo(true));\n }\n }\n }",
"filename": "src/test/java/org/elasticsearch/common/BooleansTests.java",
"status": "modified"
}
]
} |
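The gist of the change in the row above is that a value now parses as false only if it matches one of the explicit false tokens (which gain `"F"` and the empty string), while any unrecognized string still parses as true, as the added `BooleansTests` assert. Below is a self-contained sketch that mirrors the patched `parseBoolean(String, boolean)` from the diff; the class name and `main` method are illustrative only, not part of Elasticsearch:

```java
public final class BooleansSketch {

    // Mirrors the patched Booleans#parseBoolean(String, boolean): null falls back to
    // the default, the explicit false tokens (now including "F" and "") return false,
    // and everything else - even an unknown string - is treated as true.
    static boolean parseBoolean(String value, boolean defaultValue) {
        if (value == null) {
            return defaultValue;
        }
        return !(value.equals("false") || value.equals("0") || value.equals("off")
                || value.equals("no") || value.equals("F") || value.equals(""));
    }

    public static void main(String[] args) {
        System.out.println(parseBoolean("F", true));         // false - new in this change
        System.out.println(parseBoolean("T", false));        // true
        System.out.println(parseBoolean("sdfsdfsf", false)); // true - unknown values stay truthy
        System.out.println(parseBoolean(null, true));        // true - the default value is used
    }
}
```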
{
"body": "Here is some code extracted from `PRNG.random(int)`:\n\n``` java\nlong rand = doc;\nrand |= rand << 32;\nrand ^= rand;\nreturn nextFloat(rand);\n```\n\nThe issue is that `rand ^= rand;` is equivalent to `rand = 0;` so in the end, the random score generation completely discards the doc ID that was provided.\n",
"comments": [
{
"body": "Note: this is a dupe of #5454 although this one explains the issue\n",
"created_at": "2014-03-28T04:08:52Z"
},
{
"body": "closed by https://github.com/elasticsearch/elasticsearch/commit/e8ea9d75852bfc79e931143807af99ff9297da7e\n",
"created_at": "2014-04-28T17:27:43Z"
}
],
"number": 5578,
"title": "RandomScoreFunction.PRNG generates weak random numbers"
} | {
"body": "Closes #5454 and #5578\n\nThis strengthens and simplifies the PNRG used by `random_score` by more closely mirroring the `Random.nextFloat()` method, rather than a mix of that, `nextInt` and `nextDouble`. The `docBase` or `docId` are no longer used as they were biasing the result (particularly if it was `0`, which consistently made it the highest scoring result in tests), which partially defeats the purpose of random scoring.\n",
"number": 5613,
"review_comments": [],
"title": "Fixing questionable PNRG behavior"
} | {
"commits": [
{
"message": "Strengthening pseudo random number generator and adding tests to verify its behavior.\n\nCloses #5454 and #5578"
}
],
"files": [
{
"diff": "@@ -22,12 +22,11 @@\n import org.apache.lucene.search.Explanation;\n \n /**\n- *\n+ * Pseudo randomly generate a score for each {@link #score}.\n */\n public class RandomScoreFunction extends ScoreFunction {\n \n private final PRNG prng;\n- private int docBase;\n \n public RandomScoreFunction(long seed) {\n super(CombineFunction.MULT);\n@@ -36,12 +35,12 @@ public RandomScoreFunction(long seed) {\n \n @Override\n public void setNextReader(AtomicReaderContext context) {\n- this.docBase = context.docBase;\n+ // intentionally does nothing\n }\n \n @Override\n public double score(int docId, float subQueryScore) {\n- return prng.random(docBase + docId);\n+ return prng.nextFloat();\n }\n \n @Override\n@@ -70,22 +69,10 @@ static class PRNG {\n this.seed = (seed ^ multiplier) & mask;\n }\n \n- public float random(int doc) {\n- if (doc == 0) {\n- doc = 0xCAFEBAB;\n- }\n-\n- long rand = doc;\n- rand |= rand << 32;\n- rand ^= rand;\n- return nextFloat(rand);\n- }\n-\n- public float nextFloat(long rand) {\n+ public float nextFloat() {\n seed = (seed * multiplier + addend) & mask;\n- rand ^= seed;\n- double result = rand / (double)(1L << 54);\n- return (float) result;\n+\n+ return seed / (float)(1 << 24);\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,213 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.lucene.search.function;\n+\n+import com.carrotsearch.randomizedtesting.annotations.Repeat;\n+import com.google.common.collect.Lists;\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.TextField;\n+import org.apache.lucene.index.AtomicReader;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexWriter;\n+import org.apache.lucene.index.IndexWriterConfig;\n+import org.apache.lucene.index.SlowCompositeReaderWrapper;\n+import org.apache.lucene.search.Explanation;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.TopDocs;\n+import org.apache.lucene.store.RAMDirectory;\n+import org.elasticsearch.common.lucene.Lucene;\n+import org.elasticsearch.common.lucene.search.Queries;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.After;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.List;\n+\n+import static org.hamcrest.Matchers.*;\n+\n+/**\n+ * Test {@link RandomScoreFunction}\n+ */\n+public class RandomScoreFunctionTests extends ElasticsearchTestCase {\n+\n+ private final String[] ids = { \"1\", \"2\", \"3\" };\n+ private IndexWriter writer;\n+ private AtomicReader reader;\n+\n+ @After\n+ public void closeReaderAndWriterIfUsed() throws IOException {\n+ if (reader != null) {\n+ reader.close();\n+ }\n+\n+ if (writer != null) {\n+ writer.close();\n+ }\n+ }\n+\n+ /**\n+ * Given the same seed, the pseudo random number generator should match on\n+ * each use given the same number of invocations.\n+ */\n+ @Test\n+ @Repeat(iterations = 5)\n+ public void testPrngNextFloatIsConsistent() {\n+ long seed = randomLong();\n+\n+ RandomScoreFunction.PRNG prng = new RandomScoreFunction.PRNG(seed);\n+ RandomScoreFunction.PRNG prng2 = new RandomScoreFunction.PRNG(seed);\n+\n+ // The seed will be changing the entire time, so each value should be\n+ // different\n+ assertThat(prng.nextFloat(), equalTo(prng2.nextFloat()));\n+ assertThat(prng.nextFloat(), equalTo(prng2.nextFloat()));\n+ assertThat(prng.nextFloat(), equalTo(prng2.nextFloat()));\n+ assertThat(prng.nextFloat(), equalTo(prng2.nextFloat()));\n+ }\n+\n+ @Test\n+ public void testPrngNextFloatSometimesFirstIsGreaterThanSecond() {\n+ boolean firstWasGreater = false;\n+\n+ // Since the results themselves are intended to be random, we cannot\n+ // just do @Repeat(iterations = 100) because some iterations are\n+ // expected to fail\n+ for (int i = 0; i < 100; ++i) {\n+ long seed = randomLong();\n+\n+ RandomScoreFunction.PRNG prng = new RandomScoreFunction.PRNG(seed);\n+\n+ float firstRandom = 
prng.nextFloat();\n+ float secondRandom = prng.nextFloat();\n+\n+ if (firstRandom > secondRandom) {\n+ firstWasGreater = true;\n+ }\n+ }\n+\n+ assertTrue(\"First value was never greater than the second value\",\n+ firstWasGreater);\n+ }\n+\n+ @Test\n+ public void testPrngNextFloatSometimesFirstIsLessThanSecond() {\n+ boolean firstWasLess = false;\n+\n+ // Since the results themselves are intended to be random, we cannot\n+ // just do @Repeat(iterations = 100) because some iterations are\n+ // expected to fail\n+ for (int i = 0; i < 1000; ++i) {\n+ long seed = randomLong();\n+\n+ RandomScoreFunction.PRNG prng = new RandomScoreFunction.PRNG(seed);\n+\n+ float firstRandom = prng.nextFloat();\n+ float secondRandom = prng.nextFloat();\n+\n+ if (firstRandom < secondRandom) {\n+ firstWasLess = true;\n+ }\n+ }\n+\n+ assertTrue(\"First value was never less than the second value\",\n+ firstWasLess);\n+ }\n+\n+ @Test\n+ public void testScorerResultsInRandomOrder() throws IOException {\n+ List<String> idsNotSpotted = Lists.newArrayList(ids);\n+ IndexSearcher searcher = mockSearcher();\n+\n+ // Since the results themselves are intended to be random, we cannot\n+ // just do @Repeat(iterations = 100) because some iterations are\n+ // expected to fail\n+ for (int i = 0; i < 100; ++i) {\n+ // Randomly seeded to keep trying to shuffle without walking through\n+ // values\n+ RandomScoreFunction function =\n+ new RandomScoreFunction(randomLong());\n+ // fulfilling contract\n+ function.setNextReader(reader.getContext());\n+\n+ FunctionScoreQuery query =\n+ new FunctionScoreQuery(Queries.newMatchAllQuery(), function);\n+\n+ // Testing that we get a random result\n+ TopDocs docs = searcher.search(query, 1);\n+\n+ String id = reader.document(docs.scoreDocs[0].doc)\n+ .getField(\"_id\").stringValue();\n+\n+ if (idsNotSpotted.remove(id) && idsNotSpotted.isEmpty()) {\n+ // short circuit test because we succeeded\n+ break;\n+ }\n+ }\n+\n+ assertThat(idsNotSpotted, empty());\n+ }\n+\n+ @Test\n+ public void testExplainScoreReportsOriginalSeed() {\n+ long seed = randomLong();\n+ Explanation subExplanation = new Explanation();\n+\n+ RandomScoreFunction function = new RandomScoreFunction(seed);\n+ // Trigger a random call to change the seed to ensure that we are\n+ // reporting the _original_ seed\n+ function.score(0, 1.0f);\n+\n+ // Generate the randomScore explanation\n+ Explanation randomExplanation =\n+ function.explainScore(0, subExplanation);\n+\n+ // Original seed should be there\n+ assertThat(randomExplanation.getDescription(),\n+ containsString(\"\" + seed));\n+ assertThat(randomExplanation.getDetails(),\n+ arrayContaining(subExplanation));\n+ }\n+\n+ /**\n+ * Create a \"mock\" {@link IndexSearcher} that uses an in-memory directory\n+ * containing three documents whose IDs are \"1\", \"2\", and \"3\" respectively.\n+ * @return Never {@code null}\n+ * @throws IOException if an unexpected error occurs while mocking\n+ */\n+ private IndexSearcher mockSearcher() throws IOException {\n+ writer =\n+ new IndexWriter(new RAMDirectory(),\n+ new IndexWriterConfig(Lucene.VERSION,\n+ Lucene.STANDARD_ANALYZER));\n+\n+ for (String id : ids) {\n+ Document document = new Document();\n+ document.add(new TextField(\"_id\", id, Field.Store.YES));\n+ writer.addDocument(document);\n+ }\n+\n+ reader = SlowCompositeReaderWrapper.wrap(\n+ DirectoryReader.open(writer, true));\n+\n+ return new IndexSearcher(reader);\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunctionTests.java",
"status": "added"
}
]
} |
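The heart of the bug fixed in the row above is that any value XORed with itself is zero, so the doc-derived bits assembled before `nextFloat` never influenced the score. A tiny standalone demonstration of that, using the same variable names as the snippet quoted in the issue (the class and `main` method are illustrative):

```java
public final class XorBugDemo {
    public static void main(String[] args) {
        int doc = 42;             // any doc id
        long rand = doc;
        rand |= rand << 32;       // spread the doc id into the high bits
        rand ^= rand;             // x ^ x == 0: the doc id is discarded entirely
        System.out.println(rand); // prints 0 regardless of the doc id
    }
}
```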
{
"body": "https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java\n\nIn method random(int doc)\n\nThe intent of rand ^= rand isn't very clear, a possible bug.\n",
"comments": [],
"number": 5454,
"title": "Illogical xor operation in RandomScoreFunction"
} | {
"body": "Closes #5454 and #5578\n\nThis strengthens and simplifies the PNRG used by `random_score` by more closely mirroring the `Random.nextFloat()` method, rather than a mix of that, `nextInt` and `nextDouble`. The `docBase` or `docId` are no longer used as they were biasing the result (particularly if it was `0`, which consistently made it the highest scoring result in tests), which partially defeats the purpose of random scoring.\n",
"number": 5613,
"review_comments": [],
"title": "Fixing questionable PNRG behavior"
} | {
"commits": [
{
"message": "Strengthening pseudo random number generator and adding tests to verify its behavior.\n\nCloses #5454 and #5578"
}
],
"files": [
{
"diff": "@@ -22,12 +22,11 @@\n import org.apache.lucene.search.Explanation;\n \n /**\n- *\n+ * Pseudo randomly generate a score for each {@link #score}.\n */\n public class RandomScoreFunction extends ScoreFunction {\n \n private final PRNG prng;\n- private int docBase;\n \n public RandomScoreFunction(long seed) {\n super(CombineFunction.MULT);\n@@ -36,12 +35,12 @@ public RandomScoreFunction(long seed) {\n \n @Override\n public void setNextReader(AtomicReaderContext context) {\n- this.docBase = context.docBase;\n+ // intentionally does nothing\n }\n \n @Override\n public double score(int docId, float subQueryScore) {\n- return prng.random(docBase + docId);\n+ return prng.nextFloat();\n }\n \n @Override\n@@ -70,22 +69,10 @@ static class PRNG {\n this.seed = (seed ^ multiplier) & mask;\n }\n \n- public float random(int doc) {\n- if (doc == 0) {\n- doc = 0xCAFEBAB;\n- }\n-\n- long rand = doc;\n- rand |= rand << 32;\n- rand ^= rand;\n- return nextFloat(rand);\n- }\n-\n- public float nextFloat(long rand) {\n+ public float nextFloat() {\n seed = (seed * multiplier + addend) & mask;\n- rand ^= seed;\n- double result = rand / (double)(1L << 54);\n- return (float) result;\n+\n+ return seed / (float)(1 << 24);\n }\n \n }",
"filename": "src/main/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,213 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.lucene.search.function;\n+\n+import com.carrotsearch.randomizedtesting.annotations.Repeat;\n+import com.google.common.collect.Lists;\n+import org.apache.lucene.document.Document;\n+import org.apache.lucene.document.Field;\n+import org.apache.lucene.document.TextField;\n+import org.apache.lucene.index.AtomicReader;\n+import org.apache.lucene.index.DirectoryReader;\n+import org.apache.lucene.index.IndexWriter;\n+import org.apache.lucene.index.IndexWriterConfig;\n+import org.apache.lucene.index.SlowCompositeReaderWrapper;\n+import org.apache.lucene.search.Explanation;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.TopDocs;\n+import org.apache.lucene.store.RAMDirectory;\n+import org.elasticsearch.common.lucene.Lucene;\n+import org.elasticsearch.common.lucene.search.Queries;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.After;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.List;\n+\n+import static org.hamcrest.Matchers.*;\n+\n+/**\n+ * Test {@link RandomScoreFunction}\n+ */\n+public class RandomScoreFunctionTests extends ElasticsearchTestCase {\n+\n+ private final String[] ids = { \"1\", \"2\", \"3\" };\n+ private IndexWriter writer;\n+ private AtomicReader reader;\n+\n+ @After\n+ public void closeReaderAndWriterIfUsed() throws IOException {\n+ if (reader != null) {\n+ reader.close();\n+ }\n+\n+ if (writer != null) {\n+ writer.close();\n+ }\n+ }\n+\n+ /**\n+ * Given the same seed, the pseudo random number generator should match on\n+ * each use given the same number of invocations.\n+ */\n+ @Test\n+ @Repeat(iterations = 5)\n+ public void testPrngNextFloatIsConsistent() {\n+ long seed = randomLong();\n+\n+ RandomScoreFunction.PRNG prng = new RandomScoreFunction.PRNG(seed);\n+ RandomScoreFunction.PRNG prng2 = new RandomScoreFunction.PRNG(seed);\n+\n+ // The seed will be changing the entire time, so each value should be\n+ // different\n+ assertThat(prng.nextFloat(), equalTo(prng2.nextFloat()));\n+ assertThat(prng.nextFloat(), equalTo(prng2.nextFloat()));\n+ assertThat(prng.nextFloat(), equalTo(prng2.nextFloat()));\n+ assertThat(prng.nextFloat(), equalTo(prng2.nextFloat()));\n+ }\n+\n+ @Test\n+ public void testPrngNextFloatSometimesFirstIsGreaterThanSecond() {\n+ boolean firstWasGreater = false;\n+\n+ // Since the results themselves are intended to be random, we cannot\n+ // just do @Repeat(iterations = 100) because some iterations are\n+ // expected to fail\n+ for (int i = 0; i < 100; ++i) {\n+ long seed = randomLong();\n+\n+ RandomScoreFunction.PRNG prng = new RandomScoreFunction.PRNG(seed);\n+\n+ float firstRandom = 
prng.nextFloat();\n+ float secondRandom = prng.nextFloat();\n+\n+ if (firstRandom > secondRandom) {\n+ firstWasGreater = true;\n+ }\n+ }\n+\n+ assertTrue(\"First value was never greater than the second value\",\n+ firstWasGreater);\n+ }\n+\n+ @Test\n+ public void testPrngNextFloatSometimesFirstIsLessThanSecond() {\n+ boolean firstWasLess = false;\n+\n+ // Since the results themselves are intended to be random, we cannot\n+ // just do @Repeat(iterations = 100) because some iterations are\n+ // expected to fail\n+ for (int i = 0; i < 1000; ++i) {\n+ long seed = randomLong();\n+\n+ RandomScoreFunction.PRNG prng = new RandomScoreFunction.PRNG(seed);\n+\n+ float firstRandom = prng.nextFloat();\n+ float secondRandom = prng.nextFloat();\n+\n+ if (firstRandom < secondRandom) {\n+ firstWasLess = true;\n+ }\n+ }\n+\n+ assertTrue(\"First value was never less than the second value\",\n+ firstWasLess);\n+ }\n+\n+ @Test\n+ public void testScorerResultsInRandomOrder() throws IOException {\n+ List<String> idsNotSpotted = Lists.newArrayList(ids);\n+ IndexSearcher searcher = mockSearcher();\n+\n+ // Since the results themselves are intended to be random, we cannot\n+ // just do @Repeat(iterations = 100) because some iterations are\n+ // expected to fail\n+ for (int i = 0; i < 100; ++i) {\n+ // Randomly seeded to keep trying to shuffle without walking through\n+ // values\n+ RandomScoreFunction function =\n+ new RandomScoreFunction(randomLong());\n+ // fulfilling contract\n+ function.setNextReader(reader.getContext());\n+\n+ FunctionScoreQuery query =\n+ new FunctionScoreQuery(Queries.newMatchAllQuery(), function);\n+\n+ // Testing that we get a random result\n+ TopDocs docs = searcher.search(query, 1);\n+\n+ String id = reader.document(docs.scoreDocs[0].doc)\n+ .getField(\"_id\").stringValue();\n+\n+ if (idsNotSpotted.remove(id) && idsNotSpotted.isEmpty()) {\n+ // short circuit test because we succeeded\n+ break;\n+ }\n+ }\n+\n+ assertThat(idsNotSpotted, empty());\n+ }\n+\n+ @Test\n+ public void testExplainScoreReportsOriginalSeed() {\n+ long seed = randomLong();\n+ Explanation subExplanation = new Explanation();\n+\n+ RandomScoreFunction function = new RandomScoreFunction(seed);\n+ // Trigger a random call to change the seed to ensure that we are\n+ // reporting the _original_ seed\n+ function.score(0, 1.0f);\n+\n+ // Generate the randomScore explanation\n+ Explanation randomExplanation =\n+ function.explainScore(0, subExplanation);\n+\n+ // Original seed should be there\n+ assertThat(randomExplanation.getDescription(),\n+ containsString(\"\" + seed));\n+ assertThat(randomExplanation.getDetails(),\n+ arrayContaining(subExplanation));\n+ }\n+\n+ /**\n+ * Create a \"mock\" {@link IndexSearcher} that uses an in-memory directory\n+ * containing three documents whose IDs are \"1\", \"2\", and \"3\" respectively.\n+ * @return Never {@code null}\n+ * @throws IOException if an unexpected error occurs while mocking\n+ */\n+ private IndexSearcher mockSearcher() throws IOException {\n+ writer =\n+ new IndexWriter(new RAMDirectory(),\n+ new IndexWriterConfig(Lucene.VERSION,\n+ Lucene.STANDARD_ANALYZER));\n+\n+ for (String id : ids) {\n+ Document document = new Document();\n+ document.add(new TextField(\"_id\", id, Field.Store.YES));\n+ writer.addDocument(document);\n+ }\n+\n+ reader = SlowCompositeReaderWrapper.wrap(\n+ DirectoryReader.open(writer, true));\n+\n+ return new IndexSearcher(reader);\n+ }\n+}",
"filename": "src/test/java/org/elasticsearch/common/lucene/search/function/RandomScoreFunctionTests.java",
"status": "added"
}
]
} |
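This row covers the same PR as the previous one, and the new `RandomScoreFunctionTests` it adds check, among other things, that two generators seeded identically produce the same sequence. A standalone sketch of that property using `java.util.Random` as a stand-in for `RandomScoreFunction.PRNG` (per the PR description the fixed generator mirrors `Random.nextFloat()`, but the substitution here is an assumption for illustration):

```java
import java.util.Random;

// Same seed, same sequence: the determinism property asserted by the added
// testPrngNextFloatIsConsistent, shown with java.util.Random as a stand-in.
public final class SameSeedSameSequence {
    public static void main(String[] args) {
        long seed = 123456789L;
        Random a = new Random(seed);
        Random b = new Random(seed);
        for (int i = 0; i < 4; i++) {
            float x = a.nextFloat();
            float y = b.nextFloat();
            if (x != y) {
                throw new AssertionError("sequences diverged at step " + i);
            }
            System.out.println(x + " == " + y);
        }
    }
}
```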
{
"body": "All GET-with-body endpoints should support passing the body as the `source` parameter in the query string. `search_template` does not currently support this\n",
"comments": [
{
"body": "Fails with:\n\n```\n[2014-03-28 16:18:29,962][DEBUG][action.search.type ] [Ape-Man] [test][4], node[FKDjNLD9RFq2-Mm6CgTADw], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@52514fbf] lastShard [true]\norg.elasticsearch.transport.RemoteTransportException: [Ecstasy][inet[/192.168.5.102:9301]][search/phase/query]\nCaused by: org.elasticsearch.search.SearchParseException: [test][4]: from[-1],size[-1]: Parse Failure [Failed to parse source [{\"params\":{\"template\":\"all\"},\"template\":{\"query\":{\"match_{{template}}\":{}}}}]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:634)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:507)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:480)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:252)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:722)\nCaused by: org.elasticsearch.search.SearchParseException: [test][4]: from[-1],size[-1]: Parse Failure [No parser for element [params]]\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:620)\n ... 9 more\n```\n",
"created_at": "2014-03-28T15:21:08Z"
}
],
"number": 5556,
"title": "search_template does not support ?source="
} | {
"body": "If a search template was created using the source parameter, the\ncontent of the parameter was put as source instead of sourceTemplate\n\nFixes #5556\n",
"number": 5598,
"review_comments": [
{
"body": "May be we should change `String source` to `String templateSource`?\n\nJust \"cosmetic\" change though... :)\n",
"created_at": "2014-03-28T17:01:14Z"
}
],
"title": "Search template: Put source param into template variable"
} | {
"commits": [
{
"message": "Search template: Fix support for source parameter\n\nIf a search template was created using the source parameter, the\ncontent of the parameter was put as source instead of sourceTemplate\n\nFixes #5556"
}
],
"files": [
{
"diff": "@@ -422,9 +422,9 @@ public SearchRequest templateSource(BytesReference template, boolean unsafe) {\n /**\n * The template of the search request.\n */\n- public SearchRequest templateSource(String source) {\n- this.source = new BytesArray(source);\n- this.sourceUnsafe = false;\n+ public SearchRequest templateSource(String template) {\n+ this.templateSource = new BytesArray(template);\n+ this.templateSourceUnsafe = false;\n return this;\n }\n ",
"filename": "src/main/java/org/elasticsearch/action/search/SearchRequest.java",
"status": "modified"
}
]
} |
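The fix in the row above is a two-field change that is easy to miss: the `String` overload of `templateSource` was writing to the plain `source`/`sourceUnsafe` fields, so a template passed via `?source=` got parsed as a regular search body (hence the `No parser for element [params]` failure quoted in the issue). A stripped-down sketch of the corrected setter; the surrounding class and the `BytesArray` stand-in below are illustrative, not the actual Elasticsearch classes:

```java
// Illustrative stand-in for the real BytesReference/BytesArray used by SearchRequest.
final class BytesArray {
    private final String value;
    BytesArray(String value) { this.value = value; }
    @Override public String toString() { return value; }
}

// Minimal sketch of the relevant slice of SearchRequest after the fix.
final class SearchRequestSketch {
    BytesArray source;            // regular search body
    boolean sourceUnsafe;
    BytesArray templateSource;    // template body, used by the template endpoints
    boolean templateSourceUnsafe;

    // Before the fix this overload assigned to source/sourceUnsafe, so the template
    // JSON was parsed as a normal search body; now it targets the template fields,
    // matching the BytesReference overload shown in the diff.
    SearchRequestSketch templateSource(String template) {
        this.templateSource = new BytesArray(template);
        this.templateSourceUnsafe = false;
        return this;
    }
}
```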
{
"body": "The context suggester seems to have a couple of bugs in the geo impl - when working according to the docs I could not create working suggestions.\n\nThis example does not return any suggestions (but it should return the one in Amsterdam)\n\n``` JSON\nDELETE venues\nPUT venues\n\nGET venues/_mapping\nPUT venues/poi/_mapping\n{\n \"poi\" : {\n \"properties\" : {\n \"suggest_field\": {\n \"type\": \"completion\",\n \"context\": {\n \"loc\": { \n \"type\": \"geo\",\n \"precision\" : \"10km\"\n }\n }\n }\n }\n }\n}\n\nPUT venues/poi/1\n{\n \"suggest_field\": {\n \"input\": [\"Hotel Amsterdam\" ],\n \"context\": {\n \"loc\": {\n \"lat\": 52.529172, \n \"lon\": 13.407333\n }\n }\n }\n}\n\nGET venues/poi/2\nPUT venues/poi/2\n{\n \"suggest_field\": {\n \"input\": [\"Hotel Berlin in AMS\" ],\n \"context\": {\n \"loc\": {\n \"lat\": 52.363389, \n \"lon\": 4.888695\n }\n }\n }\n}\n\nGET /venues/_search\n\nGET /venues/_suggest\n{\n \"suggest\" : {\n \"text\" : \"h\",\n \"completion\" : {\n \"field\" : \"suggest_field\",\n \"context\": {\n \"loc\": {\n \"lat\": 52.36,\n \"lon\": 4.88\n }\n }\n }\n }\n}\n\n```\n\nThis returns a parsing exception (but is mentioned in the docs like that)\n\n``` JSON\nDELETE venues\nPUT venues\n\nGET venues/_mapping\nPUT venues/poi/_mapping\n{\n \"poi\" : {\n \"properties\" : {\n \"suggest_field\": {\n \"type\": \"completion\",\n \"context\": {\n \"location\": {\n \"type\": \"geo\",\n \"precision\": [\"1km\", \"5m\"],\n \"neighbors\": true,\n \"path\": \"pin\",\n \"default\": {\n \"lat\": 0.0,\n \"lon\": 0.0\n }\n }\n }\n }\n }\n }\n}\n```\n\nThis is also directly from the docs, using the `value` field, results in a parse exception\n\n``` JSON\nDELETE venues\nPUT venues\n\nGET venues/_mapping\nPUT venues/poi/_mapping\n{\n \"poi\" : {\n \"properties\" : {\n \"suggest_field\": {\n \"type\": \"completion\",\n \"context\": {\n \"loc\": { \n \"type\": \"geo\",\n \"precision\" : \"10km\"\n }\n }\n }\n }\n }\n}\n\nGET /venues/_suggest\n{\n \"suggest\" : {\n \"text\" : \"m\",\n \"completion\" : {\n\n \"field\" : \"suggest_field\",\n \"size\": 10,\n \"context\": {\n \"location\": {\n \"value\": {\n\n \"lat\": 0,\n \"lon\": 0\n },\n \"precision\": \"1km\"\n }\n }\n }\n\n }\n}\n```\n\nAlso the category suggester seems to have an issue, when you specify a path in the mapping for the context, but that path is not set on document indexing\n\n``` JSON\nDELETE services\nPUT services\nPUT services/service/_mapping\n{\n \"service\": {\n \"properties\": {\n \"suggest_field\": {\n \"type\": \"completion\",\n \"context\": {\n \"color\": { \n \"type\": \"category\",\n \"path\": \"color_field\"\n }\n }\n }\n }\n }\n}\n\nGET services/service/_mapping\n\nPUT services/service/1\n{\n \"suggest_field\": {\n \"input\": [\"knacksack\", \"backpack\", \"daypack\"]\n }\n}\n```\n\nThis was tested on the 1.x branch and master\n",
"comments": [],
"number": 5525,
"title": "Context Suggester Issues/Exceptions"
} | {
"body": "A bunch of minor fixes have been included here, especially due\nto wrongly parsed mappings. Also using assertions resulted in an\nNPE because they were disabled in the distribution.\n\nCloses #5525\n\nAlso, there is one last thing, which needs discussing: Currently the default precision is so accurate, that a suggestions will fail, you index the lat/lon `52.363389/4.888695` but search with `52.3633, 4.8886` - I think this is confusing. We should either force people to set a `precision` in the mapping or have less high accuracy.\n",
"number": 5596,
"review_comments": [
{
"body": "can this be ElasticsearchIllegalArguementExcpetion?\n",
"created_at": "2014-03-31T12:21:38Z"
},
{
"body": "this refers to the problems which the precision? \n",
"created_at": "2014-03-31T12:23:59Z"
},
{
"body": "fixed\n",
"created_at": "2014-03-31T12:56:35Z"
},
{
"body": "exactly, we need to discuss, if the current default precision makes sense from a usability point of view\n",
"created_at": "2014-03-31T12:57:00Z"
},
{
"body": "ok lets open an issue for it and get this one in...\n",
"created_at": "2014-03-31T13:18:31Z"
}
],
"title": "ContextSuggester: Adding couple of tests to catch more bugs"
} | {
"commits": [
{
"message": "ContextSuggester: Adding couple of tests to catch more bugs\n\nA bunch of minor fixes have been included here, especially due\nto wrongly parsed mappings. Also using assertions resulted in an\nNPE because they were disabled in the distribution.\n\nCloses #5525"
}
],
"files": [
{
"diff": "@@ -0,0 +1,219 @@\n+# This test creates one huge mapping in the setup\n+# Every test should use its own field to make sure it works\n+\n+setup:\n+\n+ - do:\n+ indices.create:\n+ index: test\n+ body:\n+ mappings:\n+ test:\n+ \"properties\":\n+ \"suggest_context\":\n+ \"type\" : \"completion\"\n+ \"context\":\n+ \"color\":\n+ \"type\" : \"category\"\n+ \"suggest_context_default_hardcoded\":\n+ \"type\" : \"completion\"\n+ \"context\":\n+ \"color\":\n+ \"type\" : \"category\"\n+ \"default\" : \"red\"\n+ \"suggest_context_default_path\":\n+ \"type\" : \"completion\"\n+ \"context\":\n+ \"color\":\n+ \"type\" : \"category\"\n+ \"default\" : \"red\"\n+ \"path\" : \"color\"\n+ \"suggest_geo\":\n+ \"type\" : \"completion\"\n+ \"context\":\n+ \"location\":\n+ \"type\" : \"geo\"\n+\n+---\n+\"Simple context suggestion should work\":\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 1\n+ body:\n+ suggest_context:\n+ input: \"Hoodie red\"\n+ context:\n+ color: \"red\"\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 2\n+ body:\n+ suggest_context:\n+ input: \"Hoodie blue\"\n+ context:\n+ color: \"blue\"\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ suggest:\n+ body:\n+ result:\n+ text: \"hoo\"\n+ completion:\n+ field: suggest_context\n+ context:\n+ color: \"red\"\n+\n+ - match: {result.0.options.0.text: \"Hoodie red\" }\n+\n+---\n+\"Hardcoded category value should work\":\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 1\n+ body:\n+ suggest_context_default_hardcoded:\n+ input: \"Hoodie red\"\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 2\n+ body:\n+ suggest_context_default_hardcoded:\n+ input: \"Hoodie blue\"\n+ context:\n+ color: \"blue\"\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ suggest:\n+ body:\n+ result:\n+ text: \"hoo\"\n+ completion:\n+ field: suggest_context_default_hardcoded\n+ context:\n+ color: \"red\"\n+\n+ - length: { result: 1 }\n+ - length: { result.0.options: 1 }\n+ - match: { result.0.options.0.text: \"Hoodie red\" }\n+\n+\n+---\n+\"Category suggest context default path should work\":\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 1\n+ body:\n+ suggest_context_default_path:\n+ input: \"Hoodie red\"\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 2\n+ body:\n+ suggest_context_default_path:\n+ input: \"Hoodie blue\"\n+ color: \"blue\"\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ suggest:\n+ body:\n+ result:\n+ text: \"hoo\"\n+ completion:\n+ field: suggest_context_default_path\n+ context:\n+ color: \"red\"\n+\n+ - length: { result: 1 }\n+ - length: { result.0.options: 1 }\n+ - match: { result.0.options.0.text: \"Hoodie red\" }\n+\n+ - do:\n+ suggest:\n+ body:\n+ result:\n+ text: \"hoo\"\n+ completion:\n+ field: suggest_context_default_path\n+ context:\n+ color: \"blue\"\n+\n+ - length: { result: 1 }\n+ - length: { result.0.options: 1 }\n+ - match: { result.0.options.0.text: \"Hoodie blue\" }\n+\n+\n+---\n+\"Geo suggest should work\":\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 1\n+ body:\n+ suggest_geo:\n+ input: \"Hotel Marriot in Amsterdam\"\n+ context:\n+ location:\n+ lat : 52.22\n+ lon : 4.53\n+\n+ - do:\n+ index:\n+ index: test\n+ type: test\n+ id: 2\n+ body:\n+ suggest_geo:\n+ input: \"Hotel Marriot in Berlin\"\n+ context:\n+ location:\n+ lat : 53.31\n+ lon : 13.24\n+\n+ - do:\n+ indices.refresh: {}\n+\n+ - do:\n+ suggest:\n+ body:\n+ result:\n+ text: \"hote\"\n+ completion:\n+ field: suggest_geo\n+ context:\n+ location:\n+ lat : 52.22\n+ lon : 4.53\n+\n+ - length: { result: 1 }\n+ - 
length: { result.0.options: 1 }\n+ - match: { result.0.options.0.text: \"Hotel Marriot in Amsterdam\" }\n+",
"filename": "rest-api-spec/test/suggest/20_context.yaml",
"status": "added"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;\n import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n \n import java.io.IOException;\n import java.io.Reader;\n@@ -96,7 +97,9 @@ public PrefixTokenFilter(TokenStream input, char separator, Iterable<? extends C\n this.prefixes = prefixes;\n this.currentPrefix = null;\n this.separator = separator;\n- assert (prefixes != null && prefixes.iterator().hasNext()) : \"one or more prefix needed\";\n+ if (prefixes == null || !prefixes.iterator().hasNext()) {\n+ throw new ElasticsearchIllegalArgumentException(\"one or more prefixes needed\");\n+ }\n }\n \n @Override",
"filename": "src/main/java/org/apache/lucene/analysis/PrefixAnalyzer.java",
"status": "modified"
},
{
"diff": "@@ -33,7 +33,7 @@\n \n /**\n * Defines how to perform suggesting. This builders allows a number of global options to be specified and\n- * an arbitrary number of {@link org.elasticsearch.search.suggest.SuggestBuilder.TermSuggestionBuilder} instances.\n+ * an arbitrary number of {@link org.elasticsearch.search.suggest.term.TermSuggestionBuilder} instances.\n * <p/>\n * Suggesting works by suggesting terms that appear in the suggest text that are similar compared to the terms in\n * provided text. These spelling suggestions are based on several options described in this class.\n@@ -66,7 +66,7 @@ public SuggestBuilder setText(String globalText) {\n }\n \n /**\n- * Adds an {@link org.elasticsearch.search.suggest.SuggestBuilder.TermSuggestionBuilder} instance under a user defined name.\n+ * Adds an {@link org.elasticsearch.search.suggest.term.TermSuggestionBuilder} instance under a user defined name.\n * The order in which the <code>Suggestions</code> are added, is the same as in the response.\n */\n public SuggestBuilder addSuggestion(SuggestionBuilder<?> suggestion) {\n@@ -141,17 +141,28 @@ private T addContextQuery(ContextQuery ctx) {\n }\n \n /**\n- * Setup a Geolocation for suggestions. See {@link GeoContextMapping}.\n+ * Setup a Geolocation for suggestions. See {@link GeolocationContextMapping}.\n * @param lat Latitude of the location\n * @param lon Longitude of the Location\n * @return this\n */\n- public T addGeoLocation(String name, double lat, double lon) {\n- return addContextQuery(GeolocationContextMapping.query(name, lat, lon));\n+ public T addGeoLocation(String name, double lat, double lon, int ... precisions) {\n+ return addContextQuery(GeolocationContextMapping.query(name, lat, lon, precisions));\n }\n \n /**\n- * Setup a Geolocation for suggestions. See {@link GeoContextMapping}.\n+ * Setup a Geolocation for suggestions. See {@link GeolocationContextMapping}.\n+ * @param lat Latitude of the location\n+ * @param lon Longitude of the Location\n+ * @param precisions precisions as string var-args\n+ * @return this\n+ */\n+ public T addGeoLocationWithPrecision(String name, double lat, double lon, String ... precisions) {\n+ return addContextQuery(GeolocationContextMapping.query(name, lat, lon, precisions));\n+ }\n+\n+ /**\n+ * Setup a Geolocation for suggestions. See {@link GeolocationContextMapping}.\n * @param geohash Geohash of the location\n * @return this\n */\n@@ -160,17 +171,17 @@ public T addGeoLocation(String name, String geohash) {\n }\n \n /**\n- * Setup a Category for suggestions. See {@link CategoryMapping}.\n- * @param category name of the category\n+ * Setup a Category for suggestions. See {@link CategoryContextMapping}.\n+ * @param categories name of the category\n * @return this\n */\n public T addCategory(String name, CharSequence...categories) {\n return addContextQuery(CategoryContextMapping.query(name, categories));\n }\n \n /**\n- * Setup a Category for suggestions. See {@link CategoryMapping}.\n- * @param category name of the category\n+ * Setup a Category for suggestions. See {@link CategoryContextMapping}.\n+ * @param categories name of the category\n * @return this\n */\n public T addCategory(String name, Iterable<? extends CharSequence> categories) {\n@@ -179,7 +190,7 @@ public T addCategory(String name, Iterable<? extends CharSequence> categories) {\n \n /**\n * Setup a Context Field for suggestions. 
See {@link CategoryContextMapping}.\n- * @param category name of the category\n+ * @param fieldvalues name of the category\n * @return this\n */\n public T addContextField(String name, CharSequence...fieldvalues) {\n@@ -188,7 +199,7 @@ public T addContextField(String name, CharSequence...fieldvalues) {\n \n /**\n * Setup a Context Field for suggestions. See {@link CategoryContextMapping}.\n- * @param category name of the category\n+ * @param fieldvalues name of the category\n * @return this\n */\n public T addContextField(String name, Iterable<? extends CharSequence> fieldvalues) {\n@@ -242,7 +253,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n /**\n * Sets from what field to fetch the candidate suggestions from. This is an\n * required option and needs to be set via this setter or\n- * {@link org.elasticsearch.search.suggest.SuggestBuilder.TermSuggestionBuilder#setField(String)}\n+ * {@link org.elasticsearch.search.suggest.term.TermSuggestionBuilder#field(String)}\n * method\n */\n @SuppressWarnings(\"unchecked\")",
"filename": "src/main/java/org/elasticsearch/search/suggest/SuggestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.suggest.context;\n \n+import com.google.common.base.Joiner;\n import com.google.common.collect.Iterables;\n import com.google.common.collect.Lists;\n import org.apache.lucene.analysis.PrefixAnalyzer;\n@@ -235,29 +236,30 @@ public FieldConfig(String fieldname, Iterable<? extends CharSequence> defaultVal\n \n @Override\n protected TokenStream wrapTokenStream(Document doc, TokenStream stream) {\n- if(values != null) {\n+ if (values != null) {\n return new PrefixAnalyzer.PrefixTokenFilter(stream, ContextMapping.SEPARATOR, values);\n+ // if fieldname is default, BUT our default values are set, we take that one\n+ } else if ((doc.getFields(fieldname).length == 0 || fieldname.equals(DEFAULT_FIELDNAME)) && defaultValues.iterator().hasNext()) {\n+ return new PrefixAnalyzer.PrefixTokenFilter(stream, ContextMapping.SEPARATOR, defaultValues);\n } else {\n IndexableField[] fields = doc.getFields(fieldname);\n ArrayList<CharSequence> values = new ArrayList<>(fields.length);\n- \n for (int i = 0; i < fields.length; i++) {\n values.add(fields[i].stringValue());\n }\n- \n+\n return new PrefixAnalyzer.PrefixTokenFilter(stream, ContextMapping.SEPARATOR, values);\n }\n }\n \n @Override\n public String toString() {\n StringBuilder sb = new StringBuilder(\"FieldConfig(\" + fieldname + \" = [\");\n- Iterator<? extends CharSequence> value = this.defaultValues.iterator();\n- if (value.hasNext()) {\n- sb.append(value.next());\n- while (value.hasNext()) {\n- sb.append(\", \").append(value.next());\n- }\n+ if (this.values != null && this.values.iterator().hasNext()) {\n+ sb.append(\"(\").append(Joiner.on(\", \").join(this.values.iterator())).append(\")\");\n+ }\n+ if (this.defaultValues != null && this.defaultValues.iterator().hasNext()) {\n+ sb.append(\" default(\").append(Joiner.on(\", \").join(this.defaultValues.iterator())).append(\")\");\n }\n return sb.append(\"])\").toString();\n }",
"filename": "src/main/java/org/elasticsearch/search/suggest/context/CategoryContextMapping.java",
"status": "modified"
},
{
"diff": "@@ -83,7 +83,7 @@ public class GeolocationContextMapping extends ContextMapping {\n * length of the geohashes\n * @param neighbors\n * should neighbors be indexed\n- * @param defaultLocation\n+ * @param defaultLocations\n * location to use, if it is not provided by the document\n */\n protected GeolocationContextMapping(String name, int[] precision, boolean neighbors, Collection<String> defaultLocations, String fieldName) {\n@@ -158,7 +158,16 @@ protected static GeolocationContextMapping load(String name, Map<String, Object>\n builder.addDefaultLocation(location.toString()); \n }\n } else if (def instanceof String) {\n- builder.addDefaultLocation(def.toString()); \n+ builder.addDefaultLocation(def.toString());\n+ } else if (def instanceof Map) {\n+ Map<String, Object> latlonMap = (Map<String, Object>) def;\n+ if (!latlonMap.containsKey(\"lat\") || !(latlonMap.get(\"lat\") instanceof Double)) {\n+ throw new ElasticsearchParseException(\"field [\" + FIELD_MISSING + \"] map must have field lat and a valid latitude\");\n+ }\n+ if (!latlonMap.containsKey(\"lon\") || !(latlonMap.get(\"lon\") instanceof Double)) {\n+ throw new ElasticsearchParseException(\"field [\" + FIELD_MISSING + \"] map must have field lon and a valid longitude\");\n+ }\n+ builder.addDefaultLocation(Double.valueOf(latlonMap.get(\"lat\").toString()), Double.valueOf(latlonMap.get(\"lon\").toString()));\n } else {\n throw new ElasticsearchParseException(\"field [\" + FIELD_MISSING + \"] must be of type string or list\");\n }\n@@ -264,8 +273,16 @@ public static GeoQuery query(String name, GeoPoint point) {\n * longitude of the location\n * @return new geolocation query\n */\n- public static GeoQuery query(String name, double lat, double lon) {\n- return query(name, GeoHashUtils.encode(lat, lon));\n+ public static GeoQuery query(String name, double lat, double lon, int ... precisions) {\n+ return query(name, GeoHashUtils.encode(lat, lon), precisions);\n+ }\n+\n+ public static GeoQuery query(String name, double lat, double lon, String ... precisions) {\n+ int precisionInts[] = new int[precisions.length];\n+ for (int i = 0 ; i < precisions.length; i++) {\n+ precisionInts[i] = GeoUtils.geoHashLevelsForPrecision(precisions[i]);\n+ }\n+ return query(name, GeoHashUtils.encode(lat, lon), precisionInts);\n }\n \n /**\n@@ -275,8 +292,8 @@ public static GeoQuery query(String name, double lat, double lon) {\n * geohash of the location\n * @return new geolocation query\n */\n- public static GeoQuery query(String name, String geohash) {\n- return new GeoQuery(name, geohash);\n+ public static GeoQuery query(String name, String geohash, int ... 
precisions) {\n+ return new GeoQuery(name, geohash, precisions);\n }\n \n private static final int parsePrecision(XContentParser parser) throws IOException, ElasticsearchParseException {\n@@ -338,6 +355,7 @@ public GeoQuery parseQuery(String name, XContentParser parser) throws IOExceptio\n }\n } else if (FIELD_VALUE.equals(fieldName)) {\n if(Double.isNaN(lon) && Double.isNaN(lat)) {\n+ parser.nextToken();\n point = GeoUtils.parseGeoPoint(parser);\n } else {\n throw new ElasticsearchParseException(\"only lat/lon or [\" + FIELD_VALUE + \"] is allowed\");\n@@ -450,7 +468,7 @@ public Builder precision(double precision, DistanceUnit unit) {\n /**\n * Set the precision use o make suggestions\n * \n- * @param precision\n+ * @param meters\n * precision as distance in meters\n * @return this\n */\n@@ -466,7 +484,7 @@ public Builder precision(double meters) {\n /**\n * Set the precision use o make suggestions\n * \n- * @param precision\n+ * @param level\n * maximum length of geohashes\n * @return this\n */\n@@ -504,7 +522,7 @@ public Builder addDefaultLocation(String geohash) {\n * Set a default location that should be used, if no location is\n * provided by the query\n * \n- * @param geohash\n+ * @param geohashes\n * geohash of the default location\n * @return this\n */\n@@ -578,15 +596,28 @@ protected TokenStream wrapTokenStream(Document doc, TokenStream stream) {\n if (locations == null || locations.size() == 0) {\n if(mapping.fieldName != null) {\n IndexableField[] fields = doc.getFields(mapping.fieldName);\n- if(fields.length > 0) {\n+ if(fields.length == 0) {\n+ IndexableField[] lonFields = doc.getFields(mapping.fieldName + \".lon\");\n+ IndexableField[] latFields = doc.getFields(mapping.fieldName + \".lat\");\n+ if (lonFields.length > 0 && latFields.length > 0) {\n+ geohashes = new ArrayList<>(fields.length);\n+ GeoPoint spare = new GeoPoint();\n+ for (int i = 0 ; i < lonFields.length ; i++) {\n+ IndexableField lonField = lonFields[i];\n+ IndexableField latField = latFields[i];\n+ spare.reset(latField.numericValue().doubleValue(), lonField.numericValue().doubleValue());\n+ geohashes.add(spare.geohash());\n+ }\n+ } else {\n+ geohashes = mapping.defaultLocations;\n+ }\n+ } else {\n geohashes = new ArrayList<>(fields.length);\n GeoPoint spare = new GeoPoint();\n for (IndexableField field : fields) {\n spare.resetFromString(field.stringValue());\n geohashes.add(spare.geohash());\n }\n- } else {\n- geohashes = mapping.defaultLocations;\n }\n } else {\n geohashes = mapping.defaultLocations;",
"filename": "src/main/java/org/elasticsearch/search/suggest/context/GeolocationContextMapping.java",
"status": "modified"
},
{
"diff": "@@ -18,11 +18,14 @@\n */\n package org.elasticsearch.search.suggest;\n \n-import com.carrotsearch.randomizedtesting.generators.RandomStrings;\n import com.google.common.collect.Sets;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;\n+import org.elasticsearch.action.suggest.SuggestRequest;\n import org.elasticsearch.action.suggest.SuggestRequestBuilder;\n import org.elasticsearch.action.suggest.SuggestResponse;\n+import org.elasticsearch.common.geo.GeoHashUtils;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.unit.Fuzziness;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.suggest.Suggest.Suggestion;\n@@ -35,14 +38,16 @@\n import org.elasticsearch.search.suggest.context.ContextMapping;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.hamcrest.Matchers;\n+import org.junit.Ignore;\n import org.junit.Test;\n \n import java.io.IOException;\n import java.util.*;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.elasticsearch.test.hamcrest.ElasticsearchGeoAssertions.assertDistance;\n+import static org.hamcrest.Matchers.containsString;\n \n public class ContextSuggestSearchTests extends ElasticsearchIntegrationTest {\n \n@@ -96,7 +101,7 @@ public void testBasicGeo() throws Exception {\n \n client().admin().indices().prepareRefresh(INDEX).get();\n \n- String suggestionName = RandomStrings.randomAsciiOfLength(new Random(), 10);\n+ String suggestionName = randomAsciiOfLength(10);\n CompletionSuggestionBuilder context = new CompletionSuggestionBuilder(suggestionName).field(FIELD).text(\"h\").size(10)\n .addGeoLocation(\"st\", 52.52, 13.4);\n \n@@ -106,7 +111,7 @@ public void testBasicGeo() throws Exception {\n assertEquals(suggestResponse.getSuggest().size(), 1);\n assertEquals(\"Hotel Amsterdam in Berlin\", suggestResponse.getSuggest().getSuggestion(suggestionName).iterator().next().getOptions().iterator().next().getText().string());\n }\n- \n+\n @Test\n public void testGeoField() throws Exception {\n \n@@ -158,7 +163,7 @@ public void testGeoField() throws Exception {\n \n refresh();\n \n- String suggestionName = RandomStrings.randomAsciiOfLength(new Random(), 10);\n+ String suggestionName = randomAsciiOfLength(10);\n CompletionSuggestionBuilder context = new CompletionSuggestionBuilder(suggestionName).field(FIELD).text(\"h\").size(10)\n .addGeoLocation(\"st\", 52.52, 13.4);\n SuggestRequestBuilder suggestionRequest = client().prepareSuggest(INDEX).addSuggestion(context);\n@@ -466,9 +471,225 @@ public void testSimpleType() throws Exception {\n assertFieldSuggestions(types[2], \"w\", \"Whitemane, Kofi\");\n }\n \n- public void assertGeoSuggestionsInRange(String location, String suggest, double precision) throws IOException {\n+ @Test // issue 5525, default location didnt work with lat/lon map, and did not set default location appropriately\n+ public void testGeoContextDefaultMapping() throws Exception {\n+ GeoPoint berlinAlexanderplatz = GeoHashUtils.decode(\"u33dc1\");\n+\n+ XContentBuilder xContentBuilder = jsonBuilder().startObject()\n+ .startObject(\"poi\").startObject(\"properties\").startObject(\"suggest\")\n+ .field(\"type\", \"completion\")\n+ 
.startObject(\"context\").startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .startObject(\"default\").field(\"lat\", berlinAlexanderplatz.lat()).field(\"lon\", berlinAlexanderplatz.lon()).endObject()\n+ .endObject().endObject()\n+ .endObject().endObject().endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(\"poi\", xContentBuilder));\n+ ensureYellow();\n+\n+ index(INDEX, \"poi\", \"1\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Berlin Alexanderplatz\").endObject().endObject());\n+ refresh();\n+\n+ CompletionSuggestionBuilder suggestionBuilder = new CompletionSuggestionBuilder(\"suggestion\").field(\"suggest\").text(\"b\").size(10).addGeoLocation(\"location\", berlinAlexanderplatz.lat(), berlinAlexanderplatz.lon());\n+ SuggestResponse suggestResponse = client().prepareSuggest(INDEX).addSuggestion(suggestionBuilder).get();\n+ assertSuggestion(suggestResponse.getSuggest(), 0, \"suggestion\", \"Berlin Alexanderplatz\");\n+ }\n+\n+ @Test // issue 5525, setting the path of a category context and then indexing a document without that field returned an error\n+ public void testThatMissingPrefixesForContextReturnException() throws Exception {\n+ XContentBuilder xContentBuilder = jsonBuilder().startObject()\n+ .startObject(\"service\").startObject(\"properties\").startObject(\"suggest\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\").startObject(\"color\")\n+ .field(\"type\", \"category\")\n+ .field(\"path\", \"color\")\n+ .endObject().endObject()\n+ .endObject().endObject().endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(\"service\", xContentBuilder));\n+ ensureYellow();\n+\n+ // now index a document with color field\n+ index(INDEX, \"service\", \"1\", jsonBuilder().startObject().field(\"color\", \"red\").startObject(\"suggest\").field(\"input\", \"backback\").endObject().endObject());\n+\n+ // now index a document without a color field\n+ try {\n+ index(INDEX, \"service\", \"2\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"backback\").endObject().endObject());\n+ fail(\"index operation was not supposed to be succesful\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(e.getMessage(), containsString(\"one or more prefixes needed\"));\n+ }\n+ }\n+\n+ @Test // issue 5525, the geo point parser did not work when the lat/lon values were inside of a value object\n+ public void testThatLocationVenueCanBeParsedAsDocumented() throws Exception {\n+ XContentBuilder xContentBuilder = jsonBuilder().startObject()\n+ .startObject(\"poi\").startObject(\"properties\").startObject(\"suggest\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\").startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .endObject().endObject()\n+ .endObject().endObject().endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(\"poi\", xContentBuilder));\n+ ensureYellow();\n+\n+ SuggestRequest suggestRequest = new SuggestRequest(INDEX);\n+ XContentBuilder builder = jsonBuilder().startObject()\n+ .startObject(\"suggest\")\n+ .field(\"text\", \"m\")\n+ .startObject(\"completion\")\n+ .field(\"field\", \"suggest\")\n+ .startObject(\"context\").startObject(\"location\").startObject(\"value\").field(\"lat\", 0).field(\"lon\", 0).endObject().field(\"precision\", \"1km\").endObject().endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject();\n+ suggestRequest.suggest(builder.bytes());\n+\n+ SuggestResponse suggestResponse = 
client().suggest(suggestRequest).get();\n+ assertNoFailures(suggestResponse);\n+ }\n+\n+ @Test\n+ public void testThatCategoryDefaultWorks() throws Exception {\n+ XContentBuilder xContentBuilder = jsonBuilder().startObject()\n+ .startObject(\"item\").startObject(\"properties\").startObject(\"suggest\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\").startObject(\"color\")\n+ .field(\"type\", \"category\").field(\"default\", \"red\")\n+ .endObject().endObject()\n+ .endObject().endObject().endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(\"item\", xContentBuilder));\n+ ensureYellow();\n+\n+ index(INDEX, \"item\", \"1\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Hoodie red\").endObject().endObject());\n+ index(INDEX, \"item\", \"2\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Hoodie blue\").startObject(\"context\").field(\"color\", \"blue\").endObject().endObject().endObject());\n+ refresh();\n+\n+ CompletionSuggestionBuilder suggestionBuilder = new CompletionSuggestionBuilder(\"suggestion\").field(\"suggest\").text(\"h\").size(10).addContextField(\"color\", \"red\");\n+ SuggestResponse suggestResponse = client().prepareSuggest(INDEX).addSuggestion(suggestionBuilder).get();\n+ assertSuggestion(suggestResponse.getSuggest(), 0, \"suggestion\", \"Hoodie red\");\n+ }\n+\n+ @Test\n+ public void testThatDefaultCategoryAndPathWorks() throws Exception {\n+ XContentBuilder xContentBuilder = jsonBuilder().startObject()\n+ .startObject(\"item\").startObject(\"properties\").startObject(\"suggest\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\").startObject(\"color\")\n+ .field(\"type\", \"category\")\n+ .field(\"default\", \"red\")\n+ .field(\"path\", \"color\")\n+ .endObject().endObject()\n+ .endObject().endObject().endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(\"item\", xContentBuilder));\n+ ensureYellow();\n \n- String suggestionName = RandomStrings.randomAsciiOfLength(new Random(), 10);\n+ index(INDEX, \"item\", \"1\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Hoodie red\").endObject().endObject());\n+ index(INDEX, \"item\", \"2\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Hoodie blue\").endObject().field(\"color\", \"blue\").endObject());\n+ refresh();\n+\n+ CompletionSuggestionBuilder suggestionBuilder = new CompletionSuggestionBuilder(\"suggestion\").field(\"suggest\").text(\"h\").size(10).addContextField(\"color\", \"red\");\n+ SuggestResponse suggestResponse = client().prepareSuggest(INDEX).addSuggestion(suggestionBuilder).get();\n+ assertSuggestion(suggestResponse.getSuggest(), 0, \"suggestion\", \"Hoodie red\");\n+ }\n+\n+ @Test\n+ public void testThatGeoPrecisionIsWorking() throws Exception {\n+ XContentBuilder xContentBuilder = jsonBuilder().startObject()\n+ .startObject(\"item\").startObject(\"properties\").startObject(\"suggest\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\").startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .field(\"precision\", 4) // this means geo hashes with a length of four are used, like u345\n+ .endObject().endObject()\n+ .endObject().endObject().endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(\"item\", xContentBuilder));\n+ ensureYellow();\n+\n+ // lets create some locations by geohashes in different cells with the precision 4\n+ // this means, that poelchaustr is not a neighour to 
alexanderplatz, but they share the same prefix until the fourth char!\n+ GeoPoint alexanderplatz = GeoHashUtils.decode(\"u33dc1\");\n+ GeoPoint poelchaustr = GeoHashUtils.decode(\"u33du5\");\n+ GeoPoint dahlem = GeoHashUtils.decode(\"u336q\"); // berlin dahlem, should be included with that precision\n+ GeoPoint middleOfNoWhere = GeoHashUtils.decode(\"u334\"); // location for west from berlin, should not be included in any suggestions\n+\n+ index(INDEX, \"item\", \"1\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Berlin Alexanderplatz\").field(\"weight\", 3).startObject(\"context\").startObject(\"location\").field(\"lat\", alexanderplatz.lat()).field(\"lon\", alexanderplatz.lon()).endObject().endObject().endObject().endObject());\n+ index(INDEX, \"item\", \"2\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Berlin Poelchaustr.\").field(\"weight\", 2).startObject(\"context\").startObject(\"location\").field(\"lat\", poelchaustr.lat()).field(\"lon\", poelchaustr.lon()).endObject().endObject().endObject().endObject());\n+ index(INDEX, \"item\", \"3\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Berlin Far Away\").field(\"weight\", 1).startObject(\"context\").startObject(\"location\").field(\"lat\", middleOfNoWhere.lat()).field(\"lon\", middleOfNoWhere.lon()).endObject().endObject().endObject().endObject());\n+ index(INDEX, \"item\", \"4\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Berlin Dahlem\").field(\"weight\", 1).startObject(\"context\").startObject(\"location\").field(\"lat\", dahlem.lat()).field(\"lon\", dahlem.lon()).endObject().endObject().endObject().endObject());\n+ refresh();\n+\n+ CompletionSuggestionBuilder suggestionBuilder = new CompletionSuggestionBuilder(\"suggestion\").field(\"suggest\").text(\"b\").size(10).addGeoLocation(\"location\", alexanderplatz.lat(), alexanderplatz.lon());\n+ SuggestResponse suggestResponse = client().prepareSuggest(INDEX).addSuggestion(suggestionBuilder).get();\n+ assertSuggestion(suggestResponse.getSuggest(), 0, \"suggestion\", \"Berlin Alexanderplatz\", \"Berlin Poelchaustr.\", \"Berlin Dahlem\");\n+ }\n+\n+ @Test\n+ public void testThatNeighborsCanBeExcluded() throws Exception {\n+ XContentBuilder xContentBuilder = jsonBuilder().startObject()\n+ .startObject(\"item\").startObject(\"properties\").startObject(\"suggest\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\").startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .field(\"precision\", 6)\n+ .field(\"neighbors\", false)\n+ .endObject().endObject()\n+ .endObject().endObject().endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(\"item\", xContentBuilder));\n+ ensureYellow();\n+\n+ GeoPoint alexanderplatz = GeoHashUtils.decode(\"u33dc1\");\n+ // does not look like it, but is a direct neighbor\n+ // this test would fail, if the precision was set 4, as then both cells would be the same, u33d\n+ GeoPoint cellNeighbourOfAlexanderplatz = GeoHashUtils.decode(\"u33dbc\");\n+\n+ index(INDEX, \"item\", \"1\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Berlin Alexanderplatz\").field(\"weight\", 3).startObject(\"context\").startObject(\"location\").field(\"lat\", alexanderplatz.lat()).field(\"lon\", alexanderplatz.lon()).endObject().endObject().endObject().endObject());\n+ index(INDEX, \"item\", \"2\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Berlin Hackescher Markt\").field(\"weight\", 
2).startObject(\"context\").startObject(\"location\").field(\"lat\", cellNeighbourOfAlexanderplatz.lat()).field(\"lon\", cellNeighbourOfAlexanderplatz.lon()).endObject().endObject().endObject().endObject());\n+ refresh();\n+\n+ CompletionSuggestionBuilder suggestionBuilder = new CompletionSuggestionBuilder(\"suggestion\").field(\"suggest\").text(\"b\").size(10).addGeoLocation(\"location\", alexanderplatz.lat(), alexanderplatz.lon());\n+ SuggestResponse suggestResponse = client().prepareSuggest(INDEX).addSuggestion(suggestionBuilder).get();\n+ assertSuggestion(suggestResponse.getSuggest(), 0, \"suggestion\", \"Berlin Alexanderplatz\");\n+ }\n+\n+ @Test\n+ public void testThatGeoPathCanBeSelected() throws Exception {\n+ XContentBuilder xContentBuilder = jsonBuilder().startObject()\n+ .startObject(\"item\").startObject(\"properties\").startObject(\"suggest\")\n+ .field(\"type\", \"completion\")\n+ .startObject(\"context\").startObject(\"location\")\n+ .field(\"type\", \"geo\")\n+ .field(\"path\", \"loc\")\n+ .endObject().endObject()\n+ .endObject().endObject().endObject()\n+ .endObject();\n+\n+ assertAcked(prepareCreate(INDEX).addMapping(\"item\", xContentBuilder));\n+ ensureYellow();\n+\n+ GeoPoint alexanderplatz = GeoHashUtils.decode(\"u33dc1\");\n+ index(INDEX, \"item\", \"1\", jsonBuilder().startObject().startObject(\"suggest\").field(\"input\", \"Berlin Alexanderplatz\").endObject().startObject(\"loc\").field(\"lat\", alexanderplatz.lat()).field(\"lon\", alexanderplatz.lon()).endObject().endObject());\n+ refresh();\n+\n+ CompletionSuggestionBuilder suggestionBuilder = new CompletionSuggestionBuilder(\"suggestion\").field(\"suggest\").text(\"b\").size(10).addGeoLocation(\"location\", alexanderplatz.lat(), alexanderplatz.lon());\n+ SuggestResponse suggestResponse = client().prepareSuggest(INDEX).addSuggestion(suggestionBuilder).get();\n+ assertSuggestion(suggestResponse.getSuggest(), 0, \"suggestion\", \"Berlin Alexanderplatz\");\n+ }\n+\n+ public void assertGeoSuggestionsInRange(String location, String suggest, double precision) throws IOException {\n+ String suggestionName = randomAsciiOfLength(10);\n CompletionSuggestionBuilder context = new CompletionSuggestionBuilder(suggestionName).field(FIELD).text(suggest).size(10)\n .addGeoLocation(\"st\", location);\n SuggestRequestBuilder suggestionRequest = client().prepareSuggest(INDEX).addSuggestion(context);\n@@ -491,7 +712,7 @@ public void assertGeoSuggestionsInRange(String location, String suggest, double\n }\n \n public void assertPrefixSuggestions(long prefix, String suggest, String... hits) throws IOException {\n- String suggestionName = RandomStrings.randomAsciiOfLength(new Random(), 10);\n+ String suggestionName = randomAsciiOfLength(10);\n CompletionSuggestionBuilder context = new CompletionSuggestionBuilder(suggestionName).field(FIELD).text(suggest)\n .size(hits.length + 1).addCategory(\"st\", Long.toString(prefix));\n SuggestRequestBuilder suggestionRequest = client().prepareSuggest(INDEX).addSuggestion(context);\n@@ -516,7 +737,7 @@ public void assertPrefixSuggestions(long prefix, String suggest, String... hits)\n }\n \n public void assertContextWithFuzzySuggestions(String[] prefix1, String[] prefix2, String suggest, String... 
hits) throws IOException {\n- String suggestionName = RandomStrings.randomAsciiOfLength(new Random(), 10);\n+ String suggestionName = randomAsciiOfLength(10);\n CompletionSuggestionFuzzyBuilder context = new CompletionSuggestionFuzzyBuilder(suggestionName).field(FIELD).text(suggest)\n .size(hits.length + 10).addContextField(\"st\", prefix1).addContextField(\"nd\", prefix2).setFuzziness(Fuzziness.TWO);\n SuggestRequestBuilder suggestionRequest = client().prepareSuggest(INDEX).addSuggestion(context);\n@@ -544,7 +765,7 @@ public void assertContextWithFuzzySuggestions(String[] prefix1, String[] prefix2\n }\n \n public void assertFieldSuggestions(String value, String suggest, String... hits) throws IOException {\n- String suggestionName = RandomStrings.randomAsciiOfLength(new Random(), 10);\n+ String suggestionName = randomAsciiOfLength(10);\n CompletionSuggestionBuilder context = new CompletionSuggestionBuilder(suggestionName).field(FIELD).text(suggest).size(10)\n .addContextField(\"st\", value);\n SuggestRequestBuilder suggestionRequest = client().prepareSuggest(INDEX).addSuggestion(context);\n@@ -569,7 +790,7 @@ public void assertFieldSuggestions(String value, String suggest, String... hits)\n }\n \n public void assertDoubleFieldSuggestions(String field1, String field2, String suggest, String... hits) throws IOException {\n- String suggestionName = RandomStrings.randomAsciiOfLength(new Random(), 10);\n+ String suggestionName = randomAsciiOfLength(10);\n CompletionSuggestionBuilder context = new CompletionSuggestionBuilder(suggestionName).field(FIELD).text(suggest).size(10)\n .addContextField(\"st\", field1).addContextField(\"nd\", field2);\n SuggestRequestBuilder suggestionRequest = client().prepareSuggest(INDEX).addSuggestion(context);\n@@ -593,7 +814,7 @@ public void assertDoubleFieldSuggestions(String field1, String field2, String su\n }\n \n public void assertMultiContextSuggestions(String value1, String value2, String suggest, String... hits) throws IOException {\n- String suggestionName = RandomStrings.randomAsciiOfLength(new Random(), 10);\n+ String suggestionName = randomAsciiOfLength(10);\n CompletionSuggestionBuilder context = new CompletionSuggestionBuilder(suggestionName).field(FIELD).text(suggest).size(10)\n .addContextField(\"st\", value1).addContextField(\"nd\", value2);\n ",
"filename": "src/test/java/org/elasticsearch/search/suggest/ContextSuggestSearchTests.java",
"status": "modified"
}
]
} |
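One of the fixes bundled in the context-suggester PR above replaces an `assert` in `PrefixAnalyzer.PrefixTokenFilter` with an explicit `ElasticsearchIllegalArgumentException`, because assertions are disabled in the shipped distribution and the missing check only surfaced later as an NPE. The following standalone sketch (plain Java, not the Elasticsearch classes; names and messages are illustrative) shows why an `assert` is not a substitute for argument validation:

```java
// Standalone sketch (plain Java, not the Elasticsearch classes) of why the PR replaces the
// `assert` in PrefixTokenFilter with an explicit exception: assertions are disabled in the
// shipped distribution, so the invalid argument only surfaced later as an NPE.
import java.util.List;

public class AssertVsArgumentCheck {

    // Old style: relies on an assertion, which only runs when the JVM is started with -ea.
    static String firstPrefixWithAssert(List<String> prefixes) {
        assert prefixes != null && prefixes.iterator().hasNext() : "one or more prefix needed";
        return prefixes.iterator().next(); // NPE here when prefixes == null and -ea is off
    }

    // New style: always validates, mirroring the ElasticsearchIllegalArgumentException change.
    static String firstPrefixWithCheck(List<String> prefixes) {
        if (prefixes == null || !prefixes.iterator().hasNext()) {
            throw new IllegalArgumentException("one or more prefixes needed");
        }
        return prefixes.iterator().next();
    }

    public static void main(String[] args) {
        try {
            firstPrefixWithCheck(null);
        } catch (IllegalArgumentException e) {
            System.out.println("explicit check: " + e.getMessage());
        }
        try {
            firstPrefixWithAssert(null);
        } catch (AssertionError e) {
            System.out.println("assert fired (JVM started with -ea)");
        } catch (NullPointerException e) {
            System.out.println("assert skipped, got an NPE instead (default -da)");
        }
    }
}
```

Run with `java -ea AssertVsArgumentCheck` to see the assertion fire; without `-ea` (the default) only the explicit check still reports a useful error.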
{
"body": "The DateHistogramBuilder stores and builds the DateHistogram [with a `long` value for the `pre_offset` and `post_offset`](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramBuilder.java#L44), which is neither what the [API docs specify](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-bucket-datehistogram-aggregation.html#_pre_post_offset_2) (which specify the format is the data format `1s`, `2d`, etc.) nor what the [DateHistogramParser expect](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramParser.java#L122).\n\nThis forces the improper construction of DateHistogram requests when using the Java API to construct queries. Both the `preOffset` and `postOffset` variables should be converted to `Strings`.\n",
"comments": [
{
"body": "I experience the same error so +1 on fixing this.\n",
"created_at": "2014-05-06T14:56:31Z"
}
],
"number": 5586,
"title": "Aggregations: DateHistogramBuilder uses wrong data type for pre_offset and post_offset"
} | {
"body": "Currently, if `preOffset` or `postOffset` are used in the `DateHistogramBuilder`, the generated query fails parsing in the `DateHistogramParser`.\n\nCloses #5586\n",
"number": 5587,
"review_comments": [],
"title": "Fix DateHistogramBuilder to use a String pre_offset and post_offset"
} | {
"commits": [
{
"message": "Fix DateHistogramBuilder to use a String pre_offset and post_offset\n\nCloses #5586"
}
],
"files": [
{
"diff": "@@ -41,8 +41,8 @@ public class DateHistogramBuilder extends ValuesSourceAggregationBuilder<DateHis\n private String postZone;\n private boolean preZoneAdjustLargeInterval;\n private String format;\n- long preOffset = 0;\n- long postOffset = 0;\n+ private String preOffset;\n+ private String postOffset;\n float factor = 1.0f;\n \n public DateHistogramBuilder(String name) {\n@@ -84,12 +84,12 @@ public DateHistogramBuilder preZoneAdjustLargeInterval(boolean preZoneAdjustLarg\n return this;\n }\n \n- public DateHistogramBuilder preOffset(long preOffset) {\n+ public DateHistogramBuilder preOffset(String preOffset) {\n this.preOffset = preOffset;\n return this;\n }\n \n- public DateHistogramBuilder postOffset(long postOffset) {\n+ public DateHistogramBuilder postOffset(String postOffset) {\n this.postOffset = postOffset;\n return this;\n }\n@@ -153,11 +153,11 @@ protected XContentBuilder doInternalXContent(XContentBuilder builder, Params par\n builder.field(\"pre_zone_adjust_large_interval\", true);\n }\n \n- if (preOffset != 0) {\n+ if (preOffset != null) {\n builder.field(\"pre_offset\", preOffset);\n }\n \n- if (postOffset != 0) {\n+ if (postOffset != null) {\n builder.field(\"post_offset\", postOffset);\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramBuilder.java",
"status": "modified"
},
{
"diff": "@@ -1045,6 +1045,77 @@ public void singleValue_WithPreZone() throws Exception {\n assertThat(bucket.getDocCount(), equalTo(3l));\n }\n \n+ @Test\n+ public void singleValue_WithPreOffset() throws Exception {\n+ prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ DateTime date = date(\"2014-03-11T00:00:00+00:00\");\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n+ date = date.plusHours(1);\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .preOffset(\"-2h\")\n+ .interval(DateHistogram.Interval.DAY)\n+ .format(\"yyyy-MM-dd\"))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(2));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-10\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+\n+ bucket = histo.getBucketByKey(\"2014-03-11\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(3l));\n+ }\n+\n+\n+ @Test\n+ public void singleValue_WithPostOffset() throws Exception {\n+ prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();\n+ IndexRequestBuilder[] reqs = new IndexRequestBuilder[5];\n+ DateTime date = date(\"2014-03-10T22:00:00+00:00\");\n+ for (int i = 0; i < reqs.length; i++) {\n+ reqs[i] = client().prepareIndex(\"idx2\", \"type\", \"\" + i).setSource(jsonBuilder().startObject().field(\"date\", date).endObject());\n+ date = date.plusHours(1);\n+ }\n+ indexRandom(true, reqs);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\")\n+ .setQuery(matchAllQuery())\n+ .addAggregation(dateHistogram(\"date_histo\")\n+ .field(\"date\")\n+ .postOffset(\"-1d\")\n+ .interval(DateHistogram.Interval.DAY)\n+ .format(\"yyyy-MM-dd\"))\n+ .execute().actionGet();\n+\n+ assertThat(response.getHits().getTotalHits(), equalTo(5l));\n+\n+ DateHistogram histo = response.getAggregations().get(\"date_histo\");\n+ Collection<? extends DateHistogram.Bucket> buckets = histo.getBuckets();\n+ assertThat(buckets.size(), equalTo(2));\n+\n+ DateHistogram.Bucket bucket = histo.getBucketByKey(\"2014-03-09\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+\n+ bucket = histo.getBucketByKey(\"2014-03-10\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo(3l));\n+ }\n+\n @Test\n public void singleValue_WithPreZone_WithAadjustLargeInterval() throws Exception {\n prepareCreate(\"idx2\").addMapping(\"type\", \"date\", \"type=date\").execute().actionGet();",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java",
"status": "modified"
}
]
} |
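The `pre_offset`/`post_offset` values are duration strings that shift timestamps before or after they are rounded into buckets, which is why the builder has to emit them as strings rather than `long`s. The sketch below (plain Java time API, no Elasticsearch dependency; the mini-parser is illustrative and far simpler than the real one) conceptually reproduces the `singleValue_WithPreOffset` scenario from the test diff above:

```java
// Plain-Java sketch (no Elasticsearch dependency) of what a pre_offset such as "-2h" does
// conceptually: each timestamp is shifted by the offset before it is rounded down into its
// day bucket. The mini-parser below is illustrative and far simpler than the real one.
import java.time.Duration;
import java.time.Instant;
import java.time.OffsetDateTime;
import java.time.temporal.ChronoUnit;
import java.util.Map;
import java.util.TreeMap;

public class PreOffsetSketch {

    // Parses offsets like "-2h" or "1d"; only hours and days are handled here.
    static Duration parseOffset(String offset) {
        long value = Long.parseLong(offset.substring(0, offset.length() - 1));
        switch (offset.charAt(offset.length() - 1)) {
            case 'h': return Duration.ofHours(value);
            case 'd': return Duration.ofDays(value);
            default: throw new IllegalArgumentException("unsupported unit in: " + offset);
        }
    }

    public static void main(String[] args) {
        Duration preOffset = parseOffset("-2h");
        Map<Instant, Integer> buckets = new TreeMap<>();
        OffsetDateTime start = OffsetDateTime.parse("2014-03-11T00:00:00+00:00");
        for (int i = 0; i < 5; i++) {
            // Shift the document timestamp by the pre_offset, then truncate to its day bucket.
            Instant shifted = start.plusHours(i).toInstant().plus(preOffset);
            buckets.merge(shifted.truncatedTo(ChronoUnit.DAYS), 1, Integer::sum);
        }
        // Two of the five hourly documents land in the 2014-03-10 bucket and three in
        // 2014-03-11, matching the assertions of singleValue_WithPreOffset above.
        System.out.println(buckets);
    }
}
```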
{
"body": "Trying to set an existing binary field from `store:false` to `store:true` should throw an error. Currently it is just silently ignored:\n\n```\nPUT /t\n{\n \"mappings\": {\n \"foo\": {\n \"properties\": {\n \"bar\": {\n \"type\": \"binary\"\n }\n }\n }\n }\n}\n\nPUT /t/_mapping/foo\n{\n \"properties\": {\n \"bar\": {\n \"type\": \"binary\",\n \"store\": true\n }\n }\n}\n\nGET /t/_mapping\n```\n",
"comments": [
{
"body": "Similar thing with `boolean` and `geo_point` fields.\n\n#5502 and #5505 \n",
"created_at": "2014-03-24T03:57:34Z"
}
],
"number": 5474,
"title": "Throw error when updating binary field mapping to be stored"
} | {
"body": "closes #5474\n\nthe \"index_name\" is also ignored without throw exception, added check for that as well\n",
"number": 5585,
"review_comments": [
{
"body": "I'm wondering if we could call `super.merge` here instead. Although checks about index options are not relevant to this field which is not indexed, this would avoid duplicating the logic of merging two mappers that have a different class?\n",
"created_at": "2014-03-28T16:31:05Z"
},
{
"body": "I was thinking of calling super, but I thought there might be a reason why it doesn't call super before. Maybe because most of the options are not relevant to binary field? \nShould I change that to call super? Also for the toXContent, should it call super as well?\n",
"created_at": "2014-03-28T18:13:23Z"
},
{
"body": "I'm not sure myself why this hasn't been done this way. :-) It's fine, I was just curious if you had tested calling super and if it introduced issues.\n",
"created_at": "2014-03-28T19:32:17Z"
},
{
"body": "If you implement `equals`, can you implement `hashCode` as well?\n",
"created_at": "2014-03-28T20:59:26Z"
},
{
"body": "@jpountz added `hashCode`\n",
"created_at": "2014-03-29T01:35:01Z"
}
],
"title": "Check \"store\" parameter for binary mapper and check \"index_name\" for all mappers"
} | {
"commits": [
{
"message": "check \"store\" for binary mapper and check \"index_name\" for all mappers\n\ncloses #5474"
}
],
"files": [
{
"diff": "@@ -123,6 +123,31 @@ public Term createIndexNameTerm(String value) {\n public Term createIndexNameTerm(BytesRef value) {\n return new Term(indexName, value);\n }\n+\n+ @Override\n+ public boolean equals(Object o) {\n+ if (o == null || getClass() != o.getClass()) return false;\n+\n+ Names names = (Names) o;\n+\n+ if (!fullName.equals(names.fullName)) return false;\n+ if (!indexName.equals(names.indexName)) return false;\n+ if (!indexNameClean.equals(names.indexNameClean)) return false;\n+ if (!name.equals(names.name)) return false;\n+ if (!sourcePath.equals(names.sourcePath)) return false;\n+\n+ return true;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ int result = name.hashCode();\n+ result = 31 * result + indexName.hashCode();\n+ result = 31 * result + indexNameClean.hashCode();\n+ result = 31 * result + fullName.hashCode();\n+ result = 31 * result + sourcePath.hashCode();\n+ return result;\n+ }\n }\n \n public static enum Loading {",
"filename": "src/main/java/org/elasticsearch/index/mapper/FieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -606,6 +606,9 @@ public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappi\n } else if (!this.indexAnalyzer.name().equals(fieldMergeWith.indexAnalyzer.name())) {\n mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different index_analyzer\");\n }\n+ if (!this.names().equals(fieldMergeWith.names())) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different index_name\");\n+ }\n \n if (this.similarity == null) {\n if (fieldMergeWith.similarity() != null) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -225,7 +225,25 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n \n @Override\n public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappingException {\n+ if (!(mergeWith instanceof BinaryFieldMapper)) {\n+ String mergedType = mergeWith.getClass().getSimpleName();\n+ if (mergeWith instanceof AbstractFieldMapper) {\n+ mergedType = ((AbstractFieldMapper) mergeWith).contentType();\n+ }\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] of different type, current_type [\" + contentType() + \"], merged_type [\" + mergedType + \"]\");\n+ // different types, return\n+ return;\n+ }\n+\n BinaryFieldMapper sourceMergeWith = (BinaryFieldMapper) mergeWith;\n+\n+ if (this.fieldType().stored() != sourceMergeWith.fieldType().stored()) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different store values\");\n+ }\n+ if (!this.names().equals(sourceMergeWith.names())) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different index_name\");\n+ }\n+\n if (!mergeContext.mergeFlags().simulate()) {\n if (sourceMergeWith.compress != null) {\n this.compress = sourceMergeWith.compress;",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/BinaryFieldMapper.java",
"status": "modified"
}
]
} |
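The binary-mapper fix follows the usual mapper-merge pattern: incompatible changes are reported via `MergeContext.addConflict(...)` instead of being silently dropped. Below is a dependency-free sketch of that pattern; the class and method names are made up for illustration and are not the real mapper API.

```java
// Simplified, dependency-free sketch of the merge-conflict pattern used in the PR:
// instead of silently ignoring an incompatible mapping update, the merge collects a
// human-readable conflict. Class and field names here are illustrative only.
import java.util.ArrayList;
import java.util.List;

public class BinaryFieldMergeSketch {

    static final class BinaryField {
        final String name;
        final boolean stored;
        BinaryField(String name, boolean stored) {
            this.name = name;
            this.stored = stored;
        }
    }

    // Collects conflicts rather than throwing on the first one, like MergeContext.addConflict.
    static List<String> merge(BinaryField current, BinaryField update) {
        List<String> conflicts = new ArrayList<>();
        if (!current.name.equals(update.name)) {
            conflicts.add("mapper [" + current.name + "] has different index_name");
        }
        if (current.stored != update.stored) {
            conflicts.add("mapper [" + current.name + "] has different store values");
        }
        return conflicts;
    }

    public static void main(String[] args) {
        BinaryField existing = new BinaryField("bar", false);
        BinaryField update = new BinaryField("bar", true); // store:false -> store:true
        // Prints the conflict instead of silently accepting (or ignoring) the change.
        System.out.println(merge(existing, update));
    }
}
```

The same idea applies to the `index_name` check: any difference between the existing mapper and the merged one is collected as a conflict and reported back to the caller rather than being swallowed.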
{
"body": "ElasticSearch 1.0.\n\nWhen using dynamic_template described here:\nhttp://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-root-object-type.html#_dynamic_templates\n\nthe MapperParsingException occurs. The gist with an example identical as in documentation is here:\n\nhttps://gist.github.com/nnegativ/9213854\n",
"comments": [],
"number": 5256,
"title": "Possible issue with dynamic mapping in Elasticsearch"
} | {
"body": "closes #5256\n",
"number": 5564,
"review_comments": [],
"title": "Fix dynamic_type in dynamic_template"
} | {
"commits": [
{
"message": "fix dynamic_type in dynamic_template\n\ncloses #5256"
}
],
"files": [
{
"diff": "@@ -154,7 +154,7 @@ public boolean hasType() {\n }\n \n public String mappingType(String dynamicType) {\n- return mapping.containsKey(\"type\") ? mapping.get(\"type\").toString() : dynamicType;\n+ return mapping.containsKey(\"type\") ? mapping.get(\"type\").toString().replace(\"{dynamic_type}\", dynamicType).replace(\"{dynamicType}\", dynamicType) : dynamicType;\n }\n \n private boolean patternMatch(String pattern, String str) {",
"filename": "src/main/java/org/elasticsearch/index/mapper/object/DynamicTemplate.java",
"status": "modified"
},
{
"diff": "@@ -5,13 +5,10 @@\n \"tempalte_1\":{\n \"match\":\"multi*\",\n \"mapping\":{\n- \"type\":\"multi_field\",\n+ \"type\":\"{dynamic_type}\",\n+ \"index\":\"analyzed\",\n+ \"store\":\"yes\",\n \"fields\":{\n- \"{name}\":{\n- \"type\":\"{dynamic_type}\",\n- \"index\":\"analyzed\",\n- \"store\":\"yes\"\n- },\n \"org\":{\n \"type\":\"{dynamic_type}\",\n \"index\":\"not_analyzed\",",
"filename": "src/test/java/org/elasticsearch/index/mapper/dynamictemplate/simple/test-mapping.json",
"status": "modified"
}
]
} |
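The actual fix above is the one-liner in `DynamicTemplate.mappingType`: the template's `type` value may itself contain the `{dynamic_type}` (or `{dynamicType}`) placeholder, so it has to be substituted before the mapping is built. Here is a minimal standalone sketch of that substitution, closely mirroring the diff; the surrounding class and `main` method are illustrative, and `Map.of` requires Java 9+.

```java
// Tiny sketch of the one-line fix: the template's "type" value may itself contain the
// {dynamic_type} placeholder, so it has to be substituted instead of being returned verbatim.
import java.util.Map;

public class DynamicTypeSketch {

    static String mappingType(Map<String, Object> mapping, String dynamicType) {
        return mapping.containsKey("type")
                ? mapping.get("type").toString()
                        .replace("{dynamic_type}", dynamicType)
                        .replace("{dynamicType}", dynamicType)
                : dynamicType;
    }

    public static void main(String[] args) {
        Map<String, Object> withPlaceholder = Map.of("type", "{dynamic_type}");
        Map<String, Object> withoutType = Map.of("index", "analyzed");
        // Before the fix the literal string "{dynamic_type}" leaked through and failed parsing.
        System.out.println(mappingType(withPlaceholder, "string")); // string
        System.out.println(mappingType(withoutType, "long"));       // long
    }
}
```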
{
"body": "A user reported this on the ML:\nhttps://groups.google.com/forum/?fromgroups=#!topic/elasticsearch/AxRU1UQP24U\n\n```\nCaused by: java.lang.IndexOutOfBoundsException: index (-2) must not be negative\n at org.elasticsearch.common.base.Preconditions.checkElementIndex(Preconditions.java:306)\n at org.elasticsearch.common.base.Preconditions.checkElementIndex(Preconditions.java:285)\n at org.elasticsearch.common.collect.RegularImmutableList.get(RegularImmutableList.java:65)\n at org.elasticsearch.cluster.routing.IndexShardRoutingTable.preferNodeActiveInitializingShardsIt(IndexShardRoutingTable.java:378)\n at org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.preferenceActiveShardIterator(PlainOperationRouting.java:210)\n at org.elasticsearch.cluster.routing.operation.plain.PlainOperationRouting.getShards(PlainOperationRouting.java:80)\n at org.elasticsearch.action.get.TransportGetAction.shards(TransportGetAction.java:80)\n at org.elasticsearch.action.get.TransportGetAction.shards(TransportGetAction.java:42)\n at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.<init>(TransportShardSingleOperationAction.java:121)\n at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.<init>(TransportShardSingleOperationAction.java:97)\n at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.doExecute(TransportShardSingleOperationAction.java:74)\n at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.doExecute(TransportShardSingleOperationAction.java:49)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)\n at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:49)\n at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:85)\n at org.elasticsearch.client.support.AbstractClient.get(AbstractClient.java:174)\n ... 9 more\n```\n\nthe code that causes this is:\n\n``` Java\n\n private int pickIndex() {\n return Math.abs(counter.incrementAndGet());\n }\n```\n\nwhich might return a negative number. `Math.abs()` returns `-1` for `Integer.MIN_VALUE` which causes the AIOOB mentioned above. The usage of this method seems to be pretty broken along those lines and we might need to think about fixing this generally...\n",
"comments": [],
"number": 5559,
"title": "IndexShardRoutingTable might barf if it has handled lots of searches..."
} | {
"body": "I initially wanted to make the diff minimal but this ended up being quite complicated\nso I finally refactored a bit the way shards are randomized. Yet, it uses the same logic:\n- rotations to shuffle shards,\n- an AtomicInteger to generate the distances to use for the rotations.\n\nClose #5559\n",
"number": 5561,
"review_comments": [
{
"body": "hmm I gues we should also test that the contents are the same here? \n",
"created_at": "2014-03-27T09:24:32Z"
},
{
"body": "hmm can't we just make this a static inner class instead of haveing a second `rotate1` method?\n",
"created_at": "2014-03-27T09:26:16Z"
},
{
"body": "grr nevermind I didn't see the last line pfff...\n",
"created_at": "2014-03-27T09:26:48Z"
},
{
"body": "one think that I think we should test is that the roation here is actually stable. It if you iterate the same list twice it has the same order?\n",
"created_at": "2014-03-27T09:27:36Z"
},
{
"body": "not sure if we need an interface for this but I guess it doesn't hurt either. I wonder if we can make it an abstract class and overload shuffle to only take a list?\n",
"created_at": "2014-03-27T09:28:51Z"
},
{
"body": "I like this class man that is so much cleaner\n",
"created_at": "2014-03-27T09:29:34Z"
},
{
"body": "\"BOOM!\"\n",
"created_at": "2014-03-27T09:29:48Z"
},
{
"body": "do you mean checking that rotating twice the same list gives the same output?\n\n``` java\nassertEquals(CollectionUtils.rotate(list, size), CollectionUtils.rotate(list, size));\n```\n",
"created_at": "2014-03-27T09:30:42Z"
},
{
"body": "shoudl this class allow to work without ThreadLocalRandom? it just take the seed?\n",
"created_at": "2014-03-27T09:31:13Z"
},
{
"body": "I'm not sure to understand what you are suggesting, but I didn't want to change the behavior at all, so since it was previously using an AtomicInteger initialized based on a random integer, I kept it.\n",
"created_at": "2014-03-27T09:32:52Z"
},
{
"body": "yeah\n",
"created_at": "2014-03-27T09:33:28Z"
},
{
"body": "or N times\n",
"created_at": "2014-03-27T09:33:34Z"
},
{
"body": "good point\n",
"created_at": "2014-03-27T09:34:05Z"
},
{
"body": "yeah I was just asking if we should pass the seed in the ctor and call ThreadLocalRandom where thsi is used? \n",
"created_at": "2014-03-27T09:34:29Z"
},
{
"body": "Ah ok. Will do.\n",
"created_at": "2014-03-27T09:35:40Z"
},
{
"body": "I'm not sure to understand, what default implementation would you provide?\n",
"created_at": "2014-03-27T09:36:16Z"
}
],
"title": "Fix IndexShardRoutingTable's shard randomization to not throw out-of-bounds exceptions."
} | {
"commits": [
{
"message": "Fix IndexShardRoutingTable's shard randomization to not throw out-of-bounds\nexceptions.\n\nClose #5559"
},
{
"message": "1st review round"
}
],
"files": [
{
"diff": "@@ -24,6 +24,7 @@\n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.Sets;\n import com.google.common.collect.UnmodifiableIterator;\n+import jsr166y.ThreadLocalRandom;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n@@ -36,7 +37,6 @@\n import java.util.ArrayList;\n import java.util.List;\n import java.util.Set;\n-import java.util.concurrent.atomic.AtomicInteger;\n \n import static com.google.common.collect.Lists.newArrayList;\n \n@@ -58,6 +58,7 @@\n public class IndexRoutingTable implements Iterable<IndexShardRoutingTable> {\n \n private final String index;\n+ private final ShardShuffler shuffler;\n \n // note, we assume that when the index routing is created, ShardRoutings are created for all possible number of\n // shards with state set to UNASSIGNED\n@@ -66,10 +67,9 @@ public class IndexRoutingTable implements Iterable<IndexShardRoutingTable> {\n private final ImmutableList<ShardRouting> allShards;\n private final ImmutableList<ShardRouting> allActiveShards;\n \n- private final AtomicInteger counter = new AtomicInteger();\n-\n IndexRoutingTable(String index, ImmutableOpenIntMap<IndexShardRoutingTable> shards) {\n this.index = index;\n+ this.shuffler = new RotationShardShuffler(ThreadLocalRandom.current().nextInt());\n this.shards = shards;\n ImmutableList.Builder<ShardRouting> allShards = ImmutableList.builder();\n ImmutableList.Builder<ShardRouting> allActiveShards = ImmutableList.builder();\n@@ -273,14 +273,14 @@ public List<ShardRouting> shardsWithState(ShardRoutingState... states) {\n * Returns an unordered iterator over all shards (including replicas).\n */\n public ShardsIterator randomAllShardsIt() {\n- return new PlainShardsIterator(allShards, counter.incrementAndGet());\n+ return new PlainShardsIterator(shuffler.shuffle(allShards));\n }\n \n /**\n * Returns an unordered iterator over all active shards (including replicas).\n */\n public ShardsIterator randomAllActiveShardsIt() {\n- return new PlainShardsIterator(allActiveShards, counter.incrementAndGet());\n+ return new PlainShardsIterator(shuffler.shuffle(allActiveShards));\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java",
"status": "modified"
},
{
"diff": "@@ -32,7 +32,6 @@\n \n import java.io.IOException;\n import java.util.*;\n-import java.util.concurrent.atomic.AtomicInteger;\n \n import static com.google.common.collect.Lists.newArrayList;\n \n@@ -45,6 +44,7 @@\n */\n public class IndexShardRoutingTable implements Iterable<ShardRouting> {\n \n+ final ShardShuffler shuffler;\n final ShardId shardId;\n \n final ShardRouting primary;\n@@ -60,15 +60,13 @@ public class IndexShardRoutingTable implements Iterable<ShardRouting> {\n */\n final ImmutableList<ShardRouting> allInitializingShards;\n \n- final AtomicInteger counter;\n-\n final boolean primaryAllocatedPostApi;\n \n IndexShardRoutingTable(ShardId shardId, ImmutableList<ShardRouting> shards, boolean primaryAllocatedPostApi) {\n this.shardId = shardId;\n+ this.shuffler = new RotationShardShuffler(ThreadLocalRandom.current().nextInt());\n this.shards = shards;\n this.primaryAllocatedPostApi = primaryAllocatedPostApi;\n- this.counter = new AtomicInteger(ThreadLocalRandom.current().nextInt(shards.size()));\n \n ShardRouting primary = null;\n ImmutableList.Builder<ShardRouting> replicas = ImmutableList.builder();\n@@ -259,61 +257,61 @@ public int countWithState(ShardRoutingState state) {\n }\n \n public ShardIterator shardsRandomIt() {\n- return new PlainShardIterator(shardId, shards, pickIndex());\n+ return new PlainShardIterator(shardId, shuffler.shuffle(shards));\n }\n \n public ShardIterator shardsIt() {\n return new PlainShardIterator(shardId, shards);\n }\n \n- public ShardIterator shardsIt(int index) {\n- return new PlainShardIterator(shardId, shards, index);\n+ public ShardIterator shardsIt(int seed) {\n+ return new PlainShardIterator(shardId, shuffler.shuffle(shards, seed));\n }\n \n public ShardIterator activeShardsRandomIt() {\n- return new PlainShardIterator(shardId, activeShards, pickIndex());\n+ return new PlainShardIterator(shardId, shuffler.shuffle(activeShards));\n }\n \n public ShardIterator activeShardsIt() {\n return new PlainShardIterator(shardId, activeShards);\n }\n \n- public ShardIterator activeShardsIt(int index) {\n- return new PlainShardIterator(shardId, activeShards, index);\n+ public ShardIterator activeShardsIt(int seed) {\n+ return new PlainShardIterator(shardId, shuffler.shuffle(activeShards, seed));\n }\n \n /**\n * Returns an iterator over active and initializing shards. Making sure though that\n * its random within the active shards, and initializing shards are the last to iterate through.\n */\n public ShardIterator activeInitializingShardsRandomIt() {\n- return activeInitializingShardsIt(pickIndex());\n+ return activeInitializingShardsIt(shuffler.nextSeed());\n }\n \n /**\n * Returns an iterator over active and initializing shards. 
Making sure though that\n * its random within the active shards, and initializing shards are the last to iterate through.\n */\n- public ShardIterator activeInitializingShardsIt(int index) {\n+ public ShardIterator activeInitializingShardsIt(int seed) {\n if (allInitializingShards.isEmpty()) {\n- return new PlainShardIterator(shardId, activeShards, index);\n+ return new PlainShardIterator(shardId, shuffler.shuffle(activeShards, seed));\n }\n ArrayList<ShardRouting> ordered = new ArrayList<ShardRouting>(activeShards.size() + allInitializingShards.size());\n- addToListFromIndex(activeShards, ordered, index);\n+ ordered.addAll(shuffler.shuffle(activeShards, seed));\n ordered.addAll(allInitializingShards);\n return new PlainShardIterator(shardId, ordered);\n }\n \n public ShardIterator assignedShardsRandomIt() {\n- return new PlainShardIterator(shardId, assignedShards, pickIndex());\n+ return new PlainShardIterator(shardId, shuffler.shuffle(assignedShards));\n }\n \n public ShardIterator assignedShardsIt() {\n return new PlainShardIterator(shardId, assignedShards);\n }\n \n- public ShardIterator assignedShardsIt(int index) {\n- return new PlainShardIterator(shardId, assignedShards, index);\n+ public ShardIterator assignedShardsIt(int seed) {\n+ return new PlainShardIterator(shardId, shuffler.shuffle(assignedShards, seed));\n }\n \n /**\n@@ -334,14 +332,11 @@ public ShardIterator primaryActiveInitializingShardIt() {\n public ShardIterator primaryFirstActiveInitializingShardsIt() {\n ArrayList<ShardRouting> ordered = new ArrayList<ShardRouting>(activeShards.size() + allInitializingShards.size());\n // fill it in a randomized fashion\n- int index = Math.abs(pickIndex());\n- for (int i = 0; i < activeShards.size(); i++) {\n- int loc = (index + i) % activeShards.size();\n- ShardRouting shardRouting = activeShards.get(loc);\n+ for (ShardRouting shardRouting : shuffler.shuffle(activeShards)) {\n ordered.add(shardRouting);\n if (shardRouting.primary()) {\n // switch, its the matching node id\n- ordered.set(i, ordered.get(0));\n+ ordered.set(ordered.size() - 1, ordered.get(0));\n ordered.set(0, shardRouting);\n }\n }\n@@ -373,14 +368,11 @@ public ShardIterator onlyNodeActiveInitializingShardsIt(String nodeId) {\n public ShardIterator preferNodeActiveInitializingShardsIt(String nodeId) {\n ArrayList<ShardRouting> ordered = new ArrayList<ShardRouting>(activeShards.size() + allInitializingShards.size());\n // fill it in a randomized fashion\n- int index = pickIndex();\n- for (int i = 0; i < activeShards.size(); i++) {\n- int loc = (index + i) % activeShards.size();\n- ShardRouting shardRouting = activeShards.get(loc);\n+ for (ShardRouting shardRouting : shuffler.shuffle(activeShards)) {\n ordered.add(shardRouting);\n if (nodeId.equals(shardRouting.currentNodeId())) {\n // switch, its the matching node id\n- ordered.set(i, ordered.get(0));\n+ ordered.set(ordered.size() - 1, ordered.get(0));\n ordered.set(0, shardRouting);\n }\n }\n@@ -474,22 +466,21 @@ private static ImmutableList<ShardRouting> collectAttributeShards(AttributesKey\n }\n \n public ShardIterator preferAttributesActiveInitializingShardsIt(String[] attributes, DiscoveryNodes nodes) {\n- return preferAttributesActiveInitializingShardsIt(attributes, nodes, pickIndex());\n+ return preferAttributesActiveInitializingShardsIt(attributes, nodes, shuffler.nextSeed());\n }\n \n- public ShardIterator preferAttributesActiveInitializingShardsIt(String[] attributes, DiscoveryNodes nodes, int index) {\n+ public ShardIterator 
preferAttributesActiveInitializingShardsIt(String[] attributes, DiscoveryNodes nodes, int seed) {\n AttributesKey key = new AttributesKey(attributes);\n AttributesRoutings activeRoutings = getActiveAttribute(key, nodes);\n AttributesRoutings initializingRoutings = getInitializingAttribute(key, nodes);\n \n // we now randomize, once between the ones that have the same attributes, and once for the ones that don't\n // we don't want to mix between the two!\n ArrayList<ShardRouting> ordered = new ArrayList<ShardRouting>(activeRoutings.totalSize + initializingRoutings.totalSize);\n- index = Math.abs(index);\n- addToListFromIndex(activeRoutings.withSameAttribute, ordered, index);\n- addToListFromIndex(activeRoutings.withoutSameAttribute, ordered, index);\n- addToListFromIndex(initializingRoutings.withSameAttribute, ordered, index);\n- addToListFromIndex(initializingRoutings.withoutSameAttribute, ordered, index);\n+ ordered.addAll(shuffler.shuffle(activeRoutings.withSameAttribute, seed));\n+ ordered.addAll(shuffler.shuffle(activeRoutings.withoutSameAttribute, seed));\n+ ordered.addAll(shuffler.shuffle(initializingRoutings.withSameAttribute, seed));\n+ ordered.addAll(shuffler.shuffle(initializingRoutings.withoutSameAttribute, seed));\n return new PlainShardIterator(shardId, ordered);\n }\n \n@@ -525,23 +516,6 @@ public List<ShardRouting> shardsWithState(ShardRoutingState... states) {\n return shards;\n }\n \n- /**\n- * Adds from list to list, starting from the given index (wrapping around if needed).\n- */\n- @SuppressWarnings(\"unchecked\")\n- private void addToListFromIndex(List from, List to, int index) {\n- index = Math.abs(index);\n- for (int i = 0; i < from.size(); i++) {\n- int loc = (index + i) % from.size();\n- to.add(from.get(loc));\n- }\n- }\n-\n- // TODO: we can move to random based on ThreadLocalRandom, or make it pluggable\n- private int pickIndex() {\n- return Math.abs(counter.incrementAndGet());\n- }\n-\n public static class Builder {\n \n private ShardId shardId;",
"filename": "src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java",
"status": "modified"
},
{
"diff": "@@ -43,18 +43,6 @@ public PlainShardIterator(ShardId shardId, List<ShardRouting> shards) {\n this.shardId = shardId;\n }\n \n- /**\n- * Creates a {@link PlainShardIterator} instance that iterates over a subset of the given shards\n- * this the a given <code>shardId</code>.\n- *\n- * @param shardId shard id of the group\n- * @param shards shards to iterate\n- * @param index the offset in the shards list to start the iteration from\n- */\n- public PlainShardIterator(ShardId shardId, List<ShardRouting> shards, int index) {\n- super(shards, index);\n- this.shardId = shardId;\n- }\n \n @Override\n public ShardId shardId() {",
"filename": "src/main/java/org/elasticsearch/cluster/routing/PlainShardIterator.java",
"status": "modified"
},
{
"diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.cluster.routing;\n \n import java.util.List;\n+import java.util.ListIterator;\n \n /**\n * A simple {@link ShardsIterator} that iterates a list or sub-list of\n@@ -28,76 +29,50 @@ public class PlainShardsIterator implements ShardsIterator {\n \n private final List<ShardRouting> shards;\n \n- private final int size;\n-\n- private final int index;\n-\n- private final int limit;\n-\n- private volatile int counter;\n+ private ListIterator<ShardRouting> iterator;\n \n public PlainShardsIterator(List<ShardRouting> shards) {\n- this(shards, 0);\n- }\n-\n- public PlainShardsIterator(List<ShardRouting> shards, int index) {\n this.shards = shards;\n- this.size = shards.size();\n- if (size == 0) {\n- this.index = 0;\n- } else {\n- this.index = Math.abs(index % size);\n- }\n- this.counter = this.index;\n- this.limit = this.index + size;\n+ this.iterator = shards.listIterator();\n }\n \n @Override\n public void reset() {\n- this.counter = this.index;\n+ iterator = shards.listIterator();\n }\n \n @Override\n public int remaining() {\n- return limit - counter;\n+ return shards.size() - iterator.nextIndex();\n }\n \n @Override\n public ShardRouting firstOrNull() {\n- if (size == 0) {\n+ if (shards.isEmpty()) {\n return null;\n }\n- return shards.get(index);\n+ return shards.get(0);\n }\n \n @Override\n public ShardRouting nextOrNull() {\n- if (size == 0) {\n- return null;\n- }\n- int counter = (this.counter);\n- if (counter >= size) {\n- if (counter >= limit) {\n- return null;\n- }\n- this.counter = counter + 1;\n- return shards.get(counter - size);\n+ if (iterator.hasNext()) {\n+ return iterator.next();\n } else {\n- this.counter = counter + 1;\n- return shards.get(counter);\n+ return null;\n }\n }\n \n @Override\n public int size() {\n- return size;\n+ return shards.size();\n }\n \n @Override\n public int sizeActive() {\n int count = 0;\n- for (int i = 0; i < size; i++) {\n- if (shards.get(i).active()) {\n+ for (ShardRouting shard : shards) {\n+ if (shard.active()) {\n count++;\n }\n }\n@@ -107,8 +82,7 @@ public int sizeActive() {\n @Override\n public int assignedReplicasIncludingRelocating() {\n int count = 0;\n- for (int i = 0; i < size; i++) {\n- ShardRouting shard = shards.get(i);\n+ for (ShardRouting shard : shards) {\n if (shard.unassigned()) {\n continue;\n }",
"filename": "src/main/java/org/elasticsearch/cluster/routing/PlainShardsIterator.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,48 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.routing;\n+\n+import org.elasticsearch.common.util.CollectionUtils;\n+\n+import java.util.List;\n+import java.util.concurrent.atomic.AtomicInteger;\n+\n+/**\n+ * Basic {@link ShardShuffler} implementation that uses an {@link AtomicInteger} to generate seeds and uses a rotation to permute shards.\n+ */\n+public class RotationShardShuffler extends ShardShuffler {\n+\n+ private final AtomicInteger seed;\n+\n+ public RotationShardShuffler(int seed) {\n+ this.seed = new AtomicInteger(seed);\n+ }\n+\n+ @Override\n+ public int nextSeed() {\n+ return seed.getAndIncrement();\n+ }\n+\n+ @Override\n+ public List<ShardRouting> shuffle(List<ShardRouting> shards, int seed) {\n+ return CollectionUtils.rotate(shards, seed);\n+ }\n+\n+}",
"filename": "src/main/java/org/elasticsearch/cluster/routing/RotationShardShuffler.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,47 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.routing;\n+\n+import java.util.List;\n+\n+/**\n+ * A shuffler for shards whose primary goal is to balance load.\n+ */\n+public abstract class ShardShuffler {\n+\n+ /**\n+ * Return a new seed.\n+ */\n+ public abstract int nextSeed();\n+\n+ /**\n+ * Return a shuffled view over the list of shards. The behavior of this method must be deterministic: if the same list and the same seed\n+ * are provided twice, then the result needs to be the same.\n+ */\n+ public abstract List<ShardRouting> shuffle(List<ShardRouting> shards, int seed);\n+\n+ /**\n+ * Equivalent to calling <code>shuffle(shards, nextSeed())</code>.\n+ */\n+ public List<ShardRouting> shuffle(List<ShardRouting> shards) {\n+ return shuffle(shards, nextSeed());\n+ }\n+\n+}",
"filename": "src/main/java/org/elasticsearch/cluster/routing/ShardShuffler.java",
"status": "added"
},
{
"diff": "@@ -24,6 +24,11 @@\n import com.carrotsearch.hppc.LongArrayList;\n import com.google.common.primitives.Longs;\n import org.apache.lucene.util.IntroSorter;\n+import org.elasticsearch.common.Preconditions;\n+\n+import java.util.AbstractList;\n+import java.util.List;\n+import java.util.RandomAccess;\n \n /** Collections-related utility methods. */\n public enum CollectionUtils {\n@@ -187,16 +192,64 @@ public static int sortAndDedup(double[] array, int len) {\n }\n return uniqueCount;\n }\n- \n+\n /**\n * Checks if the given array contains any elements.\n- * \n+ *\n * @param array The array to check\n- * \n+ *\n * @return false if the array contains an element, true if not or the array is null.\n */\n public static boolean isEmpty(Object[] array) {\n return array == null || array.length == 0;\n }\n \n+ /**\n+ * Return a rotated view of the given list with the given distance.\n+ */\n+ public static <T> List<T> rotate(final List<T> list, int distance) {\n+ if (list.isEmpty()) {\n+ return list;\n+ }\n+\n+ int d = distance % list.size();\n+ if (d < 0) {\n+ d += list.size();\n+ }\n+\n+ if (d == 0) {\n+ return list;\n+ }\n+\n+ return new RotatedList<>(list, d);\n+ }\n+\n+ private static class RotatedList<T> extends AbstractList<T> implements RandomAccess {\n+\n+ private final List<T> in;\n+ private final int distance;\n+\n+ public RotatedList(List<T> list, int distance) {\n+ Preconditions.checkArgument(distance >= 0 && distance < list.size());\n+ Preconditions.checkArgument(list instanceof RandomAccess);\n+ this.in = list;\n+ this.distance = distance;\n+ }\n+\n+ @Override\n+ public T get(int index) {\n+ int idx = distance + index;\n+ if (idx < 0 || idx >= in.size()) {\n+ idx -= in.size();\n+ }\n+ return in.get(idx);\n+ }\n+\n+ @Override\n+ public int size() {\n+ return in.size();\n+ }\n+\n+ };\n+\n }",
"filename": "src/main/java/org/elasticsearch/common/util/CollectionUtils.java",
"status": "modified"
},
{
"diff": "@@ -43,7 +43,8 @@ public class RoutingIteratorTests extends ElasticsearchAllocationTestCase {\n \n @Test\n public void testEmptyIterator() {\n- ShardIterator shardIterator = new PlainShardIterator(new ShardId(\"test1\", 0), ImmutableList.<ShardRouting>of(), 0);\n+ ShardShuffler shuffler = new RotationShardShuffler(0);\n+ ShardIterator shardIterator = new PlainShardIterator(new ShardId(\"test1\", 0), shuffler.shuffle(ImmutableList.<ShardRouting>of()));\n assertThat(shardIterator.remaining(), equalTo(0));\n assertThat(shardIterator.firstOrNull(), nullValue());\n assertThat(shardIterator.remaining(), equalTo(0));\n@@ -52,7 +53,7 @@ public void testEmptyIterator() {\n assertThat(shardIterator.nextOrNull(), nullValue());\n assertThat(shardIterator.remaining(), equalTo(0));\n \n- shardIterator = new PlainShardIterator(new ShardId(\"test1\", 0), ImmutableList.<ShardRouting>of(), 1);\n+ shardIterator = new PlainShardIterator(new ShardId(\"test1\", 0), shuffler.shuffle(ImmutableList.<ShardRouting>of()));\n assertThat(shardIterator.remaining(), equalTo(0));\n assertThat(shardIterator.firstOrNull(), nullValue());\n assertThat(shardIterator.remaining(), equalTo(0));\n@@ -61,7 +62,7 @@ public void testEmptyIterator() {\n assertThat(shardIterator.nextOrNull(), nullValue());\n assertThat(shardIterator.remaining(), equalTo(0));\n \n- shardIterator = new PlainShardIterator(new ShardId(\"test1\", 0), ImmutableList.<ShardRouting>of(), 2);\n+ shardIterator = new PlainShardIterator(new ShardId(\"test1\", 0), shuffler.shuffle(ImmutableList.<ShardRouting>of()));\n assertThat(shardIterator.remaining(), equalTo(0));\n assertThat(shardIterator.firstOrNull(), nullValue());\n assertThat(shardIterator.remaining(), equalTo(0));\n@@ -70,7 +71,7 @@ public void testEmptyIterator() {\n assertThat(shardIterator.nextOrNull(), nullValue());\n assertThat(shardIterator.remaining(), equalTo(0));\n \n- shardIterator = new PlainShardIterator(new ShardId(\"test1\", 0), ImmutableList.<ShardRouting>of(), 3);\n+ shardIterator = new PlainShardIterator(new ShardId(\"test1\", 0), shuffler.shuffle(ImmutableList.<ShardRouting>of()));\n assertThat(shardIterator.remaining(), equalTo(0));\n assertThat(shardIterator.firstOrNull(), nullValue());\n assertThat(shardIterator.remaining(), equalTo(0));",
"filename": "src/test/java/org/elasticsearch/cluster/structure/RoutingIteratorTests.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,64 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import com.google.common.collect.ImmutableList;\n+import com.google.common.collect.Iterables;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import java.util.ArrayList;\n+import java.util.HashSet;\n+import java.util.List;\n+\n+public class CollectionUtilsTests extends ElasticsearchTestCase {\n+\n+ @Test\n+ public void rotateEmpty() {\n+ assertTrue(CollectionUtils.rotate(ImmutableList.of(), randomInt()).isEmpty());\n+ }\n+\n+ @Test\n+ public void rotate() {\n+ final int iters = scaledRandomIntBetween(10, 100);\n+ for (int k = 0; k < iters; ++k) {\n+ final int size = randomIntBetween(1, 100);\n+ final int distance = randomInt();\n+ List<Object> list = new ArrayList<>();\n+ for (int i = 0; i < size; ++i) {\n+ list.add(new Object());\n+ }\n+ final List<Object> rotated = CollectionUtils.rotate(list, distance);\n+ // check content is the same\n+ assertEquals(rotated.size(), list.size());\n+ assertEquals(Iterables.size(rotated), list.size());\n+ assertEquals(new HashSet<>(rotated), new HashSet<>(list));\n+ // check stability\n+ for (int j = randomInt(4); j >= 0; --j) {\n+ assertEquals(rotated, CollectionUtils.rotate(list, distance));\n+ }\n+ // reverse\n+ if (distance != Integer.MIN_VALUE) {\n+ assertEquals(list, CollectionUtils.rotate(CollectionUtils.rotate(list, distance), -distance));\n+ }\n+ }\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/common/util/CollectionUtilsTests.java",
"status": "added"
}
]
} |
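The rotation discussed in the review comments above is the heart of this change: instead of doing index arithmetic inside every iterator, the shuffler hands out a deterministic rotated view of the shard list. Below is a minimal standalone sketch of that idea, kept independent of the Elasticsearch classes; the `rotate` helper and the shard names are illustrative only, not the actual `CollectionUtils` code.

```java
import java.util.AbstractList;
import java.util.Arrays;
import java.util.List;

public class RotateSketch {

    // Return a read-only rotated view of the list; same list + same distance => same result.
    static <T> List<T> rotate(final List<T> list, int distance) {
        if (list.isEmpty()) {
            return list;
        }
        int d = distance % list.size();
        if (d < 0) {
            d += list.size();
        }
        final int shift = d;
        return new AbstractList<T>() {
            @Override
            public T get(int index) {
                return list.get((shift + index) % list.size());
            }

            @Override
            public int size() {
                return list.size();
            }
        };
    }

    public static void main(String[] args) {
        List<String> shards = Arrays.asList("s0", "s1", "s2", "s3");
        System.out.println(rotate(shards, 6));   // [s2, s3, s0, s1]
        System.out.println(rotate(shards, -1));  // [s3, s0, s1, s2]
        // Rotating by d and then by -d restores the original order.
        System.out.println(rotate(rotate(shards, 5), -5)); // [s0, s1, s2, s3]
    }
}
```

Because the view is fully determined by the list and the distance, two calls with the same seed return the same order, which is exactly the stability property the assertion in the review thread checks.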
{
"body": "This breaks in 1.1:\n\nCreation of an index and one document in it\n\n``` Json\ncurl -XPOST 'localhost:9200/twitter/tweet/1' -d '\n{\n\"user\" : \"kimchy\",\n\"post_date\" : \"2009-11-15T14:12:12\",\n\"message\" : \"trying out Elasticsearch\"\n}'\n```\n\nFollowing will work on 1.0.2 but fail on 1.1.0\n\n``` Json\ncurl -XPOST 'localhost:9200/twitter/tweet/_search' -d '\n{\n\"query\": {\n\"match_phrase_prefix\": {\n\"message\": \"try\"\n}\n}\n}'\n```\n",
"comments": [],
"number": 5551,
"title": "match_phrase_prefix broken"
} | {
"body": "We miss to add a single term to a prefix query if the query in only a\nsingle term.\n\nCloses #5551\n",
"number": 5553,
"review_comments": [
{
"body": "I think it should either be an `else if` or the `if` should be on the next line.\n",
"created_at": "2014-03-26T15:11:56Z"
},
{
"body": "sure thing!\n",
"created_at": "2014-03-26T15:13:40Z"
}
],
"title": "Convert TermQuery to PrefixQuery if PHRASE_PREFIX is set"
} | {
"commits": [
{
"message": "Convert TermQuery to PrefixQuery if PHRASE_PREFIX is set\n\nWe miss to add a single term to a prefix query if the query in only a\nsingle term.\n\nCloses #5551"
}
],
"files": [
{
"diff": "@@ -251,28 +251,28 @@ protected Query newTermQuery(Term term) {\n \n \n public Query createPhrasePrefixQuery(String field, String queryText, int phraseSlop, int maxExpansions) {\n- Query query = createFieldQuery(getAnalyzer(), Occur.MUST, field, queryText, true, phraseSlop);\n+ final Query query = createFieldQuery(getAnalyzer(), Occur.MUST, field, queryText, true, phraseSlop);\n+ final MultiPhrasePrefixQuery prefixQuery = new MultiPhrasePrefixQuery();\n+ prefixQuery.setMaxExpansions(maxExpansions);\n+ prefixQuery.setSlop(phraseSlop);\n if (query instanceof PhraseQuery) {\n PhraseQuery pq = (PhraseQuery)query;\n- MultiPhrasePrefixQuery prefixQuery = new MultiPhrasePrefixQuery();\n- prefixQuery.setMaxExpansions(maxExpansions);\n Term[] terms = pq.getTerms();\n int[] positions = pq.getPositions();\n for (int i = 0; i < terms.length; i++) {\n prefixQuery.add(new Term[] {terms[i]}, positions[i]);\n }\n- prefixQuery.setSlop(phraseSlop);\n return prefixQuery;\n } else if (query instanceof MultiPhraseQuery) {\n MultiPhraseQuery pq = (MultiPhraseQuery)query;\n- MultiPhrasePrefixQuery prefixQuery = new MultiPhrasePrefixQuery();\n- prefixQuery.setMaxExpansions(maxExpansions);\n List<Term[]> terms = pq.getTermArrays();\n int[] positions = pq.getPositions();\n for (int i = 0; i < terms.size(); i++) {\n prefixQuery.add(terms.get(i), positions[i]);\n }\n- prefixQuery.setSlop(phraseSlop);\n+ return prefixQuery;\n+ } else if (query instanceof TermQuery) {\n+ prefixQuery.add(((TermQuery) query).getTerm());\n return prefixQuery;\n }\n return query;",
"filename": "src/main/java/org/elasticsearch/index/search/MatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -2295,11 +2295,18 @@ public void testNGramCopyField() {\n public void testMatchPhrasePrefixQuery() {\n createIndex(\"test1\");\n client().prepareIndex(\"test1\", \"type1\", \"1\").setSource(\"field\", \"Johnnie Walker Black Label\").get();\n+ client().prepareIndex(\"test1\", \"type1\", \"2\").setSource(\"field\", \"trying out Elasticsearch\").get();\n refresh();\n \n SearchResponse searchResponse = client().prepareSearch().setQuery(matchQuery(\"field\", \"Johnnie la\").slop(between(2,5)).type(Type.PHRASE_PREFIX)).get();\n assertHitCount(searchResponse, 1l);\n assertSearchHits(searchResponse, \"1\");\n+ searchResponse = client().prepareSearch().setQuery(matchQuery(\"field\", \"trying\").type(Type.PHRASE_PREFIX)).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"2\");\n+ searchResponse = client().prepareSearch().setQuery(matchQuery(\"field\", \"try\").type(Type.PHRASE_PREFIX)).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"2\");\n }\n \n }",
"filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java",
"status": "modified"
}
]
} |
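For context on the fix above: when the analyzed text is a single token, `createFieldQuery` returns a plain term query, and before this change that case fell through without being wrapped in the prefix query, so the trailing-wildcard behaviour was silently lost. The following is a rough plain-Java sketch of that dispatch, with strings standing in for the Lucene query objects; nothing here is the actual Elasticsearch code.

```java
import java.util.Arrays;
import java.util.List;

public class PhrasePrefixDispatchSketch {

    static String phrasePrefix(List<String> analyzedTokens) {
        if (analyzedTokens.size() > 1) {
            // several tokens: phrase query whose last position is expanded as a prefix
            return "phrase_prefix" + analyzedTokens;
        } else if (analyzedTokens.size() == 1) {
            // single token: this is the branch that was missing before the fix;
            // falling through here returned an exact term query instead
            return "prefix[" + analyzedTokens.get(0) + "*]";
        }
        return "match_none";
    }

    public static void main(String[] args) {
        System.out.println(phrasePrefix(Arrays.asList("johnnie", "la"))); // phrase_prefix[johnnie, la]
        System.out.println(phrasePrefix(Arrays.asList("try")));           // prefix[try*]
    }
}
```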
{
"body": "``` sh\n# add mapping\ncurl -XPUT localhost:9200/test/test/_mapping -d '{\n \"test\": {\n \"properties\": {\n \"testGeo\": {\n \"type\": \"geo_point\",\n \"validate\": false\n }\n }\n }\n}'\n\n# update validate to true\ncurl -XPUT localhost:9200/test/test/_mapping -d '{\n \"test\": {\n \"properties\": {\n \"testGeo\": {\n \"type\": \"geo_point\",\n \"validate\": true\n }\n }\n }\n}'\n```\n\nAfter update, the `validate` is still false and no exception is thrown. Other fields like `store` etc. works.\n",
"comments": [],
"number": 5505,
"title": "GeoPointFieldMapper doesn't merge GeoPoint specific properties"
} | {
"body": "closes #5505\nI assume only `validate_lat` and `validate_lon` can be updated, other fields will throw exceptions. Please let me know if I was wrong.\n\nAlso added `precision_step` to the document page\n",
"number": 5506,
"review_comments": [],
"title": "merge GeoPoint specific mapping properties"
} | {
"commits": [
{
"message": "merge GeoPoint specific mapping properties"
}
],
"files": [
{
"diff": "@@ -154,6 +154,10 @@ is `true`).\n |`normalize_lat` |Set to `true` to normalize latitude.\n \n |`normalize_lon` |Set to `true` to normalize longitude.\n+\n+|`precision_step` |The precision step (number of terms generated for\n+each number value) for `.lat` and `.lon` fields if `lat_lon` is set to `true`.\n+Defaults to `4`.\n |=======================================================================\n \n [float]",
"filename": "docs/reference/mapping/types/geo-point-type.asciidoc",
"status": "modified"
},
{
"diff": "@@ -394,8 +394,8 @@ public GeoPoint decode(long latBits, long lonBits, GeoPoint out) {\n \n private final StringFieldMapper geohashMapper;\n \n- private final boolean validateLon;\n- private final boolean validateLat;\n+ private boolean validateLon;\n+ private boolean validateLat;\n \n private final boolean normalizeLon;\n private final boolean normalizeLat;\n@@ -613,7 +613,38 @@ public void close() {\n @Override\n public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappingException {\n super.merge(mergeWith, mergeContext);\n- // TODO: geo-specific properties\n+ if (!this.getClass().equals(mergeWith.getClass())) {\n+ return;\n+ }\n+ GeoPointFieldMapper fieldMergeWith = (GeoPointFieldMapper) mergeWith;\n+\n+ if (this.enableLatLon != fieldMergeWith.enableLatLon) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different lat_lon\");\n+ }\n+ if (this.enableGeoHash != fieldMergeWith.enableGeoHash) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different geohash\");\n+ }\n+ if (this.geoHashPrecision != fieldMergeWith.geoHashPrecision) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different geohash_precision\");\n+ }\n+ if (this.enableGeohashPrefix != fieldMergeWith.enableGeohashPrefix) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different geohash_prefix\");\n+ }\n+ if (this.normalizeLat != fieldMergeWith.normalizeLat) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different normalize_lat\");\n+ }\n+ if (this.normalizeLon != fieldMergeWith.normalizeLon) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different normalize_lon\");\n+ }\n+ if (this.precisionStep != fieldMergeWith.precisionStep) {\n+ mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] has different precision_step\");\n+ }\n+\n+\n+ if (!mergeContext.mergeFlags().simulate()) {\n+ this.validateLat = fieldMergeWith.validateLat;\n+ this.validateLon = fieldMergeWith.validateLon;\n+ }\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java",
"status": "modified"
}
]
} |
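The merge logic added in the diff above follows a common mapper pattern: structural settings produce merge conflicts, while the few updatable ones are copied over when the merge is not a simulation. Below is a simplified sketch of that pattern; the class and field names are made up for illustration, and only two of the checked settings are shown.

```java
import java.util.ArrayList;
import java.util.List;

public class GeoPointMergeSketch {

    boolean latLon;
    int geohashPrecision;
    boolean validateLat;
    boolean validateLon;

    GeoPointMergeSketch(boolean latLon, int geohashPrecision, boolean validateLat, boolean validateLon) {
        this.latLon = latLon;
        this.geohashPrecision = geohashPrecision;
        this.validateLat = validateLat;
        this.validateLon = validateLon;
    }

    List<String> merge(GeoPointMergeSketch mergeWith, boolean simulate) {
        List<String> conflicts = new ArrayList<>();
        // structural options cannot change after the fact
        if (this.latLon != mergeWith.latLon) {
            conflicts.add("mapper has different lat_lon");
        }
        if (this.geohashPrecision != mergeWith.geohashPrecision) {
            conflicts.add("mapper has different geohash_precision");
        }
        if (!simulate) {
            // only the validation flags are considered updatable
            this.validateLat = mergeWith.validateLat;
            this.validateLon = mergeWith.validateLon;
        }
        return conflicts;
    }

    public static void main(String[] args) {
        GeoPointMergeSketch current = new GeoPointMergeSketch(true, 12, false, false);
        GeoPointMergeSketch update = new GeoPointMergeSketch(true, 12, true, true);
        System.out.println(current.merge(update, false)); // [] -> validate flags now true
        System.out.println(current.validateLat);          // true

        GeoPointMergeSketch bad = new GeoPointMergeSketch(false, 12, true, true);
        System.out.println(current.merge(bad, true));     // [mapper has different lat_lon]
    }
}
```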
{
"body": "Currently we're using the default `escape` method from Mustache, which is intended for escaping HTML, not JSON.\n\nThis results in things like `\"` -> `"`\n\nInstead, we should be using these escapes:\n\n```\n\\b Backspace (ascii code 08)\n\\f Form feed (ascii code 0C)\n\\n New line\n\\r Carriage return\n\\t Tab\n\\v Vertical tab\n\\\" Double quote\n\\\\ Backslash \n```\n",
"comments": [
{
"body": "Test case:\n\n```\nDELETE /t\n\nPUT /t\n{\n \"mappings\": {\n \"foo\": {\n \"properties\": {\n \"bar\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n}\n\nPUT /t/foo/1\n{\n \"foo\": \"bar&\"\n}\n\nGET /_search/template\n{\n \"template\": {\n \"query\": {\n \"term\": {\n \"foo\": \"{{foo}}\"\n }\n }\n },\n \"params\": {\n \"foo\": \"bar&\"\n }\n}\n```\n",
"created_at": "2014-03-20T12:17:36Z"
},
{
"body": "cool I will take a look at it\n",
"created_at": "2014-03-20T12:44:55Z"
}
],
"number": 5473,
"title": "Mustache templates should escape JSON, not HTML"
} | {
"body": "The default mustache engine was using HTML escaping which breaks queries\nif used with JSON etc. This commit adds escaping for:\n\n```\n\\b Backspace (ascii code 08)\n\\f Form feed (ascii code 0C)\n\\n New line\n\\r Carriage return\n\\t Tab\n\\v Vertical tab\n\\\" Double quote\n\\\\ Backslash\n```\n\nCloses #5473\n",
"number": 5479,
"review_comments": [
{
"body": "Should it work on code points instead of chars? There seems to be valid code points that have some of the characters that we are escaping as a low surrogate and in that case I think we should not escape?\n",
"created_at": "2014-03-20T17:18:42Z"
},
{
"body": "the chars are fine it's just the name that is wrong :) we are not escaping any surrogates etc.\n",
"created_at": "2014-03-20T17:25:47Z"
}
],
"title": "Add simple escape method for special characters to template query"
} | {
"commits": [
{
"message": "Add simple escape method for special characters to template query\n\nThe default mustache engine was using HTML escaping which breaks queries\nif used with JSON etc. This commit adds escaping for:\n\n```\n\\b Backspace (ascii code 08)\n\\f Form feed (ascii code 0C)\n\\n New line\n\\r Carriage return\n\\t Tab\n\\v Vertical tab\n\\\" Double quote\n\\\\ Backslash\n```\n\nCloses #5473"
}
],
"files": [
{
"diff": "@@ -12,7 +12,7 @@\n index: test\n type: testtype\n id: 2\n- body: { \"text\": \"value2\" }\n+ body: { \"text\": \"value2 value3\" }\n - do:\n indices.refresh: {}\n \n@@ -39,3 +39,10 @@\n body: { \"query\": { \"template\": { \"query\": \"{\\\"match_{{template}}\\\": {}}\", \"params\" : { \"template\" : \"all\" } } } }\n \n - match: { hits.total: 2 }\n+\n+ - do:\n+ search:\n+ body: { \"query\": { \"template\": { \"query\": \"{\\\"query_string\\\": { \\\"query\\\" : \\\"{{query}}\\\" }}\", \"params\" : { \"query\" : \"text:\\\"value2 value3\\\"\" } } } }\n+\n+\n+ - match: { hits.total: 1 }",
"filename": "rest-api-spec/test/search/30_template_query_execution.yaml",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,67 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.script.mustache;\n+\n+import com.github.mustachejava.DefaultMustacheFactory;\n+import com.github.mustachejava.MustacheException;\n+\n+import java.io.IOException;\n+import java.io.Writer;\n+\n+/**\n+ * A MustacheFactory that does simple JSON escaping.\n+ */\n+public final class JsonEscapingMustacheFactory extends DefaultMustacheFactory {\n+\n+ @Override\n+ public void encode(String value, Writer writer) {\n+ try {\n+ escape(value, writer);\n+ } catch (IOException e) {\n+ throw new MustacheException(\"Failed to encode value: \" + value);\n+ }\n+ }\n+\n+ public static Writer escape(String value, Writer writer) throws IOException {\n+ for (int i = 0; i < value.length(); i++) {\n+ final char character = value.charAt(i);\n+ if (isEscapeChar(character)) {\n+ writer.write('\\\\');\n+ }\n+ writer.write(character);\n+ }\n+ return writer;\n+ }\n+\n+ public static boolean isEscapeChar(char c) {\n+ switch(c) {\n+ case '\\b':\n+ case '\\f':\n+ case '\\n':\n+ case '\\r':\n+ case '\"':\n+ case '\\\\':\n+ case '\\u000B': // vertical tab\n+ case '\\t':\n+ return true;\n+ }\n+ return false;\n+ }\n+\n+}",
"filename": "src/main/java/org/elasticsearch/script/mustache/JsonEscapingMustacheFactory.java",
"status": "added"
},
{
"diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.script.mustache;\n \n-import com.github.mustachejava.DefaultMustacheFactory;\n import com.github.mustachejava.Mustache;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.component.AbstractComponent;\n@@ -79,7 +78,7 @@ public MustacheScriptEngineService(Settings settings) {\n * */\n public Object compile(String template) {\n /** Factory to generate Mustache objects from. */\n- return (new DefaultMustacheFactory()).compile(new FastStringReader(template), \"query-template\");\n+ return (new JsonEscapingMustacheFactory()).compile(new FastStringReader(template), \"query-template\");\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/script/mustache/MustacheScriptEngineService.java",
"status": "modified"
},
{
"diff": "@@ -18,38 +18,98 @@\n */\n package org.elasticsearch.script.mustache;\n \n+import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Before;\n import org.junit.Test;\n \n+import java.io.IOException;\n+import java.io.StringWriter;\n import java.nio.charset.Charset;\n import java.util.HashMap;\n import java.util.Map;\n \n+import static org.hamcrest.Matchers.equalTo;\n+\n /**\n * Mustache based templating test\n- * */\n+ */\n public class MustacheScriptEngineTest extends ElasticsearchTestCase {\n private MustacheScriptEngineService qe;\n \n- private static String TEMPLATE = \"GET _search {\\\"query\\\": \" + \"{\\\"boosting\\\": {\" + \"\\\"positive\\\": {\\\"match\\\": {\\\"body\\\": \\\"gift\\\"}},\"\n- + \"\\\"negative\\\": {\\\"term\\\": {\\\"body\\\": {\\\"value\\\": \\\"solr\\\"}\" + \"}}, \\\"negative_boost\\\": {{boost_val}} } }}\";\n-\n @Before\n public void setup() {\n qe = new MustacheScriptEngineService(ImmutableSettings.Builder.EMPTY_SETTINGS);\n }\n \n @Test\n public void testSimpleParameterReplace() {\n- Map<String, Object> vars = new HashMap<String, Object>();\n- vars.put(\"boost_val\", \"0.3\");\n- BytesReference o = (BytesReference) qe.execute(qe.compile(TEMPLATE), vars);\n- assertEquals(\"GET _search {\\\"query\\\": {\\\"boosting\\\": {\\\"positive\\\": {\\\"match\\\": {\\\"body\\\": \\\"gift\\\"}},\"\n- + \"\\\"negative\\\": {\\\"term\\\": {\\\"body\\\": {\\\"value\\\": \\\"solr\\\"}}}, \\\"negative_boost\\\": 0.3 } }}\",\n- new String(o.toBytes(), Charset.forName(\"UTF-8\")));\n+ {\n+ String template = \"GET _search {\\\"query\\\": \" + \"{\\\"boosting\\\": {\" + \"\\\"positive\\\": {\\\"match\\\": {\\\"body\\\": \\\"gift\\\"}},\"\n+ + \"\\\"negative\\\": {\\\"term\\\": {\\\"body\\\": {\\\"value\\\": \\\"solr\\\"}\" + \"}}, \\\"negative_boost\\\": {{boost_val}} } }}\";\n+ Map<String, Object> vars = new HashMap<String, Object>();\n+ vars.put(\"boost_val\", \"0.3\");\n+ BytesReference o = (BytesReference) qe.execute(qe.compile(template), vars);\n+ assertEquals(\"GET _search {\\\"query\\\": {\\\"boosting\\\": {\\\"positive\\\": {\\\"match\\\": {\\\"body\\\": \\\"gift\\\"}},\"\n+ + \"\\\"negative\\\": {\\\"term\\\": {\\\"body\\\": {\\\"value\\\": \\\"solr\\\"}}}, \\\"negative_boost\\\": 0.3 } }}\",\n+ new String(o.toBytes(), Charset.forName(\"UTF-8\")));\n+ }\n+ {\n+ String template = \"GET _search {\\\"query\\\": \" + \"{\\\"boosting\\\": {\" + \"\\\"positive\\\": {\\\"match\\\": {\\\"body\\\": \\\"gift\\\"}},\"\n+ + \"\\\"negative\\\": {\\\"term\\\": {\\\"body\\\": {\\\"value\\\": \\\"{{body_val}}\\\"}\" + \"}}, \\\"negative_boost\\\": {{boost_val}} } }}\";\n+ Map<String, Object> vars = new HashMap<String, Object>();\n+ vars.put(\"boost_val\", \"0.3\");\n+ vars.put(\"body_val\", \"\\\"quick brown\\\"\");\n+ BytesReference o = (BytesReference) qe.execute(qe.compile(template), vars);\n+ assertEquals(\"GET _search {\\\"query\\\": {\\\"boosting\\\": {\\\"positive\\\": {\\\"match\\\": {\\\"body\\\": \\\"gift\\\"}},\"\n+ + \"\\\"negative\\\": {\\\"term\\\": {\\\"body\\\": {\\\"value\\\": \\\"\\\\\\\"quick brown\\\\\\\"\\\"}}}, \\\"negative_boost\\\": 0.3 } }}\",\n+ new String(o.toBytes(), Charset.forName(\"UTF-8\")));\n+ }\n+ }\n+\n+ @Test\n+ public void testEscapeJson() throws IOException {\n+ {\n+ StringWriter writer = new StringWriter();\n+ 
JsonEscapingMustacheFactory.escape(\"hello \\n world\", writer);\n+ assertThat(writer.toString(), equalTo(\"hello \\\\\\n world\"));\n+ }\n+ {\n+ StringWriter writer = new StringWriter();\n+ JsonEscapingMustacheFactory.escape(\"\\n\", writer);\n+ assertThat(writer.toString(), equalTo(\"\\\\\\n\"));\n+ }\n+\n+ Character[] specialChars = new Character[]{'\\f', '\\n', '\\r', '\"', '\\\\', (char) 11, '\\t', '\\b' };\n+ int iters = scaledRandomIntBetween(100, 1000);\n+ for (int i = 0; i < iters; i++) {\n+ int rounds = scaledRandomIntBetween(1, 20);\n+ StringWriter escaped = new StringWriter();\n+ StringWriter writer = new StringWriter();\n+ for (int j = 0; j < rounds; j++) {\n+ String s = getChars();\n+ writer.write(s);\n+ escaped.write(s);\n+ char c = RandomPicks.randomFrom(getRandom(), specialChars);\n+ writer.append(c);\n+ escaped.append('\\\\');\n+ escaped.append(c);\n+ }\n+ StringWriter target = new StringWriter();\n+ assertThat(escaped.toString(), equalTo(JsonEscapingMustacheFactory.escape(writer.toString(), target).toString()));\n+ }\n+ }\n+\n+ private String getChars() {\n+ String string = randomRealisticUnicodeOfCodepointLengthBetween(0, 10);\n+ for (int i = 0; i < string.length(); i++) {\n+ if (JsonEscapingMustacheFactory.isEscapeChar(string.charAt(i))) {\n+ return string.substring(0, i);\n+ }\n+ }\n+ return string;\n }\n \n }",
"filename": "src/test/java/org/elasticsearch/script/mustache/MustacheScriptEngineTest.java",
"status": "modified"
}
]
} |
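As a standalone illustration of the escaping behaviour introduced above: each special character is prefixed with a backslash instead of being replaced by an HTML entity, so a quoted phrase survives inside the generated query JSON. The class below is a self-contained approximation of that logic, not the factory shipped in the PR.

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

public class JsonEscapeSketch {

    // Write the value, prefixing each JSON-significant character with a backslash.
    static void escape(String value, Writer writer) throws IOException {
        for (int i = 0; i < value.length(); i++) {
            char c = value.charAt(i);
            switch (c) {
                case '\b': case '\f': case '\n': case '\r':
                case '\t': case '\u000B': /* vertical tab */ case '"': case '\\':
                    writer.write('\\');
                    break;
                default:
                    break;
            }
            writer.write(c);
        }
    }

    public static void main(String[] args) throws IOException {
        StringWriter out = new StringWriter();
        escape("text:\"value2 value3\"", out);
        System.out.println(out); // text:\"value2 value3\"  (HTML escaping would have produced &quot;)
    }
}
```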
{
"body": "```\nPUT /my_index/my_type/1\n{\n \"brand\": \"Johnnie Walker Black Label\"\n}\n\nGET /my_index/_search\n{\n \"query\": {\n \"match\": {\n \"brand\": {\n \"type\": \"phrase_prefix\", \n \"query\": \"Johnnie la\",\n \"slop\": 10\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "@s1monw I ran a git bisect on this and it was #5005 that broke this.\n",
"created_at": "2014-03-14T18:04:56Z"
}
],
"number": 5437,
"title": "match_phrase_prefix no longer supports slop"
} | {
"body": "This fixes a regression introduced by #5005 where the query slop\nwas simply ignored when a `match_phrase_prefix` type was set.\n\nCloses #5437\n",
"number": 5438,
"review_comments": [],
"title": "Add slop to prefix phrase query after parsing query string"
} | {
"commits": [
{
"message": "Add slop to prefix phrase query after parsing query string\n\nThis fixes a regression introduced by #5005 where the query slop\nwas simply ignored when a `match_phrase_prefix` type was set.\n\nCloses #5437"
}
],
"files": [
{
"diff": "@@ -261,6 +261,7 @@ public Query createPhrasePrefixQuery(String field, String queryText, int phraseS\n for (int i = 0; i < terms.length; i++) {\n prefixQuery.add(new Term[] {terms[i]}, positions[i]);\n }\n+ prefixQuery.setSlop(phraseSlop);\n return prefixQuery;\n } else if (query instanceof MultiPhraseQuery) {\n MultiPhraseQuery pq = (MultiPhraseQuery)query;\n@@ -271,6 +272,7 @@ public Query createPhrasePrefixQuery(String field, String queryText, int phraseS\n for (int i = 0; i < terms.size(); i++) {\n prefixQuery.add(terms.get(i), positions[i]);\n }\n+ prefixQuery.setSlop(phraseSlop);\n return prefixQuery;\n }\n return query;",
"filename": "src/main/java/org/elasticsearch/index/search/MatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -2292,4 +2292,14 @@ public void testNGramCopyField() {\n assertHitCount(searchResponse, 1l);\n }\n \n+ public void testMatchPhrasePrefixQuery() {\n+ createIndex(\"test1\");\n+ client().prepareIndex(\"test1\", \"type1\", \"1\").setSource(\"field\", \"Johnnie Walker Black Label\").get();\n+ refresh();\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(matchQuery(\"field\", \"Johnnie la\").slop(between(2,5)).type(Type.PHRASE_PREFIX)).get();\n+ assertHitCount(searchResponse, 1l);\n+ assertSearchHits(searchResponse, \"1\");\n+ }\n+\n }",
"filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java",
"status": "modified"
}
]
} |
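A quick back-of-the-envelope check of why the regression test above asks for a slop between 2 and 5: in "Johnnie Walker Black Label" the token matching the `la` prefix sits two positions later than the query "Johnnie la" expects. The sketch below computes that displacement for this simple case; real phrase slop is an edit-distance-style measure over all terms, so this is only an approximation.

```java
public class SlopSketch {
    public static void main(String[] args) {
        String[] docTokens = {"johnnie", "walker", "black", "label"};
        String[] queryTokens = {"johnnie", "la"}; // last token is treated as a prefix

        int expectedPos = queryTokens.length - 1; // query expects the prefix match at position 1
        int actualPos = -1;
        for (int i = 0; i < docTokens.length; i++) {
            if (docTokens[i].startsWith(queryTokens[queryTokens.length - 1])) {
                actualPos = i; // "label" matches the "la" prefix at position 3
            }
        }
        int requiredSlop = Math.abs(actualPos - expectedPos);
        System.out.println("minimum slop needed: " + requiredSlop); // 2
    }
}
```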
{
"body": "Document scripts can return an array of up to 4 values, but more than 4 cause an array out of bounds exception to be thrown:\n\n```\nDELETE /myindex\nPUT /myindex/t/1\n{}\n\nGET /myindex/_search\n{\n \"aggs\": {\n \"foo\": {\n \"date_histogram\": {\n \"script\": \"[1388534400000,1388534400000,1388534400000,1388534400000]\",\n \"interval\": \"hour\"\n }\n }\n }\n}\n```\n\nThis throws an exception:\n\n```\nGET /myindex/_search\n{\n \"aggs\": {\n \"foo\": {\n \"date_histogram\": {\n \"script\": \"[1388534400000,1388534400000,1388534400000,1388534400000,1388534400000]\",\n \"interval\": \"hour\"\n }\n }\n }\n}\n```\n",
"comments": [
{
"body": "this also fails on 1.0. Should we backport this?\n",
"created_at": "2014-03-17T14:54:25Z"
},
{
"body": "it should, yes... backported that is\n",
"created_at": "2014-03-17T14:55:32Z"
},
{
"body": "I just backported it.\n",
"created_at": "2014-03-18T08:20:57Z"
}
],
"number": 5414,
"title": "Scripts in aggs can't return more than 4 values"
} | {
"body": "A missing call to ArrayUtil.grow prevented the array that stores the values\nfrom growing in case the number of values returned by the script was higher\nthan the original size of the array.\n\nClose #5414\n",
"number": 5416,
"review_comments": [],
"title": "Allow scripts to return more than 4 values in aggregations."
} | {
"commits": [
{
"message": "Allow scripts to return more than 4 values in aggregations.\n\nA missing call to ArrayUtil.grow prevented the array that stores the values\nfrom growing in case the number of values returned by the script was higher\nthan the original size of the array.\n\nClose #5414"
}
],
"files": [
{
"diff": "@@ -36,7 +36,7 @@ public class ScriptDoubleValues extends DoubleValues implements ScriptValues {\n final SearchScript script;\n \n private Object value;\n- private double[] values = new double[4];\n+ private double[] values = new double[1];\n private int valueCount;\n private int valueOffset;\n \n@@ -76,6 +76,7 @@ else if (value.getClass().isArray()) {\n \n else if (value instanceof Collection) {\n valueCount = ((Collection<?>) value).size();\n+ values = ArrayUtil.grow(values, valueCount);\n int i = 0;\n for (Iterator<?> it = ((Collection<?>) value).iterator(); it.hasNext(); ++i) {\n values[i] = ((Number) it.next()).doubleValue();",
"filename": "src/main/java/org/elasticsearch/search/aggregations/support/numeric/ScriptDoubleValues.java",
"status": "modified"
},
{
"diff": "@@ -36,7 +36,7 @@ public class ScriptLongValues extends LongValues implements ScriptValues {\n final SearchScript script;\n \n private Object value;\n- private long[] values = new long[4];\n+ private long[] values = new long[1];\n private int valueCount;\n private int valueOffset;\n \n@@ -69,12 +69,13 @@ else if (value.getClass().isArray()) {\n valueCount = Array.getLength(value);\n values = ArrayUtil.grow(values, valueCount);\n for (int i = 0; i < valueCount; ++i) {\n- values[i] = ((Number) Array.get(value, i++)).longValue();\n+ values[i] = ((Number) Array.get(value, i)).longValue();\n }\n }\n \n else if (value instanceof Collection) {\n valueCount = ((Collection<?>) value).size();\n+ values = ArrayUtil.grow(values, valueCount);\n int i = 0;\n for (Iterator<?> it = ((Collection<?>) value).iterator(); it.hasNext(); ++i) {\n values[i] = ((Number) it.next()).longValue();",
"filename": "src/main/java/org/elasticsearch/search/aggregations/support/numeric/ScriptLongValues.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,162 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.support;\n+\n+import com.carrotsearch.randomizedtesting.generators.RandomStrings;\n+import org.apache.lucene.index.AtomicReaderContext;\n+import org.apache.lucene.search.Scorer;\n+import org.apache.lucene.util.BytesRef;\n+import org.elasticsearch.script.SearchScript;\n+import org.elasticsearch.search.aggregations.support.bytes.ScriptBytesValues;\n+import org.elasticsearch.search.aggregations.support.numeric.ScriptDoubleValues;\n+import org.elasticsearch.search.aggregations.support.numeric.ScriptLongValues;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import java.util.Arrays;\n+import java.util.Map;\n+\n+public class ScriptValuesTests extends ElasticsearchTestCase {\n+\n+ private static class FakeSearchScript implements SearchScript {\n+ \n+ private final Object[][] values;\n+ int index;\n+ \n+ FakeSearchScript(Object[][] values) {\n+ this.values = values;\n+ index = -1;\n+ }\n+\n+ @Override\n+ public void setNextVar(String name, Object value) {\n+ }\n+\n+ @Override\n+ public Object run() {\n+ // Script values are supposed to support null, single values, arrays and collections\n+ final Object[] values = this.values[index];\n+ if (values.length <= 1 && randomBoolean()) {\n+ return values.length == 0 ? null : values[0];\n+ }\n+ return randomBoolean() ? 
values : Arrays.asList(values);\n+ }\n+\n+ @Override\n+ public Object unwrap(Object value) {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public void setNextReader(AtomicReaderContext reader) {\n+ }\n+\n+ @Override\n+ public void setScorer(Scorer scorer) {\n+ }\n+\n+ @Override\n+ public void setNextDocId(int doc) {\n+ index = doc;\n+ }\n+\n+ @Override\n+ public void setNextSource(Map<String, Object> source) {\n+ }\n+\n+ @Override\n+ public void setNextScore(float score) {\n+ }\n+\n+ @Override\n+ public float runAsFloat() {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public long runAsLong() {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ @Override\n+ public double runAsDouble() {\n+ throw new UnsupportedOperationException();\n+ }\n+\n+ }\n+\n+ @Test\n+ public void longs() {\n+ final Object[][] values = new Long[randomInt(10)][];\n+ for (int i = 0; i < values.length; ++i) {\n+ values[i] = new Long[randomInt(8)];\n+ for (int j = 0; j < values[i].length; ++j) {\n+ values[i][j] = randomLong();\n+ }\n+ }\n+ FakeSearchScript script = new FakeSearchScript(values);\n+ ScriptLongValues scriptValues = new ScriptLongValues(script);\n+ for (int i = 0; i < values.length; ++i) {\n+ assertEquals(values[i].length, scriptValues.setDocument(i));\n+ for (int j = 0; j < values[i].length; ++j) {\n+ assertEquals(values[i][j], scriptValues.nextValue());\n+ }\n+ }\n+ }\n+\n+ @Test\n+ public void doubles() {\n+ final Object[][] values = new Double[randomInt(10)][];\n+ for (int i = 0; i < values.length; ++i) {\n+ values[i] = new Double[randomInt(8)];\n+ for (int j = 0; j < values[i].length; ++j) {\n+ values[i][j] = randomDouble();\n+ }\n+ }\n+ FakeSearchScript script = new FakeSearchScript(values);\n+ ScriptDoubleValues scriptValues = new ScriptDoubleValues(script);\n+ for (int i = 0; i < values.length; ++i) {\n+ assertEquals(values[i].length, scriptValues.setDocument(i));\n+ for (int j = 0; j < values[i].length; ++j) {\n+ assertEquals(values[i][j], scriptValues.nextValue());\n+ }\n+ }\n+ }\n+\n+ @Test\n+ public void bytes() {\n+ final String[][] values = new String[randomInt(10)][];\n+ for (int i = 0; i < values.length; ++i) {\n+ values[i] = new String[randomInt(8)];\n+ for (int j = 0; j < values[i].length; ++j) {\n+ values[i][j] = RandomStrings.randomAsciiOfLength(getRandom(), 5);\n+ }\n+ }\n+ FakeSearchScript script = new FakeSearchScript(values);\n+ ScriptBytesValues scriptValues = new ScriptBytesValues(script);\n+ for (int i = 0; i < values.length; ++i) {\n+ assertEquals(values[i].length, scriptValues.setDocument(i));\n+ for (int j = 0; j < values[i].length; ++j) {\n+ assertEquals(new BytesRef(values[i][j]), scriptValues.nextValue());\n+ }\n+ }\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/search/aggregations/support/ScriptValuesTests.java",
"status": "added"
}
]
} |
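The bug fixed above is a classic reusable-buffer mistake: the array was sized for four values and never grown before a larger collection was copied into it. Below is a minimal sketch of the grow-before-copy step, using a plain `Arrays.copyOf` in place of Lucene's `ArrayUtil.grow`; the class and method names are illustrative only.

```java
import java.util.Arrays;
import java.util.Collection;

public class GrowBufferSketch {

    private long[] values = new long[1]; // deliberately tiny initial buffer
    private int valueCount;

    void setValues(Collection<? extends Number> scriptResult) {
        valueCount = scriptResult.size();
        if (valueCount > values.length) {
            // the step that was missing: grow the buffer before copying, otherwise
            // more values than the buffer length throw ArrayIndexOutOfBoundsException
            values = Arrays.copyOf(values, Math.max(valueCount, values.length * 2));
        }
        int i = 0;
        for (Number n : scriptResult) {
            values[i++] = n.longValue();
        }
    }

    public static void main(String[] args) {
        GrowBufferSketch sketch = new GrowBufferSketch();
        sketch.setValues(Arrays.asList(1L, 2L, 3L, 4L, 5L)); // five values no longer overflow
        System.out.println(Arrays.toString(Arrays.copyOf(sketch.values, sketch.valueCount)));
    }
}
```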
{
"body": "If a badly formatted object is passed to a geopoint when indexing a doc, it should throw an error. Instead, any fields after the bad geopoint are just ignored:\n\n```\nPUT /test\n{\n \"mappings\": {\n \"foo\": {\n \"properties\": {\n \"loc\": {\n \"type\": \"geo_point\"\n }\n }\n }\n }\n}\n\nPUT /test/foo/1\n{\n \"loc\": { \"lat\": 0, \"lon\": 0 },\n \"tag\": \"ok\"\n}\n\nPUT /test/foo/2\n{\n \"loc\": {\n \"loc\": {\n \"lat\": 0,\n \"lon\": 0\n }\n },\n \"tag\": \"not_ok\"\n}\n\nGET /test/_search?search_type=count\n{\n \"facets\": {\n \"tags\": {\n \"terms\": {\n \"field\": \"tag\"\n }\n }\n }\n}\n```\n\nResult:\n\n```\n{\n \"took\": 1,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 2,\n \"max_score\": 0,\n \"hits\": []\n },\n \"facets\": {\n \"tags\": {\n \"_type\": \"terms\",\n \"missing\": 1,\n \"total\": 1,\n \"other\": 0,\n \"terms\": [\n {\n \"term\": \"ok\",\n \"count\": 1\n }\n ]\n }\n }\n}\n```\n",
"comments": [
{
"body": "+1 for throwing an error @chilling can you take a look this should be straight forward\n",
"created_at": "2014-03-12T09:22:16Z"
},
{
"body": "@s1monw @clintongormley I will fix it\n",
"created_at": "2014-03-12T09:27:02Z"
},
{
"body": "Just in case somebody googles the corresponding exceptions. If you try to index a document containing incomplete or other invalid geo_point fields in Elasticsearch 1.1.0 and you get exceptions like \"MapperParsingException[failed to parse]; nested: ElasticsearchParseException[field [lat] missing];\" or \"MapperParsingException[failed to parse]; nested: ElasticsearchParseException[geo_point expected];\", the solution is to skip the whole geo_point field. See this gist: https://gist.github.com/hkorte/9936192\n",
"created_at": "2014-04-02T15:21:44Z"
}
],
"number": 5390,
"title": "Bad geopoint field should throw error"
} | {
"body": "Add exceptions to `GeoPointFieldMapper` when parsing `geo_point` object\nCloses #5390\n",
"number": 5403,
"review_comments": [
{
"body": "s/isInfinite/isNaN/\n",
"created_at": "2014-03-19T14:58:56Z"
},
{
"body": "thx. fixed it\n",
"created_at": "2014-03-19T15:04:26Z"
}
],
"title": "Add exceptions to GeoPointFieldMapper"
} | {
"commits": [
{
"message": "Add exceptions to `GeoPointFieldMapper` when parsing `geo_point` object\n\n* moved `geo_point` parsing to GeoUtils\n* cleaned up `gzipped.json` for bulktest\n* merged `GeoPointFieldMapper` and `GeoPoint` parsing methods\n\nCloses #5390"
}
],
"files": [
{
"diff": "@@ -19,22 +19,12 @@\n \n package org.elasticsearch.common.geo;\n \n-import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.xcontent.XContentParser;\n-import org.elasticsearch.common.xcontent.XContentParser.Token;\n-import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;\n-\n-import java.io.IOException;\n \n /**\n *\n */\n public class GeoPoint {\n \n- public static final String LATITUDE = GeoPointFieldMapper.Names.LAT;\n- public static final String LONGITUDE = GeoPointFieldMapper.Names.LON;\n- public static final String GEOHASH = GeoPointFieldMapper.Names.GEOHASH;\n-\n private double lat;\n private double lon;\n \n@@ -140,98 +130,4 @@ public static GeoPoint parseFromLatLon(String latLon) {\n point.resetFromString(latLon);\n return point;\n }\n-\n- /**\n- * Parse a {@link GeoPoint} with a {@link XContentParser}:\n- * \n- * @param parser {@link XContentParser} to parse the value from\n- * @return new {@link GeoPoint} parsed from the parse\n- * \n- * @throws IOException\n- * @throws org.elasticsearch.ElasticsearchParseException\n- */\n- public static GeoPoint parse(XContentParser parser) throws IOException, ElasticsearchParseException {\n- return parse(parser, new GeoPoint());\n- }\n-\n- /**\n- * Parse a {@link GeoPoint} with a {@link XContentParser}. A geopoint has one of the following forms:\n- * \n- * <ul>\n- * <li>Object: <pre>{"lat": <i><latitude></i>, "lon": <i><longitude></i>}</pre></li>\n- * <li>String: <pre>"<i><latitude></i>,<i><longitude></i>"</pre></li>\n- * <li>Geohash: <pre>"<i><geohash></i>"</pre></li>\n- * <li>Array: <pre>[<i><longitude></i>,<i><latitude></i>]</pre></li>\n- * </ul>\n- * \n- * @param parser {@link XContentParser} to parse the value from\n- * @param point A {@link GeoPoint} that will be reset by the values parsed\n- * @return new {@link GeoPoint} parsed from the parse\n- * \n- * @throws IOException\n- * @throws org.elasticsearch.ElasticsearchParseException\n- */\n- public static GeoPoint parse(XContentParser parser, GeoPoint point) throws IOException, ElasticsearchParseException {\n- if(parser.currentToken() == Token.START_OBJECT) {\n- while(parser.nextToken() != Token.END_OBJECT) {\n- if(parser.currentToken() == Token.FIELD_NAME) {\n- String field = parser.text();\n- if(LATITUDE.equals(field)) {\n- if(parser.nextToken() == Token.VALUE_NUMBER) {\n- point.resetLat(parser.doubleValue());\n- } else {\n- throw new ElasticsearchParseException(\"latitude must be a number\");\n- }\n- } else if (LONGITUDE.equals(field)) {\n- if(parser.nextToken() == Token.VALUE_NUMBER) {\n- point.resetLon(parser.doubleValue());\n- } else {\n- throw new ElasticsearchParseException(\"latitude must be a number\");\n- }\n- } else if (GEOHASH.equals(field)) {\n- if(parser.nextToken() == Token.VALUE_STRING) {\n- point.resetFromGeoHash(parser.text());\n- } else {\n- throw new ElasticsearchParseException(\"geohash must be a string\");\n- }\n- } else {\n- throw new ElasticsearchParseException(\"field must be either '\" + LATITUDE + \"', '\" + LONGITUDE + \"' or '\" + GEOHASH + \"'\");\n- }\n- } else {\n- throw new ElasticsearchParseException(\"Token '\"+parser.currentToken()+\"' not allowed\");\n- }\n- }\n- return point;\n- } else if(parser.currentToken() == Token.START_ARRAY) {\n- int element = 0;\n- while(parser.nextToken() != Token.END_ARRAY) {\n- if(parser.currentToken() == Token.VALUE_NUMBER) {\n- element++;\n- if(element == 1) {\n- point.resetLon(parser.doubleValue());\n- } else if(element == 2) {\n- point.resetLat(parser.doubleValue());\n- 
} else {\n- throw new ElasticsearchParseException(\"only two values allowed\");\n- }\n- } else {\n- throw new ElasticsearchParseException(\"Numeric value expected\");\n- }\n- }\n- return point;\n- } else if(parser.currentToken() == Token.VALUE_STRING) {\n- String data = parser.text();\n- int comma = data.indexOf(',');\n- if(comma > 0) {\n- double lat = Double.parseDouble(data.substring(0, comma).trim());\n- double lon = Double.parseDouble(data.substring(comma + 1).trim());\n- return point.reset(lat, lon);\n- } else {\n- point.resetFromGeoHash(data);\n- return point;\n- }\n- } else {\n- throw new ElasticsearchParseException(\"geo_point expected\");\n- }\n- }\n }",
"filename": "src/main/java/org/elasticsearch/common/geo/GeoPoint.java",
"status": "modified"
},
{
"diff": "@@ -22,12 +22,22 @@\n import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.util.SloppyMath;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.unit.DistanceUnit;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.XContentParser.Token;\n+import org.elasticsearch.index.mapper.geo.GeoPointFieldMapper;\n+\n+import java.io.IOException;\n \n /**\n */\n public class GeoUtils {\n \n+ public static final String LATITUDE = GeoPointFieldMapper.Names.LAT;\n+ public static final String LONGITUDE = GeoPointFieldMapper.Names.LON;\n+ public static final String GEOHASH = GeoPointFieldMapper.Names.GEOHASH;\n+ \n /** Earth ellipsoid major axis defined by WGS 84 in meters */\n public static final double EARTH_SEMI_MAJOR_AXIS = 6378137.0; // meters (WGS 84)\n \n@@ -293,5 +303,114 @@ private static double centeredModulus(double dividend, double divisor) {\n }\n return rtn;\n }\n+ /**\n+ * Parse a {@link GeoPoint} with a {@link XContentParser}:\n+ * \n+ * @param parser {@link XContentParser} to parse the value from\n+ * @return new {@link GeoPoint} parsed from the parse\n+ * \n+ * @throws IOException\n+ * @throws org.elasticsearch.ElasticsearchParseException\n+ */\n+ public static GeoPoint parseGeoPoint(XContentParser parser) throws IOException, ElasticsearchParseException {\n+ return parseGeoPoint(parser, new GeoPoint());\n+ }\n+\n+ /**\n+ * Parse a {@link GeoPoint} with a {@link XContentParser}. A geopoint has one of the following forms:\n+ * \n+ * <ul>\n+ * <li>Object: <pre>{"lat": <i><latitude></i>, "lon": <i><longitude></i>}</pre></li>\n+ * <li>String: <pre>"<i><latitude></i>,<i><longitude></i>"</pre></li>\n+ * <li>Geohash: <pre>"<i><geohash></i>"</pre></li>\n+ * <li>Array: <pre>[<i><longitude></i>,<i><latitude></i>]</pre></li>\n+ * </ul>\n+ * \n+ * @param parser {@link XContentParser} to parse the value from\n+ * @param point A {@link GeoPoint} that will be reset by the values parsed\n+ * @return new {@link GeoPoint} parsed from the parse\n+ * \n+ * @throws IOException\n+ * @throws org.elasticsearch.ElasticsearchParseException\n+ */\n+ public static GeoPoint parseGeoPoint(XContentParser parser, GeoPoint point) throws IOException, ElasticsearchParseException {\n+ double lat = Double.NaN;\n+ double lon = Double.NaN;\n+ String geohash = null;\n+ \n+ if(parser.currentToken() == Token.START_OBJECT) {\n+ while(parser.nextToken() != Token.END_OBJECT) {\n+ if(parser.currentToken() == Token.FIELD_NAME) {\n+ String field = parser.text();\n+ if(LATITUDE.equals(field)) {\n+ if(parser.nextToken() == Token.VALUE_NUMBER) {\n+ lat = parser.doubleValue();\n+ } else {\n+ throw new ElasticsearchParseException(\"latitude must be a number\");\n+ }\n+ } else if (LONGITUDE.equals(field)) {\n+ if(parser.nextToken() == Token.VALUE_NUMBER) {\n+ lon = parser.doubleValue();\n+ } else {\n+ throw new ElasticsearchParseException(\"latitude must be a number\");\n+ }\n+ } else if (GEOHASH.equals(field)) {\n+ if(parser.nextToken() == Token.VALUE_STRING) {\n+ geohash = parser.text();\n+ } else {\n+ throw new ElasticsearchParseException(\"geohash must be a string\");\n+ }\n+ } else {\n+ throw new ElasticsearchParseException(\"field must be either '\" + LATITUDE + \"', '\" + LONGITUDE + \"' or '\" + GEOHASH + \"'\");\n+ }\n+ } else {\n+ throw new ElasticsearchParseException(\"Token '\"+parser.currentToken()+\"' not allowed\");\n+ }\n+ }\n \n+ if 
(geohash != null) {\n+ if(!Double.isNaN(lat) || !Double.isNaN(lon)) {\n+ throw new ElasticsearchParseException(\"field must be either lat/lon or geohash\");\n+ } else {\n+ return point.resetFromGeoHash(geohash);\n+ }\n+ } else if (Double.isNaN(lat)) {\n+ throw new ElasticsearchParseException(\"field [\" + LATITUDE + \"] missing\");\n+ } else if (Double.isNaN(lon)) {\n+ throw new ElasticsearchParseException(\"field [\" + LONGITUDE + \"] missing\");\n+ } else {\n+ return point.reset(lat, lon);\n+ }\n+ \n+ } else if(parser.currentToken() == Token.START_ARRAY) {\n+ int element = 0;\n+ while(parser.nextToken() != Token.END_ARRAY) {\n+ if(parser.currentToken() == Token.VALUE_NUMBER) {\n+ element++;\n+ if(element == 1) {\n+ lon = parser.doubleValue();\n+ } else if(element == 2) {\n+ lat = parser.doubleValue();\n+ } else {\n+ throw new ElasticsearchParseException(\"only two values allowed\");\n+ }\n+ } else {\n+ throw new ElasticsearchParseException(\"Numeric value expected\");\n+ }\n+ }\n+ return point.reset(lat, lon);\n+ } else if(parser.currentToken() == Token.VALUE_STRING) {\n+ String data = parser.text();\n+ int comma = data.indexOf(',');\n+ if(comma > 0) {\n+ lat = Double.parseDouble(data.substring(0, comma).trim());\n+ lon = Double.parseDouble(data.substring(comma + 1).trim());\n+ return point.reset(lat, lon);\n+ } else {\n+ return point.resetFromGeoHash(data);\n+ }\n+ } else {\n+ throw new ElasticsearchParseException(\"geo_point expected\");\n+ }\n+ }\n }",
"filename": "src/main/java/org/elasticsearch/common/geo/GeoUtils.java",
"status": "modified"
},
{
"diff": "@@ -57,9 +57,7 @@\n import java.util.Map;\n \n import static org.elasticsearch.index.mapper.MapperBuilders.*;\n-import static org.elasticsearch.index.mapper.core.TypeParsers.parseField;\n-import static org.elasticsearch.index.mapper.core.TypeParsers.parseMultiField;\n-import static org.elasticsearch.index.mapper.core.TypeParsers.parsePathType;\n+import static org.elasticsearch.index.mapper.core.TypeParsers.*;\n \n /**\n * Parsing: We handle:\n@@ -480,20 +478,15 @@ public void parse(ParseContext context) throws IOException {\n context.path().pathType(pathType);\n context.path().add(name());\n \n+ GeoPoint sparse = new GeoPoint();\n+ \n XContentParser.Token token = context.parser().currentToken();\n if (token == XContentParser.Token.START_ARRAY) {\n token = context.parser().nextToken();\n if (token == XContentParser.Token.START_ARRAY) {\n // its an array of array of lon/lat [ [1.2, 1.3], [1.4, 1.5] ]\n while (token != XContentParser.Token.END_ARRAY) {\n- token = context.parser().nextToken();\n- double lon = context.parser().doubleValue();\n- token = context.parser().nextToken();\n- double lat = context.parser().doubleValue();\n- while ((token = context.parser().nextToken()) != XContentParser.Token.END_ARRAY) {\n-\n- }\n- parseLatLon(context, lat, lon);\n+ parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse), null);\n token = context.parser().nextToken();\n }\n } else {\n@@ -505,66 +498,28 @@ public void parse(ParseContext context) throws IOException {\n while ((token = context.parser().nextToken()) != XContentParser.Token.END_ARRAY) {\n \n }\n- parseLatLon(context, lat, lon);\n+ parse(context, sparse.reset(lat, lon), null);\n } else {\n while (token != XContentParser.Token.END_ARRAY) {\n- if (token == XContentParser.Token.START_OBJECT) {\n- parseObjectLatLon(context);\n- } else if (token == XContentParser.Token.VALUE_STRING) {\n- parseStringLatLon(context);\n+ if (token == XContentParser.Token.VALUE_STRING) {\n+ parsePointFromString(context, sparse, context.parser().text());\n+ } else {\n+ parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse), null);\n }\n token = context.parser().nextToken();\n }\n }\n }\n- } else if (token == XContentParser.Token.START_OBJECT) {\n- parseObjectLatLon(context);\n } else if (token == XContentParser.Token.VALUE_STRING) {\n- parseStringLatLon(context);\n+ parsePointFromString(context, sparse, context.parser().text());\n+ } else {\n+ parse(context, GeoUtils.parseGeoPoint(context.parser(), sparse), null);\n }\n \n context.path().remove();\n context.path().pathType(origPathType);\n }\n \n- private void parseStringLatLon(ParseContext context) throws IOException {\n- String value = context.parser().text();\n- int comma = value.indexOf(',');\n- if (comma != -1) {\n- double lat = Double.parseDouble(value.substring(0, comma).trim());\n- double lon = Double.parseDouble(value.substring(comma + 1).trim());\n- parseLatLon(context, lat, lon);\n- } else { // geo hash\n- parseGeohash(context, value);\n- }\n- }\n-\n- private void parseObjectLatLon(ParseContext context) throws IOException {\n- XContentParser.Token token;\n- String currentName = context.parser().currentName();\n- Double lat = null;\n- Double lon = null;\n- String geohash = null;\n- while ((token = context.parser().nextToken()) != XContentParser.Token.END_OBJECT) {\n- if (token == XContentParser.Token.FIELD_NAME) {\n- currentName = context.parser().currentName();\n- } else if (token.isValue()) {\n- if (currentName.equals(Names.LAT)) {\n- lat = context.parser().doubleValue();\n- } 
else if (currentName.equals(Names.LON)) {\n- lon = context.parser().doubleValue();\n- } else if (currentName.equals(Names.GEOHASH)) {\n- geohash = context.parser().text();\n- }\n- }\n- }\n- if (geohash != null) {\n- parseGeohash(context, geohash);\n- } else if (lat != null && lon != null) {\n- parseLatLon(context, lat, lon);\n- }\n- }\n-\n private void parseGeohashField(ParseContext context, String geohash) throws IOException {\n int len = Math.min(geoHashPrecision, geohash.length());\n int min = enableGeohashPrefix ? 1 : geohash.length();\n@@ -576,13 +531,12 @@ private void parseGeohashField(ParseContext context, String geohash) throws IOEx\n }\n }\n \n- private void parseLatLon(ParseContext context, double lat, double lon) throws IOException {\n- parse(context, new GeoPoint(lat, lon), null);\n- }\n-\n- private void parseGeohash(ParseContext context, String geohash) throws IOException {\n- GeoPoint point = GeoHashUtils.decode(geohash);\n- parse(context, point, geohash);\n+ private void parsePointFromString(ParseContext context, GeoPoint sparse, String point) throws IOException {\n+ if (point.indexOf(',') < 0) {\n+ parse(context, sparse.resetFromGeoHash(point), point);\n+ } else {\n+ parse(context, sparse.resetFromString(point), null);\n+ }\n }\n \n private void parse(ParseContext context, GeoPoint point, String geohash) throws IOException {",
"filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoPointFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -113,19 +113,19 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n right = parser.doubleValue();\n } else {\n if (TOP_LEFT.equals(currentFieldName) || TOPLEFT.equals(currentFieldName)) {\n- GeoPoint.parse(parser, sparse);\n+ GeoUtils.parseGeoPoint(parser, sparse);\n top = sparse.getLat();\n left = sparse.getLon();\n } else if (BOTTOM_RIGHT.equals(currentFieldName) || BOTTOMRIGHT.equals(currentFieldName)) {\n- GeoPoint.parse(parser, sparse);\n+ GeoUtils.parseGeoPoint(parser, sparse);\n bottom = sparse.getLat();\n right = sparse.getLon();\n } else if (TOP_RIGHT.equals(currentFieldName) || TOPRIGHT.equals(currentFieldName)) {\n- GeoPoint.parse(parser, sparse);\n+ GeoUtils.parseGeoPoint(parser, sparse);\n top = sparse.getLat();\n right = sparse.getLon();\n } else if (BOTTOM_LEFT.equals(currentFieldName) || BOTTOMLEFT.equals(currentFieldName)) {\n- GeoPoint.parse(parser, sparse);\n+ GeoUtils.parseGeoPoint(parser, sparse);\n bottom = sparse.getLat();\n left = sparse.getLon();\n } else {",
"filename": "src/main/java/org/elasticsearch/index/query/GeoBoundingBoxFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -83,7 +83,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_ARRAY) {\n fieldName = currentFieldName;\n- GeoPoint.parse(parser, point);\n+ GeoUtils.parseGeoPoint(parser, point);\n } else if (token == XContentParser.Token.START_OBJECT) {\n // the json in the format of -> field : { lat : 30, lon : 12 }\n String currentName = parser.currentName();",
"filename": "src/main/java/org/elasticsearch/index/query/GeoDistanceFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -84,12 +84,12 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_ARRAY) {\n- GeoPoint.parse(parser, point);\n+ GeoUtils.parseGeoPoint(parser, point);\n fieldName = currentFieldName;\n } else if (token == XContentParser.Token.START_OBJECT) {\n // the json in the format of -> field : { lat : 30, lon : 12 }\n fieldName = currentFieldName;\n- GeoPoint.parse(parser, point);\n+ GeoUtils.parseGeoPoint(parser, point);\n } else if (token.isValue()) {\n if (currentFieldName.equals(\"from\")) {\n if (token == XContentParser.Token.VALUE_NULL) {",
"filename": "src/main/java/org/elasticsearch/index/query/GeoDistanceRangeFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -93,7 +93,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n } else if (token == XContentParser.Token.START_ARRAY) {\n if (POINTS.equals(currentFieldName)) {\n while ((token = parser.nextToken()) != Token.END_ARRAY) {\n- shell.add(GeoPoint.parse(parser));\n+ shell.add(GeoUtils.parseGeoPoint(parser));\n }\n } else {\n throw new QueryParsingException(parseContext.index(), \"[geo_polygon] filter does not support [\" + currentFieldName + \"]\");",
"filename": "src/main/java/org/elasticsearch/index/query/GeoPolygonFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -216,12 +216,12 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n // A string indicates either a gehash or a lat/lon string\n String location = parser.text();\n if(location.indexOf(\",\")>0) {\n- geohash = GeoPoint.parse(parser).geohash();\n+ geohash = GeoUtils.parseGeoPoint(parser).geohash();\n } else {\n geohash = location;\n }\n } else {\n- geohash = GeoPoint.parse(parser).geohash();\n+ geohash = GeoUtils.parseGeoPoint(parser).geohash();\n }\n }\n } else {",
"filename": "src/main/java/org/elasticsearch/index/query/GeohashCellFilter.java",
"status": "modified"
},
{
"diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.GeoDistance;\n import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.lucene.search.function.CombineFunction;\n import org.elasticsearch.common.lucene.search.function.ScoreFunction;\n import org.elasticsearch.common.unit.DistanceUnit;\n@@ -206,7 +207,7 @@ private ScoreFunction parseGeoVariable(String fieldName, XContentParser parser,\n } else if (parameterName.equals(DecayFunctionBuilder.SCALE)) {\n scaleString = parser.text();\n } else if (parameterName.equals(DecayFunctionBuilder.ORIGIN)) {\n- origin = GeoPoint.parse(parser);\n+ origin = GeoUtils.parseGeoPoint(parser);\n } else if (parameterName.equals(DecayFunctionBuilder.DECAY)) {\n decay = parser.doubleValue();\n } else if (parameterName.equals(DecayFunctionBuilder.OFFSET)) {",
"filename": "src/main/java/org/elasticsearch/index/query/functionscore/DecayFunctionParser.java",
"status": "modified"
},
{
"diff": "@@ -109,7 +109,7 @@ public FacetExecutor parse(String facetName, XContentParser parser, SearchContex\n entries.add(new GeoDistanceFacet.Entry(from, to, 0, 0, 0, Double.POSITIVE_INFINITY, Double.NEGATIVE_INFINITY));\n }\n } else {\n- GeoPoint.parse(parser, point);\n+ GeoUtils.parseGeoPoint(parser, point);\n fieldName = currentName;\n }\n } else if (token == XContentParser.Token.START_OBJECT) {\n@@ -118,7 +118,7 @@ public FacetExecutor parse(String facetName, XContentParser parser, SearchContex\n } else {\n // the json in the format of -> field : { lat : 30, lon : 12 }\n fieldName = currentName;\n- GeoPoint.parse(parser, point);\n+ GeoUtils.parseGeoPoint(parser, point);\n }\n } else if (token.isValue()) {\n if (currentName.equals(\"unit\")) {",
"filename": "src/main/java/org/elasticsearch/search/facet/geodistance/GeoDistanceFacetParser.java",
"status": "modified"
},
{
"diff": "@@ -69,7 +69,7 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n if (token == XContentParser.Token.FIELD_NAME) {\n currentName = parser.currentName();\n } else if (token == XContentParser.Token.START_ARRAY) {\n- GeoPoint.parse(parser, point);\n+ GeoUtils.parseGeoPoint(parser, point);\n fieldName = currentName;\n } else if (token == XContentParser.Token.START_OBJECT) {\n // the json in the format of -> field : { lat : 30, lon : 12 }\n@@ -78,7 +78,7 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n nestedFilter = parsedFilter == null ? null : parsedFilter.filter();\n } else {\n fieldName = currentName;\n- GeoPoint.parse(parser, point);\n+ GeoUtils.parseGeoPoint(parser, point);\n }\n } else if (token.isValue()) {\n if (\"reverse\".equals(currentName)) {",
"filename": "src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,186 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.search.geo;\n+\n+\n+import org.elasticsearch.common.geo.GeoHashUtils;\n+import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+\n+import static org.hamcrest.Matchers.closeTo;\n+\n+\n+public class GeoPointParsingTests extends ElasticsearchTestCase {\n+\n+ // mind geohash precision and error\n+ private static final double ERROR = 0.00001d;\n+\n+ @Test\n+ public void testGeoPointReset() throws IOException {\n+ double lat = 1 + randomDouble() * 89;\n+ double lon = 1 + randomDouble() * 179;\n+\n+ GeoPoint point = new GeoPoint(0, 0);\n+ assertCloseTo(point, 0, 0);\n+\n+ assertCloseTo(point.reset(lat, lon), lat, lon);\n+ assertCloseTo(point.reset(0, 0), 0, 0);\n+ assertCloseTo(point.resetLat(lat), lat, 0);\n+ assertCloseTo(point.resetLat(0), 0, 0);\n+ assertCloseTo(point.resetLon(lon), 0, lon);\n+ assertCloseTo(point.resetLon(0), 0, 0);\n+ assertCloseTo(point.resetFromGeoHash(GeoHashUtils.encode(lat, lon)), lat, lon);\n+ assertCloseTo(point.reset(0, 0), 0, 0);\n+ assertCloseTo(point.resetFromString(Double.toString(lat) + \", \" + Double.toHexString(lon)), lat, lon);\n+ assertCloseTo(point.reset(0, 0), 0, 0);\n+ }\n+ \n+ @Test\n+ public void testGeoPointParsing() throws IOException {\n+ double lat = randomDouble() * 180 - 90;\n+ double lon = randomDouble() * 360 - 180;\n+ \n+ GeoPoint point = GeoUtils.parseGeoPoint(objectLatLon(lat, lon));\n+ assertCloseTo(point, lat, lon);\n+ \n+ GeoUtils.parseGeoPoint(arrayLatLon(lat, lon), point);\n+ assertCloseTo(point, lat, lon);\n+\n+ GeoUtils.parseGeoPoint(geohash(lat, lon), point);\n+ assertCloseTo(point, lat, lon);\n+\n+ GeoUtils.parseGeoPoint(stringLatLon(lat, lon), point);\n+ assertCloseTo(point, lat, lon);\n+ }\n+\n+ // Based on issue5390\n+ @Test\n+ public void testInvalidPointEmbeddedObject() throws IOException {\n+ XContentBuilder content = JsonXContent.contentBuilder();\n+ content.startObject();\n+ content.startObject(\"location\");\n+ content.field(\"lat\", 0).field(\"lon\", 0);\n+ content.endObject();\n+ content.endObject();\n+\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n+ parser.nextToken();\n+ \n+ try {\n+ GeoUtils.parseGeoPoint(parser);\n+ assertTrue(false);\n+ } catch (Throwable e) {}\n+ }\n+\n+ @Test\n+ public void testInvalidPointLatHashMix() throws IOException {\n+ XContentBuilder content = 
JsonXContent.contentBuilder();\n+ content.startObject();\n+ content.field(\"lat\", 0).field(\"geohash\", GeoHashUtils.encode(0, 0));\n+ content.endObject();\n+\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n+ parser.nextToken();\n+\n+ try {\n+ GeoUtils.parseGeoPoint(parser);\n+ assertTrue(false);\n+ } catch (Throwable e) {}\n+ }\n+\n+ @Test\n+ public void testInvalidPointLonHashMix() throws IOException {\n+ XContentBuilder content = JsonXContent.contentBuilder();\n+ content.startObject();\n+ content.field(\"lon\", 0).field(\"geohash\", GeoHashUtils.encode(0, 0));\n+ content.endObject();\n+\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n+ parser.nextToken();\n+\n+ try {\n+ GeoUtils.parseGeoPoint(parser);\n+ assertTrue(false);\n+ } catch (Throwable e) {}\n+ }\n+\n+ @Test\n+ public void testInvalidField() throws IOException {\n+ XContentBuilder content = JsonXContent.contentBuilder();\n+ content.startObject();\n+ content.field(\"lon\", 0).field(\"lat\", 0).field(\"test\", 0);\n+ content.endObject();\n+\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n+ parser.nextToken();\n+\n+ try {\n+ GeoUtils.parseGeoPoint(parser);\n+ assertTrue(false);\n+ } catch (Throwable e) {}\n+ }\n+\n+ private static XContentParser objectLatLon(double lat, double lon) throws IOException {\n+ XContentBuilder content = JsonXContent.contentBuilder();\n+ content.startObject();\n+ content.field(\"lat\", lat).field(\"lon\", lon);\n+ content.endObject();\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n+ parser.nextToken();\n+ return parser;\n+ }\n+\n+ private static XContentParser arrayLatLon(double lat, double lon) throws IOException {\n+ XContentBuilder content = JsonXContent.contentBuilder();\n+ content.startArray().value(lon).value(lat).endArray();\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n+ parser.nextToken();\n+ return parser;\n+ }\n+\n+ private static XContentParser stringLatLon(double lat, double lon) throws IOException {\n+ XContentBuilder content = JsonXContent.contentBuilder();\n+ content.value(Double.toString(lat) + \", \" + Double.toString(lon));\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n+ parser.nextToken();\n+ return parser;\n+ }\n+\n+ private static XContentParser geohash(double lat, double lon) throws IOException {\n+ XContentBuilder content = JsonXContent.contentBuilder();\n+ content.value(GeoHashUtils.encode(lat, lon));\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());\n+ parser.nextToken();\n+ return parser;\n+ }\n+ \n+ public static void assertCloseTo(GeoPoint point, double lat, double lon) {\n+ assertThat(point.lat(), closeTo(lat, ERROR));\n+ assertThat(point.lon(), closeTo(lon, ERROR));\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/index/search/geo/GeoPointParsingTests.java",
"status": "added"
},
{
"diff": "",
"filename": "src/test/java/org/elasticsearch/search/geo/gzippedmap.json",
"status": "modified"
},
{
"diff": "@@ -668,11 +668,9 @@ public void testSortMinValueScript() throws IOException {\n .field(\"lvalue\", new long[]{i, i + 1, i + 2})\n .field(\"dvalue\", new double[]{i, i + 1, i + 2})\n .startObject(\"gvalue\")\n- .startObject(\"location\")\n .field(\"lat\", (double) i + 1)\n .field(\"lon\", (double) i)\n .endObject()\n- .endObject()\n .endObject());\n req.execute().actionGet();\n }",
"filename": "src/test/java/org/elasticsearch/search/sort/SimpleSortTests.java",
"status": "modified"
}
]
} |
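The GeoPoint/GeoUtils diffs above consolidate XContent parsing into `GeoUtils.parseGeoPoint`, which accepts an object (`{"lat": ..., "lon": ...}`), an array (`[lon, lat]`), a `"lat,lon"` string, or a geohash, and rejects mixed lat/lon + geohash input. Below is a minimal, hypothetical sketch of exercising that entry point, assuming the Elasticsearch 1.x classes shown in the diff (`GeoUtils`, `GeoPoint`, `JsonXContent`) are on the classpath; it is not part of the original change set.

``` java
import org.elasticsearch.common.geo.GeoPoint;
import org.elasticsearch.common.geo.GeoUtils;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.json.JsonXContent;

import java.io.IOException;

public class GeoPointParsingSketch {
    public static void main(String[] args) throws IOException {
        // Object form: {"lat": 48.85, "lon": 2.35}
        XContentBuilder object = JsonXContent.contentBuilder()
                .startObject().field("lat", 48.85).field("lon", 2.35).endObject();
        // Array form: [lon, lat], i.e. GeoJSON ordering
        XContentBuilder array = JsonXContent.contentBuilder()
                .startArray().value(2.35).value(48.85).endArray();

        for (XContentBuilder content : new XContentBuilder[]{object, array}) {
            XContentParser parser = JsonXContent.jsonXContent.createParser(content.bytes());
            parser.nextToken(); // position the parser on the first token, as the new tests do
            GeoPoint point = GeoUtils.parseGeoPoint(parser);
            System.out.println(point.lat() + ", " + point.lon());
        }
    }
}
```

The `parser.nextToken()` call mirrors the added GeoPointParsingTests, which position the parser before handing it to `GeoUtils.parseGeoPoint`.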
{
"body": "I'm seeing a strange behavior using elasticsearch 0.90.12.\n\nThe index contains 180269 docs\n\nI try to perform the following query:\n\n```\n{'query':\n {'filtered':\n {'filter': {\n 'terms': {'id': [188915, 189067, 183817, 188969, 188425]}\n },\n 'query': {'match_all': {}}\n }\n }\n}\n```\n\nIt returns more or less randomly the following list of doc ids: \n\n[183817, 188915, 188969, 189067, 188231]\nor\n[183817, 188915, 188969, 189067, 188425]\n\nAs you can see the first return value is entirely unexpected. If i add \"_cache: false\" to the terms filter it consistently returns the correct value. It also works as expected when i choose a bool execution strategy. It also work if i slightly change the list of requested ids (remove one, add one), only that specific list triggers the problem.\n\nI can't reduce the problem locally, it seems to depend on the exact state of that index and that exact lis of ID.\n\nCould it be some kind of cache collision because of the hashing algorithm used to generate the cache key, or the bitset ?\n\nIn any case this could lead to data leak since i can't trust that my filter will work as expected.. Thanks for any advice on how to debug this properly!\n",
"comments": [
{
"body": "@rslinckx Thank you for the report. I was able to reproduce the issue locally. It looks like there is a [bug](https://issues.apache.org/jira/browse/LUCENE-5502) in the method that compares two TermsFilters. As a result of this bug, it's possible to get cached results for a different terms filter in some rare circumstances. The only workaround that I can think of is to use your own cache_key with all your terms filters until the issue is fixed. \n",
"created_at": "2014-03-07T19:52:41Z"
},
{
"body": "argh! @imotov we should add tests for hashcode / equals for all our filters.\n",
"created_at": "2014-03-10T08:25:21Z"
},
{
"body": "I'm glad you were able to find the issue, it's been haunting me for the past few days :)\n\nDo you have any kind of timeframe when a fix can be pushed in elasticsearch ?\n",
"created_at": "2014-03-10T09:32:58Z"
},
{
"body": "@rslinckx usually we port the fixes we have on lucene quickly to elasticsearch it's really just a matter of a couple of days maybe even today. This will certainly be in the next release\n",
"created_at": "2014-03-10T10:31:42Z"
},
{
"body": "@s1monw that might be a good idea. In many of our filters equals and hashCode are auto-generated, but we have a few that we wrote by hand. However, in this particular case having tests didn't help us much. There even was a randomized test that was generating all these different filters, but it looks like it never managed to reproduce this particular use case. It's just pretty rare condition.\n",
"created_at": "2014-03-11T00:51:33Z"
},
{
"body": "So it's really that particular combination of numbers/list length/order ?\n",
"created_at": "2014-03-11T08:49:51Z"
},
{
"body": "@rslinckx yes, so thanks for providing a real example! It would have been much more difficult to figure out without having a repro.\n",
"created_at": "2014-03-11T19:30:50Z"
}
],
"number": 5363,
"title": "terms filter returning wrong results, maybe cache issue"
} | {
"body": "See LUCENE-5502\n\nCloses #5363\n",
"number": 5393,
"review_comments": [],
"title": "Use patched version of TermsFilter to prevent using wrong cached results"
} | {
"commits": [
{
"message": "Use patched version of TermsFilter to prevent using wrong cached results\n\nSee LUCENE-5502\n\nCloses #5363"
}
],
"files": [
{
"diff": "@@ -989,6 +989,7 @@\n <exclude>org/elasticsearch/Version.class</exclude>\n <exclude>org/apache/lucene/search/XReferenceManager.class</exclude>\n <exclude>org/apache/lucene/search/XSearcherManager.class</exclude>\n+ <exclude>org/apache/lucene/queries/XTermsFilter.class</exclude>\n <exclude>org/elasticsearch/index/percolator/stats/ShardPercolateService$RamEstimator.class</exclude>\n <exclude>org/elasticsearch/index/merge/Merges.class</exclude>\n <!-- end excludes for valid system-out -->",
"filename": "pom.xml",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,328 @@\n+package org.apache.lucene.queries;\n+\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache License, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License. You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+import org.apache.lucene.index.*;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n+import org.apache.lucene.search.Filter;\n+import org.apache.lucene.util.*;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.Collections;\n+import java.util.Iterator;\n+import java.util.List;\n+\n+/**\n+ * Constructs a filter for docs matching any of the terms added to this class.\n+ * Unlike a RangeFilter this can be used for filtering on multiple terms that are not necessarily in\n+ * a sequence. An example might be a collection of primary keys from a database query result or perhaps\n+ * a choice of \"category\" labels picked by the end user. As a filter, this is much faster than the\n+ * equivalent query (a BooleanQuery with many \"should\" TermQueries)\n+ */\n+public final class XTermsFilter extends Filter {\n+\n+ static {\n+ assert Version.LUCENE_47 == org.elasticsearch.Version.CURRENT.luceneVersion : \"Remove this once we are on LUCENE_48 - see LUCENE-5502\";\n+ }\n+\n+ /*\n+ * this class is often used for large number of terms in a single field.\n+ * to optimize for this case and to be filter-cache friendly we \n+ * serialize all terms into a single byte array and store offsets\n+ * in a parallel array to keep the # of object constant and speed up\n+ * equals / hashcode.\n+ * \n+ * This adds quite a bit of complexity but allows large term filters to\n+ * be efficient for GC and cache-lookups\n+ */\n+ private final int[] offsets;\n+ private final byte[] termsBytes;\n+ private final TermsAndField[] termsAndFields;\n+ private final int hashCode; // cached hashcode for fast cache lookups\n+ private static final int PRIME = 31;\n+\n+ /**\n+ * Creates a new {@link XTermsFilter} from the given list. 
The list\n+ * can contain duplicate terms and multiple fields.\n+ */\n+ public XTermsFilter(final List<Term> terms) {\n+ this(new FieldAndTermEnum() {\n+ // we need to sort for deduplication and to have a common cache key\n+ final Iterator<Term> iter = sort(terms).iterator();\n+ @Override\n+ public BytesRef next() {\n+ if (iter.hasNext()) {\n+ Term next = iter.next();\n+ field = next.field();\n+ return next.bytes();\n+ }\n+ return null;\n+ }}, terms.size());\n+ }\n+\n+ /**\n+ * Creates a new {@link XTermsFilter} from the given {@link BytesRef} list for\n+ * a single field.\n+ */\n+ public XTermsFilter(final String field, final List<BytesRef> terms) {\n+ this(new FieldAndTermEnum(field) {\n+ // we need to sort for deduplication and to have a common cache key\n+ final Iterator<BytesRef> iter = sort(terms).iterator();\n+ @Override\n+ public BytesRef next() {\n+ if (iter.hasNext()) {\n+ return iter.next();\n+ }\n+ return null;\n+ }\n+ }, terms.size());\n+ }\n+\n+ /**\n+ * Creates a new {@link XTermsFilter} from the given {@link BytesRef} array for\n+ * a single field.\n+ */\n+ public XTermsFilter(final String field, final BytesRef...terms) {\n+ // this ctor prevents unnecessary Term creations\n+ this(field, Arrays.asList(terms));\n+ }\n+\n+ /**\n+ * Creates a new {@link XTermsFilter} from the given array. The array can\n+ * contain duplicate terms and multiple fields.\n+ */\n+ public XTermsFilter(final Term... terms) {\n+ this(Arrays.asList(terms));\n+ }\n+\n+\n+ private XTermsFilter(FieldAndTermEnum iter, int length) {\n+ // TODO: maybe use oal.index.PrefixCodedTerms instead?\n+ // If number of terms is more than a few hundred it\n+ // should be a win\n+\n+ // TODO: we also pack terms in FieldCache/DocValues\n+ // ... maybe we can refactor to share that code\n+\n+ // TODO: yet another option is to build the union of the terms in\n+ // an automaton an call intersect on the termsenum if the density is high\n+\n+ int hash = 9;\n+ byte[] serializedTerms = new byte[0];\n+ this.offsets = new int[length+1];\n+ int lastEndOffset = 0;\n+ int index = 0;\n+ ArrayList<TermsAndField> termsAndFields = new ArrayList<TermsAndField>();\n+ TermsAndField lastTermsAndField = null;\n+ BytesRef previousTerm = null;\n+ String previousField = null;\n+ BytesRef currentTerm;\n+ String currentField;\n+ while((currentTerm = iter.next()) != null) {\n+ currentField = iter.field();\n+ if (currentField == null) {\n+ throw new IllegalArgumentException(\"Field must not be null\");\n+ }\n+ if (previousField != null) {\n+ // deduplicate\n+ if (previousField.equals(currentField)) {\n+ if (previousTerm.bytesEquals(currentTerm)){\n+ continue;\n+ }\n+ } else {\n+ final int start = lastTermsAndField == null ? 0 : lastTermsAndField.end;\n+ lastTermsAndField = new TermsAndField(start, index, previousField);\n+ termsAndFields.add(lastTermsAndField);\n+ }\n+ }\n+ hash = PRIME * hash + currentField.hashCode();\n+ hash = PRIME * hash + currentTerm.hashCode();\n+ if (serializedTerms.length < lastEndOffset+currentTerm.length) {\n+ serializedTerms = ArrayUtil.grow(serializedTerms, lastEndOffset+currentTerm.length);\n+ }\n+ System.arraycopy(currentTerm.bytes, currentTerm.offset, serializedTerms, lastEndOffset, currentTerm.length);\n+ offsets[index] = lastEndOffset;\n+ lastEndOffset += currentTerm.length;\n+ index++;\n+ previousTerm = currentTerm;\n+ previousField = currentField;\n+ }\n+ offsets[index] = lastEndOffset;\n+ final int start = lastTermsAndField == null ? 
0 : lastTermsAndField.end;\n+ lastTermsAndField = new TermsAndField(start, index, previousField);\n+ termsAndFields.add(lastTermsAndField);\n+ this.termsBytes = ArrayUtil.shrink(serializedTerms, lastEndOffset);\n+ this.termsAndFields = termsAndFields.toArray(new TermsAndField[termsAndFields.size()]);\n+ this.hashCode = hash;\n+\n+ }\n+\n+\n+ @Override\n+ public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException {\n+ final AtomicReader reader = context.reader();\n+ FixedBitSet result = null; // lazy init if needed - no need to create a big bitset ahead of time\n+ final Fields fields = reader.fields();\n+ final BytesRef spare = new BytesRef(this.termsBytes);\n+ if (fields == null) {\n+ return result;\n+ }\n+ Terms terms = null;\n+ TermsEnum termsEnum = null;\n+ DocsEnum docs = null;\n+ for (TermsAndField termsAndField : this.termsAndFields) {\n+ if ((terms = fields.terms(termsAndField.field)) != null) {\n+ termsEnum = terms.iterator(termsEnum); // this won't return null\n+ for (int i = termsAndField.start; i < termsAndField.end; i++) {\n+ spare.offset = offsets[i];\n+ spare.length = offsets[i+1] - offsets[i];\n+ if (termsEnum.seekExact(spare)) {\n+ docs = termsEnum.docs(acceptDocs, docs, DocsEnum.FLAG_NONE); // no freq since we don't need them\n+ if (result == null) {\n+ if (docs.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {\n+ result = new FixedBitSet(reader.maxDoc());\n+ // lazy init but don't do it in the hot loop since we could read many docs\n+ result.set(docs.docID());\n+ }\n+ }\n+ while (docs.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {\n+ result.set(docs.docID());\n+ }\n+ }\n+ }\n+ }\n+ }\n+ return result;\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ if (this == obj) {\n+ return true;\n+ }\n+ if ((obj == null) || (obj.getClass() != this.getClass())) {\n+ return false;\n+ }\n+\n+ XTermsFilter test = (XTermsFilter) obj;\n+ // first check the fields before even comparing the bytes\n+ if (test.hashCode == hashCode && Arrays.equals(termsAndFields, test.termsAndFields)) {\n+ int lastOffset = termsAndFields[termsAndFields.length - 1].end;\n+ // compare offsets since we sort they must be identical\n+ if (ArrayUtil.equals(offsets, 0, test.offsets, 0, lastOffset + 1)) {\n+ // straight byte comparison since we sort they must be identical\n+ return ArrayUtil.equals(termsBytes, 0, test.termsBytes, 0, offsets[lastOffset]);\n+ }\n+ }\n+ return false;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ return hashCode;\n+ }\n+\n+ @Override\n+ public String toString() {\n+ StringBuilder builder = new StringBuilder();\n+ BytesRef spare = new BytesRef(termsBytes);\n+ boolean first = true;\n+ for (int i = 0; i < termsAndFields.length; i++) {\n+ TermsAndField current = termsAndFields[i];\n+ for (int j = current.start; j < current.end; j++) {\n+ spare.offset = offsets[j];\n+ spare.length = offsets[j+1] - offsets[j];\n+ if (!first) {\n+ builder.append(' ');\n+ }\n+ first = false;\n+ builder.append(current.field).append(':');\n+ builder.append(spare.utf8ToString());\n+ }\n+ }\n+\n+ return builder.toString();\n+ }\n+\n+ private static final class TermsAndField {\n+ final int start;\n+ final int end;\n+ final String field;\n+\n+\n+ TermsAndField(int start, int end, String field) {\n+ super();\n+ this.start = start;\n+ this.end = end;\n+ this.field = field;\n+ }\n+\n+ @Override\n+ public int hashCode() {\n+ final int prime = 31;\n+ int result = 1;\n+ result = prime * result + ((field == null) ? 
0 : field.hashCode());\n+ result = prime * result + end;\n+ result = prime * result + start;\n+ return result;\n+ }\n+\n+ @Override\n+ public boolean equals(Object obj) {\n+ if (this == obj) return true;\n+ if (obj == null) return false;\n+ if (getClass() != obj.getClass()) return false;\n+ TermsAndField other = (TermsAndField) obj;\n+ if (field == null) {\n+ if (other.field != null) return false;\n+ } else if (!field.equals(other.field)) return false;\n+ if (end != other.end) return false;\n+ if (start != other.start) return false;\n+ return true;\n+ }\n+\n+ }\n+\n+ private static abstract class FieldAndTermEnum {\n+ protected String field;\n+\n+ public abstract BytesRef next();\n+\n+ public FieldAndTermEnum() {}\n+\n+ public FieldAndTermEnum(String field) { this.field = field; }\n+\n+ public String field() {\n+ return field;\n+ }\n+ }\n+\n+ /*\n+ * simple utility that returns the in-place sorted list\n+ */\n+ private static <T extends Comparable<? super T>> List<T> sort(List<T> toSort) {\n+ if (toSort.isEmpty()) {\n+ throw new IllegalArgumentException(\"no terms provided\");\n+ }\n+ Collections.sort(toSort);\n+ return toSort;\n+ }\n+}",
"filename": "src/main/java/org/apache/lucene/queries/XTermsFilter.java",
"status": "added"
},
{
"diff": "@@ -27,7 +27,7 @@\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.FilterClause;\n import org.apache.lucene.queries.TermFilter;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.util.BytesRef;\n@@ -476,7 +476,7 @@ public Filter searchFilter(String... types) {\n for (int i = 0; i < typesBytes.length; i++) {\n typesBytes[i] = new BytesRef(types[i]);\n }\n- TermsFilter termsFilter = new TermsFilter(TypeFieldMapper.NAME, typesBytes);\n+ XTermsFilter termsFilter = new XTermsFilter(TypeFieldMapper.NAME, typesBytes);\n if (filterPercolateType) {\n return new AndFilter(ImmutableList.of(excludePercolatorType, termsFilter));\n } else {",
"filename": "src/main/java/org/elasticsearch/index/mapper/MapperService.java",
"status": "modified"
},
{
"diff": "@@ -30,7 +30,7 @@\n import org.apache.lucene.index.FieldInfo.IndexOptions;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.TermFilter;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n@@ -484,7 +484,7 @@ public Filter termsFilter(List values, @Nullable QueryParseContext context) {\n for (int i = 0; i < bytesRefs.length; i++) {\n bytesRefs[i] = indexedValueForSearch(values.get(i));\n }\n- return new TermsFilter(names.indexName(), bytesRefs);\n+ return new XTermsFilter(names.indexName(), bytesRefs);\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,7 @@\n import org.apache.lucene.document.FieldType;\n import org.apache.lucene.index.FieldInfo.IndexOptions;\n import org.apache.lucene.index.Term;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.*;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.Nullable;\n@@ -180,15 +180,15 @@ public Filter termFilter(Object value, @Nullable QueryParseContext context) {\n if (fieldType.indexed() || context == null) {\n return super.termFilter(value, context);\n }\n- return new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), value));\n+ return new XTermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), value));\n }\n \n @Override\n public Filter termsFilter(List values, @Nullable QueryParseContext context) {\n if (fieldType.indexed() || context == null) {\n return super.termsFilter(values, context);\n }\n- return new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), values));\n+ return new XTermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(context.queryTypes(), values));\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/IdFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -23,7 +23,7 @@\n import org.apache.lucene.index.FieldInfo.IndexOptions;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.TermFilter;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.search.Query;\n@@ -278,7 +278,7 @@ public Filter termFilter(Object value, @Nullable QueryParseContext context) {\n for (String type : context.mapperService().types()) {\n typesValues.add(Uid.createUidAsBytes(type, bValue));\n }\n- return new TermsFilter(names.indexName(), typesValues);\n+ return new XTermsFilter(names.indexName(), typesValues);\n }\n }\n \n@@ -311,7 +311,7 @@ public Filter termsFilter(List values, @Nullable QueryParseContext context) {\n }\n }\n }\n- return new TermsFilter(names.indexName(), bValues);\n+ return new XTermsFilter(names.indexName(), bValues);\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/index/mapper/internal/ParentFieldMapper.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,7 @@\n \n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.Iterables;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.common.inject.Inject;\n@@ -108,7 +108,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n types = parseContext.mapperService().types();\n }\n \n- TermsFilter filter = new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(types, ids));\n+ XTermsFilter filter = new XTermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(types, ids));\n if (filterName != null) {\n parseContext.addNamedFilter(filterName, filter);\n }",
"filename": "src/main/java/org/elasticsearch/index/query/IdsFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -21,7 +21,7 @@\n \n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.Iterables;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.ConstantScoreQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.util.BytesRef;\n@@ -115,7 +115,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n types = parseContext.mapperService().types();\n }\n \n- TermsFilter filter = new TermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(types, ids));\n+ XTermsFilter filter = new XTermsFilter(UidFieldMapper.NAME, Uid.createTypeUids(types, ids));\n // no need for constant score filter, since we don't cache the filter, and it always takes deletes into account\n ConstantScoreQuery query = new ConstantScoreQuery(filter);\n query.setBoost(boost);",
"filename": "src/main/java/org/elasticsearch/index/query/IdsQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -22,7 +22,7 @@\n import com.google.common.collect.Lists;\n import org.apache.lucene.index.Term;\n import org.apache.lucene.queries.TermFilter;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.Filter;\n import org.apache.lucene.util.BytesRef;\n@@ -201,7 +201,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n for (int i = 0; i < filterValues.length; i++) {\n filterValues[i] = BytesRefs.toBytesRef(terms.get(i));\n }\n- filter = new TermsFilter(fieldName, filterValues);\n+ filter = new XTermsFilter(fieldName, filterValues);\n }\n // cache the whole filter by default, or if explicitly told to\n if (cache == null || cache) {",
"filename": "src/main/java/org/elasticsearch/index/query/TermsFilterParser.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,7 @@\n import org.apache.lucene.document.StringField;\n import org.apache.lucene.index.*;\n import org.apache.lucene.queries.TermFilter;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.RAMDirectory;\n@@ -97,19 +97,19 @@ public void testTermsFilter() throws Exception {\n AtomicReader reader = SlowCompositeReaderWrapper.wrap(DirectoryReader.open(w, true));\n w.close();\n \n- TermsFilter tf = new TermsFilter(new Term[]{new Term(fieldName, \"19\")});\n+ XTermsFilter tf = new XTermsFilter(new Term[]{new Term(fieldName, \"19\")});\n FixedBitSet bits = (FixedBitSet) tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());\n assertThat(bits, nullValue());\n \n- tf = new TermsFilter(new Term[]{new Term(fieldName, \"19\"), new Term(fieldName, \"20\")});\n+ tf = new XTermsFilter(new Term[]{new Term(fieldName, \"19\"), new Term(fieldName, \"20\")});\n bits = (FixedBitSet) tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());\n assertThat(bits.cardinality(), equalTo(1));\n \n- tf = new TermsFilter(new Term[]{new Term(fieldName, \"19\"), new Term(fieldName, \"20\"), new Term(fieldName, \"10\")});\n+ tf = new XTermsFilter(new Term[]{new Term(fieldName, \"19\"), new Term(fieldName, \"20\"), new Term(fieldName, \"10\")});\n bits = (FixedBitSet) tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());\n assertThat(bits.cardinality(), equalTo(2));\n \n- tf = new TermsFilter(new Term[]{new Term(fieldName, \"19\"), new Term(fieldName, \"20\"), new Term(fieldName, \"10\"), new Term(fieldName, \"00\")});\n+ tf = new XTermsFilter(new Term[]{new Term(fieldName, \"19\"), new Term(fieldName, \"20\"), new Term(fieldName, \"10\"), new Term(fieldName, \"00\")});\n bits = (FixedBitSet) tf.getDocIdSet(reader.getContext(), reader.getLiveDocs());\n assertThat(bits.cardinality(), equalTo(2));\n ",
"filename": "src/test/java/org/elasticsearch/common/lucene/search/TermsFilterTests.java",
"status": "modified"
},
{
"diff": "@@ -25,7 +25,7 @@\n import org.apache.lucene.document.TextField;\n import org.apache.lucene.index.*;\n import org.apache.lucene.queries.FilterClause;\n-import org.apache.lucene.queries.TermsFilter;\n+import org.apache.lucene.queries.XTermsFilter;\n import org.apache.lucene.search.*;\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.RAMDirectory;\n@@ -88,7 +88,7 @@ private Filter getRangeFilter(String field, String lowerPrice, String upperPrice\n }\n \n private Filter getTermsFilter(String field, String text) {\n- return new TermsFilter(new Term(field, text));\n+ return new XTermsFilter(new Term(field, text));\n }\n \n private Filter getWrappedTermQuery(String field, String text) {",
"filename": "src/test/java/org/elasticsearch/common/lucene/search/XBooleanFilterLuceneTests.java",
"status": "modified"
},
{
"diff": "@@ -1307,8 +1307,8 @@ public void testTermsFilterQueryBuilder() throws Exception {\n Query parsedQuery = queryParser.parse(filteredQuery(termQuery(\"name.first\", \"shay\"), termsFilter(\"name.last\", \"banon\", \"kimchy\"))).query();\n assertThat(parsedQuery, instanceOf(XFilteredQuery.class));\n XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;\n- assertThat(filteredQuery.getFilter(), instanceOf(TermsFilter.class));\n- TermsFilter termsFilter = (TermsFilter) filteredQuery.getFilter();\n+ assertThat(filteredQuery.getFilter(), instanceOf(XTermsFilter.class));\n+ XTermsFilter termsFilter = (XTermsFilter) filteredQuery.getFilter();\n //assertThat(termsFilter.getTerms().length, equalTo(2));\n //assertThat(termsFilter.getTerms()[0].text(), equalTo(\"banon\"));\n }\n@@ -1321,8 +1321,8 @@ public void testTermsFilterQuery() throws Exception {\n Query parsedQuery = queryParser.parse(query).query();\n assertThat(parsedQuery, instanceOf(XFilteredQuery.class));\n XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery;\n- assertThat(filteredQuery.getFilter(), instanceOf(TermsFilter.class));\n- TermsFilter termsFilter = (TermsFilter) filteredQuery.getFilter();\n+ assertThat(filteredQuery.getFilter(), instanceOf(XTermsFilter.class));\n+ XTermsFilter termsFilter = (XTermsFilter) filteredQuery.getFilter();\n //assertThat(termsFilter.getTerms().length, equalTo(2));\n //assertThat(termsFilter.getTerms()[0].text(), equalTo(\"banon\"));\n }\n@@ -1335,8 +1335,8 @@ public void testTermsWithNameFilterQuery() throws Exception {\n assertThat(parsedQuery.namedFilters().containsKey(\"test\"), equalTo(true));\n assertThat(parsedQuery.query(), instanceOf(XFilteredQuery.class));\n XFilteredQuery filteredQuery = (XFilteredQuery) parsedQuery.query();\n- assertThat(filteredQuery.getFilter(), instanceOf(TermsFilter.class));\n- TermsFilter termsFilter = (TermsFilter) filteredQuery.getFilter();\n+ assertThat(filteredQuery.getFilter(), instanceOf(XTermsFilter.class));\n+ XTermsFilter termsFilter = (XTermsFilter) filteredQuery.getFilter();\n //assertThat(termsFilter.getTerms().length, equalTo(2));\n //assertThat(termsFilter.getTerms()[0].text(), equalTo(\"banon\"));\n }",
"filename": "src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java",
"status": "modified"
}
]
} |
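The #5363 row above comes down to TermsFilter instances built from different term lists comparing equal and therefore sharing a cached bitset; the backported `XTermsFilter` fixes equals/hashCode by comparing fields, offsets, and the serialized term bytes. Below is a small, hypothetical sketch (assuming the `XTermsFilter` class from the diff and Lucene 4.7 on the classpath) of the contract the filter cache relies on; it is not part of the original patch.

``` java
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.XTermsFilter;

public class TermsFilterEqualitySketch {
    public static void main(String[] args) {
        // Filters built from the same term list must be equal and share a hash code;
        // filters over different term lists must not compare equal, otherwise the
        // filter cache can hand back another filter's cached bitset.
        XTermsFilter a = new XTermsFilter(new Term("id", "188915"), new Term("id", "189067"));
        XTermsFilter b = new XTermsFilter(new Term("id", "188915"), new Term("id", "189067"));
        XTermsFilter c = new XTermsFilter(new Term("id", "188915"), new Term("id", "188231"));

        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // expected: true
        System.out.println(a.equals(c));                                 // expected: false
    }
}
```

Until such a fix is deployed, the workaround mentioned in the thread is to set an explicit cache key on each terms filter so that distinct filters can never collide in the cache.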
{
"body": "The doc here says http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/common-options.html#fuzziness\n\n> Note: in all APIs except for the Fuzzy Like This Query, the maximum allowed edit distance is 2.\n\nHowever when one tries to do this in Java API:\n\n```\nfuzzyLikeThisQuery(fieldName).likeText(searchString).fuzziness(Fuzziness.fromEdits(4));\n```\n\nAn exception is thrown from the Fuzziness.fromEdits(4) method call\n\n> org.elasticsearch.ElasticsearchIllegalArgumentException: Valid edit distances are [0, 1, 2] but was [4]\n\nWe thought that maybe this is a bug in the Java API and tried the same thing with the REST API. The query was this:\n\n```\n{\n \"flt\": {\n \"fields\": [\n \"comment\"\n ],\n \"like_text\": \"FFFdfds\",\n \"fuzziness\": \"4\"\n }\n}\n```\n\nthe result was this:\n\n> ElasticsearchIllegalArgumentException[Can't get similarity from fuzziness [4]]; }]\n\nBut the query works fro values 1 and 2.\nSo I guess either the documentation is mistaken or the implementation. We were previously using the edit distances of up to 4-5 characters. After the update we're kind of lost for now :)\n",
"comments": [
{
"body": "I will look into this - thanks for opening it\n",
"created_at": "2014-02-28T11:26:06Z"
},
{
"body": "thank you for looking into this :) We're currently being late for a release date cause of this issue. Can you please tell us if we should expect any news on this in the near future?\n\nThanks once more\n",
"created_at": "2014-03-05T13:03:51Z"
},
{
"body": "hey @dnavre a fix for this will be in the next release.. thanks for you patience \n",
"created_at": "2014-03-10T08:57:10Z"
}
],
"number": 5292,
"title": "Edit distance allowed values for fuzzy_like_this query"
} | {
"body": "Due to a regression edit distances > 2 threw exceptions after unifying\nthe fuzziness factor in Elasticsearch `1.0`. This commit brings back the\nexpceted behavior.\n\nCloses #5292\n",
"number": 5374,
"review_comments": [
{
"body": "Should it rather be just `com.carrotsearch.randomizedtesting.generators.RandomPicks`?\n",
"created_at": "2014-03-13T08:09:12Z"
}
],
"title": "Allow edit distances > 2 on FuzzyLikeThisQuery"
} | {
"commits": [
{
"message": "Allow edit distances > 2 on FuzzyLikeThisQuery\n\nDue to a regression edit distances > 2 threw exceptions after unifying\nthe fuzziness factor in Elasticsearch `1.0`. This commit brings back the\nexpceted behavior.\n\nCloses #5292"
}
],
"files": [
{
"diff": "@@ -141,8 +141,15 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n if (fields.isEmpty()) {\n return null;\n }\n+ float minSimilarity = fuzziness.asFloat();\n+ if (minSimilarity >= 1.0f && minSimilarity != (int)minSimilarity) {\n+ throw new ElasticsearchIllegalArgumentException(\"fractional edit distances are not allowed\");\n+ }\n+ if (minSimilarity < 0.0f) {\n+ throw new ElasticsearchIllegalArgumentException(\"minimumSimilarity cannot be less than 0\");\n+ }\n for (String field : fields) {\n- query.addTerms(likeText, field, fuzziness.asSimilarity(), prefixLength);\n+ query.addTerms(likeText, field, minSimilarity, prefixLength);\n }\n query.setBoost(boost);\n query.setIgnoreTF(ignoreTF);",
"filename": "src/main/java/org/elasticsearch/index/query/FuzzyLikeThisQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -362,4 +362,5 @@ public static Class<?> getTestClass() {\n public String getTestName() {\n return threadAndTestNameRule.testMethodName;\n }\n+\n }",
"filename": "src/test/java/org/apache/lucene/util/AbstractRandomizedTest.java",
"status": "modified"
},
{
"diff": "@@ -29,6 +29,7 @@\n import org.apache.lucene.spatial.prefix.IntersectsPrefixTreeFilter;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.NumericUtils;\n+import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cache.recycler.CacheRecyclerModule;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.common.bytes.BytesArray;\n@@ -1674,7 +1675,41 @@ public void testFuzzyLikeThisBuilder() throws Exception {\n IndexQueryParserService queryParser = queryParser();\n Query parsedQuery = queryParser.parse(fuzzyLikeThisQuery(\"name.first\", \"name.last\").likeText(\"something\").maxQueryTerms(12)).query();\n assertThat(parsedQuery, instanceOf(FuzzyLikeThisQuery.class));\n-// FuzzyLikeThisQuery fuzzyLikeThisQuery = (FuzzyLikeThisQuery) parsedQuery;\n+ parsedQuery = queryParser.parse(fuzzyLikeThisQuery(\"name.first\", \"name.last\").likeText(\"something\").maxQueryTerms(12).fuzziness(Fuzziness.build(\"4\"))).query();\n+ assertThat(parsedQuery, instanceOf(FuzzyLikeThisQuery.class));\n+\n+ Query parsedQuery1 = queryParser.parse(fuzzyLikeThisQuery(\"name.first\", \"name.last\").likeText(\"something\").maxQueryTerms(12).fuzziness(Fuzziness.build(\"4.0\"))).query();\n+ assertThat(parsedQuery1, instanceOf(FuzzyLikeThisQuery.class));\n+ assertThat(parsedQuery, equalTo(parsedQuery1));\n+\n+ try {\n+ queryParser.parse(fuzzyLikeThisQuery(\"name.first\", \"name.last\").likeText(\"something\").maxQueryTerms(12).fuzziness(Fuzziness.build(\"4.1\"))).query();\n+ fail(\"exception expected - fractional edit distance\");\n+ } catch (ElasticsearchException ex) {\n+ //\n+ }\n+\n+ try {\n+ queryParser.parse(fuzzyLikeThisQuery(\"name.first\", \"name.last\").likeText(\"something\").maxQueryTerms(12).fuzziness(Fuzziness.build(\"-\" + between(1, 100)))).query();\n+ fail(\"exception expected - negative edit distance\");\n+ } catch (ElasticsearchException ex) {\n+ //\n+ }\n+ String[] queries = new String[] {\n+ \"{\\\"flt\\\": {\\\"fields\\\": [\\\"comment\\\"], \\\"like_text\\\": \\\"FFFdfds\\\",\\\"fuzziness\\\": \\\"4\\\"}}\",\n+ \"{\\\"flt\\\": {\\\"fields\\\": [\\\"comment\\\"], \\\"like_text\\\": \\\"FFFdfds\\\",\\\"fuzziness\\\": \\\"4.00000000\\\"}}\",\n+ \"{\\\"flt\\\": {\\\"fields\\\": [\\\"comment\\\"], \\\"like_text\\\": \\\"FFFdfds\\\",\\\"fuzziness\\\": \\\"4.\\\"}}\",\n+ \"{\\\"flt\\\": {\\\"fields\\\": [\\\"comment\\\"], \\\"like_text\\\": \\\"FFFdfds\\\",\\\"fuzziness\\\": 4}}\",\n+ \"{\\\"flt\\\": {\\\"fields\\\": [\\\"comment\\\"], \\\"like_text\\\": \\\"FFFdfds\\\",\\\"fuzziness\\\": 4.0}}\"\n+ };\n+ int iters = atLeast(5);\n+ for (int i = 0; i < iters; i++) {\n+ parsedQuery = queryParser.parse(new BytesArray((String) randomFrom(queries))).query();\n+ parsedQuery1 = queryParser.parse(new BytesArray((String) randomFrom(queries))).query();\n+ assertThat(parsedQuery1, instanceOf(FuzzyLikeThisQuery.class));\n+ assertThat(parsedQuery, instanceOf(FuzzyLikeThisQuery.class));\n+ assertThat(parsedQuery, equalTo(parsedQuery1));\n+ }\n }\n \n @Test",
"filename": "src/test/java/org/elasticsearch/index/query/SimpleIndexQueryParserTests.java",
"status": "modified"
}
]
} |
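The validation added in the `FuzzyLikeThisQueryParser` diff above is easy to reproduce in isolation. Below is a minimal, self-contained Java sketch of the same checks; the class and method names are illustrative, and a plain `IllegalArgumentException` stands in for `ElasticsearchIllegalArgumentException`.

```java
// Illustrative sketch of the validation added to FuzzyLikeThisQueryParser above:
// fractional edit distances >= 1 and negative values are rejected.
public final class FuzzinessCheck {

    // Hypothetical helper mirroring the parser logic; not an Elasticsearch API.
    static float validateMinSimilarity(float minSimilarity) {
        if (minSimilarity >= 1.0f && minSimilarity != (int) minSimilarity) {
            throw new IllegalArgumentException("fractional edit distances are not allowed");
        }
        if (minSimilarity < 0.0f) {
            throw new IllegalArgumentException("minimumSimilarity cannot be less than 0");
        }
        return minSimilarity;
    }

    public static void main(String[] args) {
        System.out.println(validateMinSimilarity(4.0f)); // ok: whole-number edit distance, same as "4"
        System.out.println(validateMinSimilarity(0.5f)); // ok: similarity below 1.0
        try {
            validateMinSimilarity(4.1f);                 // rejected: fractional edit distance
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```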
{
"body": "As for now, ES ignores field level boost factors specified in mapping during computation of documents' score for common terms query.\nThis places certain restrictions on the client of ES. \nE.g., the client has to specify different boost factors manually if he want to have distinct scores for the same term retrieved from two fields.\nThis quickly becomes unmanageable if the client has no direct control over search fields or if this number grows fast.\nThe absence of the described functionally looks especially weird comparing to the way how regular terms query works (it does honor field level boosts, see below).\n\nSample setup:\n\n**1. Create index and mapping.**\n\n``` bash\ncurl -XPUT 'http://localhost:9200/messages/'\n```\n\n``` bash\ncurl -XPUT 'http://localhost:9200/messages/message/_mapping' -d '\n{\n \"message\" : {\n \"properties\" : {\n \"message\" : {\"type\" : \"string\", \"store\" : true },\n \"comment\" : {\"type\" : \"string\", \"store\" : true , \"boost\" : 5.0 }\n }\n }\n}'\n```\n\n**2. Create sample docs**\n\n``` bash\ncurl -XPUT 'http://localhost:9200/messages/message/1' -d '{\n \"user\" : \"user1\",\n \"message\" : \"test message\",\n \"comment\" : \"whatever\"\n}'\n\ncurl -XPUT 'http://localhost:9200/messages/message/2' -d '{\n \"user\" : \"user2\",\n \"message\" : \"hello world\",\n \"comment\" : \"test comment\"\n}'\n```\n\n**3. Wait for ES to be synced**\n\n``` bash\ncurl -XPOST 'http://localhost:9200/messages/_refresh' \n```\n\n**4. Search in default catch-all field using term query**\n\n``` bash\ncurl -XPOST 'http://localhost:9200/messages/_search' -d '{ \"query\" : { \"query_string\" : { \"query\" : \"test\" } } , \"explain\" : true }' | python -mjson.tool\n```\n\n**5. Search in default catch-all field using common terms query**\n\n``` bash\ncurl -XPOST 'http://localhost:9200/messages/_search' -d '{ \"query\" : { \"common\" : { \"_all\" : { \"query\" : \"test\" } } } , \"explain\" : true }' | python -mjson.tool\n```\n\nThe first query (step 4) works as expected -- document with id equal to 2 has higher score.\nThe result of second query (step 5) is opposite.\n",
"comments": [
{
"body": "ok I see the problem here I guess I have to fix that in lucene as well.... thanks for reporting this\n",
"created_at": "2014-02-26T16:30:45Z"
}
],
"number": 5258,
"title": "Add support of field boost in common terms query"
} | {
"body": "I added a patch to lucene to fix this upstream: \nhttps://issues.apache.org/jira/browse/LUCENE-5478\n\nCloses #5258\n",
"number": 5273,
"review_comments": [
{
"body": "s/obsolet/obsolete/\n",
"created_at": "2014-02-27T13:12:43Z"
},
{
"body": "indentation makes the `if` block a bit hard to read\n",
"created_at": "2014-02-27T13:16:08Z"
},
{
"body": "can `maxTermFrequency * maxDoc` overflow?\n",
"created_at": "2014-02-27T13:17:42Z"
},
{
"body": "(not as a float but due to the cast to int after `Math.ceil`)\n",
"created_at": "2014-02-27T13:19:23Z"
},
{
"body": "well maxDoc is an integer so maxTermFreq \\* maxDoc <= Integer.MAX_VALUE and therefore the ceil is also <= Integer.MAX_VALUE\n",
"created_at": "2014-02-27T13:47:57Z"
},
{
"body": "that is a copy from lucene but I will format that\n",
"created_at": "2014-02-27T13:48:12Z"
},
{
"body": "I will fix\n",
"created_at": "2014-02-27T13:48:18Z"
}
],
"title": "Use FieldMapper to create the low level term queries in CommonTermQuery"
} | {
"commits": [
{
"message": "Use FieldMapper to create the low level term queries in CommonTermQuery\n\nCloses #5258"
}
],
"files": [
{
"diff": "@@ -19,8 +19,20 @@\n \n package org.apache.lucene.queries;\n \n+import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.index.Term;\n+import org.apache.lucene.index.TermContext;\n+import org.apache.lucene.search.BooleanClause;\n import org.apache.lucene.search.BooleanClause.Occur;\n+import org.apache.lucene.search.BooleanQuery;\n+import org.apache.lucene.search.Query;\n+import org.apache.lucene.search.TermQuery;\n+import org.apache.lucene.util.Version;\n+import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.Queries;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+\n+import java.io.IOException;\n \n /**\n * Extended version of {@link CommonTermsQuery} that allows to pass in a\n@@ -29,12 +41,11 @@\n */\n public class ExtendedCommonTermsQuery extends CommonTermsQuery {\n \n- public ExtendedCommonTermsQuery(Occur highFreqOccur, Occur lowFreqOccur, float maxTermFrequency, boolean disableCoord) {\n- super(highFreqOccur, lowFreqOccur, maxTermFrequency, disableCoord);\n- }\n+ private final FieldMapper<?> mapper;\n \n- public ExtendedCommonTermsQuery(Occur highFreqOccur, Occur lowFreqOccur, float maxTermFrequency) {\n- super(highFreqOccur, lowFreqOccur, maxTermFrequency);\n+ public ExtendedCommonTermsQuery(Occur highFreqOccur, Occur lowFreqOccur, float maxTermFrequency, boolean disableCoord, FieldMapper<?> mapper) {\n+ super(highFreqOccur, lowFreqOccur, maxTermFrequency, disableCoord);\n+ this.mapper = mapper;\n }\n \n private String lowFreqMinNumShouldMatchSpec;\n@@ -72,4 +83,94 @@ public void setLowFreqMinimumNumberShouldMatch(String spec) {\n public String getLowFreqMinimumNumberShouldMatchSpec() {\n return lowFreqMinNumShouldMatchSpec;\n }\n+\n+ // LUCENE-UPGRADE: remove this method if on 4.8\n+ @Override\n+ public Query rewrite(IndexReader reader) throws IOException {\n+ if (this.terms.isEmpty()) {\n+ return new BooleanQuery();\n+ } else if (this.terms.size() == 1) {\n+ final Query tq = newTermQuery(this.terms.get(0), null);\n+ tq.setBoost(getBoost());\n+ return tq;\n+ }\n+ return super.rewrite(reader);\n+ }\n+\n+ // LUCENE-UPGRADE: remove this method if on 4.8\n+ @Override\n+ protected Query buildQuery(final int maxDoc,\n+ final TermContext[] contextArray, final Term[] queryTerms) {\n+ BooleanQuery lowFreq = new BooleanQuery(disableCoord);\n+ BooleanQuery highFreq = new BooleanQuery(disableCoord);\n+ highFreq.setBoost(highFreqBoost);\n+ lowFreq.setBoost(lowFreqBoost);\n+ BooleanQuery query = new BooleanQuery(true);\n+ for (int i = 0; i < queryTerms.length; i++) {\n+ TermContext termContext = contextArray[i];\n+ if (termContext == null) {\n+ lowFreq.add(newTermQuery(queryTerms[i], null), lowFreqOccur);\n+ } else {\n+ if ((maxTermFrequency >= 1f && termContext.docFreq() > maxTermFrequency)\n+ || (termContext.docFreq() > (int) Math.ceil(maxTermFrequency * (float) maxDoc))) {\n+ highFreq.add(newTermQuery(queryTerms[i], termContext), highFreqOccur);\n+ } else {\n+ lowFreq.add(newTermQuery(queryTerms[i], termContext), lowFreqOccur);\n+ }\n+ }\n+\n+ }\n+ final int numLowFreqClauses = lowFreq.clauses().size();\n+ final int numHighFreqClauses = highFreq.clauses().size();\n+ if (lowFreqOccur == Occur.SHOULD && numLowFreqClauses > 0) {\n+ int minMustMatch = calcLowFreqMinimumNumberShouldMatch(numLowFreqClauses);\n+ lowFreq.setMinimumNumberShouldMatch(minMustMatch);\n+ }\n+ if (highFreqOccur == Occur.SHOULD && numHighFreqClauses > 0) {\n+ int minMustMatch = calcHighFreqMinimumNumberShouldMatch(numHighFreqClauses);\n+ 
highFreq.setMinimumNumberShouldMatch(minMustMatch);\n+ }\n+ if (lowFreq.clauses().isEmpty()) {\n+ /*\n+ * if lowFreq is empty we rewrite the high freq terms in a conjunction to\n+ * prevent slow queries.\n+ */\n+ if (highFreq.getMinimumNumberShouldMatch() == 0 && highFreqOccur != Occur.MUST) {\n+ for (BooleanClause booleanClause : highFreq) {\n+ booleanClause.setOccur(Occur.MUST);\n+ }\n+ }\n+ highFreq.setBoost(getBoost());\n+ return highFreq;\n+ } else if (highFreq.clauses().isEmpty()) {\n+ // only do low freq terms - we don't have high freq terms\n+ lowFreq.setBoost(getBoost());\n+ return lowFreq;\n+ } else {\n+ query.add(highFreq, Occur.SHOULD);\n+ query.add(lowFreq, Occur.MUST);\n+ query.setBoost(getBoost());\n+ return query;\n+ }\n+ }\n+\n+ static {\n+ assert Version.LUCENE_47.onOrAfter(Lucene.VERSION) : \"Remove obsolete code after upgrade to lucene 4.8\";\n+ }\n+\n+ //@Override\n+ // LUCENE-UPGRADE: remove this method if on 4.8\n+ protected Query newTermQuery(Term term, TermContext context) {\n+ if (mapper == null) {\n+ // this should be super.newTermQuery(term, context) once it's available in the super class\n+ return context == null ? new TermQuery(term) : new TermQuery(term, context);\n+ }\n+ final Query query = mapper.queryStringTermQuery(term);\n+ if (query == null) {\n+ // this should be super.newTermQuery(term, context) once it's available in the super class\n+ return context == null ? new TermQuery(term) : new TermQuery(term, context);\n+ } else {\n+ return query;\n+ }\n+ }\n }",
"filename": "src/main/java/org/apache/lucene/queries/ExtendedCommonTermsQuery.java",
"status": "modified"
},
{
"diff": "@@ -166,19 +166,6 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n if (value == null) {\n throw new QueryParsingException(parseContext.index(), \"No text specified for text query\");\n }\n- ExtendedCommonTermsQuery commonsQuery = new ExtendedCommonTermsQuery(highFreqOccur, lowFreqOccur, maxTermFrequency, disableCoords);\n- commonsQuery.setBoost(boost);\n- Query query = parseQueryString(commonsQuery, value.toString(), fieldName, parseContext, queryAnalyzer, lowFreqMinimumShouldMatch, highFreqMinimumShouldMatch);\n- if (queryName != null) {\n- parseContext.addNamedQuery(queryName, query);\n- }\n- return query;\n- }\n-\n-\n- private final Query parseQueryString(ExtendedCommonTermsQuery query, String queryString, String fieldName, QueryParseContext parseContext,\n- String queryAnalyzer, String lowFreqMinimumShouldMatch, String highFreqMinimumShouldMatch) throws IOException {\n-\n FieldMapper<?> mapper = null;\n String field;\n MapperService.SmartNameFieldMappers smartNameFieldMappers = parseContext.smartFieldMappers(fieldName);\n@@ -207,6 +194,18 @@ private final Query parseQueryString(ExtendedCommonTermsQuery query, String quer\n }\n }\n \n+ ExtendedCommonTermsQuery commonsQuery = new ExtendedCommonTermsQuery(highFreqOccur, lowFreqOccur, maxTermFrequency, disableCoords, mapper);\n+ commonsQuery.setBoost(boost);\n+ Query query = parseQueryString(commonsQuery, value.toString(), field, parseContext, analyzer, lowFreqMinimumShouldMatch, highFreqMinimumShouldMatch, smartNameFieldMappers);\n+ if (queryName != null) {\n+ parseContext.addNamedQuery(queryName, query);\n+ }\n+ return query;\n+ }\n+\n+\n+ private final Query parseQueryString(ExtendedCommonTermsQuery query, String queryString, String field, QueryParseContext parseContext,\n+ Analyzer analyzer, String lowFreqMinimumShouldMatch, String highFreqMinimumShouldMatch, MapperService.SmartNameFieldMappers smartNameFieldMappers) throws IOException {\n // Logic similar to QueryParser#getFieldQuery\n TokenStream source = analyzer.tokenStream(field, queryString.toString());\n int count = 0;",
"filename": "src/main/java/org/elasticsearch/index/query/CommonTermsQueryParser.java",
"status": "modified"
},
{
"diff": "@@ -209,7 +209,7 @@ public Query parse(Type type, String fieldName, Object value) throws IOException\n if (commonTermsCutoff == null) {\n query = builder.createBooleanQuery(field, value.toString(), occur);\n } else {\n- query = builder.createCommonTermsQuery(field, value.toString(), occur, occur, commonTermsCutoff);\n+ query = builder.createCommonTermsQuery(field, value.toString(), occur, occur, commonTermsCutoff, mapper);\n }\n break;\n case PHRASE:\n@@ -276,11 +276,11 @@ public Query createPhrasePrefixQuery(String field, String queryText, int phraseS\n return query;\n }\n \n- public Query createCommonTermsQuery(String field, String queryText, Occur highFreqOccur, Occur lowFreqOccur, float maxTermFrequency) {\n+ public Query createCommonTermsQuery(String field, String queryText, Occur highFreqOccur, Occur lowFreqOccur, float maxTermFrequency, FieldMapper<?> mapper) {\n Query booleanQuery = createBooleanQuery(field, queryText, Occur.SHOULD);\n if (booleanQuery != null && booleanQuery instanceof BooleanQuery) {\n BooleanQuery bq = (BooleanQuery) booleanQuery;\n- ExtendedCommonTermsQuery query = new ExtendedCommonTermsQuery(highFreqOccur, lowFreqOccur, maxTermFrequency, ((BooleanQuery)booleanQuery).isCoordDisabled());\n+ ExtendedCommonTermsQuery query = new ExtendedCommonTermsQuery(highFreqOccur, lowFreqOccur, maxTermFrequency, ((BooleanQuery)booleanQuery).isCoordDisabled(), mapper);\n for (BooleanClause clause : bq.clauses()) {\n if (!(clause.getQuery() instanceof TermQuery)) {\n return booleanQuery;",
"filename": "src/main/java/org/elasticsearch/index/search/MatchQuery.java",
"status": "modified"
},
{
"diff": "@@ -252,6 +252,21 @@ public void testAllDocsQueryString() throws InterruptedException, ExecutionExcep\n }\n }\n \n+ @Test\n+ public void testCommonTermsQueryOnAllField() throws Exception {\n+ client().admin().indices().prepareCreate(\"test\")\n+ .addMapping(\"type1\", \"message\", \"type=string\", \"comment\", \"type=string,boost=5.0\")\n+ .setSettings(SETTING_NUMBER_OF_SHARDS, 1).get();\n+ indexRandom(true, client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"message\", \"test message\", \"comment\", \"whatever\"),\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"message\", \"hello world\", \"comment\", \"test comment\"));\n+\n+ SearchResponse searchResponse = client().prepareSearch().setQuery(commonTerms(\"_all\", \"test\")).get();\n+ assertHitCount(searchResponse, 2l);\n+ assertFirstHit(searchResponse, hasId(\"2\"));\n+ assertSecondHit(searchResponse, hasId(\"1\"));\n+ assertThat(searchResponse.getHits().getHits()[0].getScore(), greaterThan(searchResponse.getHits().getHits()[1].getScore()));\n+ }\n+\n @Test\n public void testCommonTermsQuery() throws Exception {\n client().admin().indices().prepareCreate(\"test\")",
"filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java",
"status": "modified"
}
]
} |
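The essence of the fix above is that each low-level term query is now built through the field's mapper, so a boost configured in the mapping (like `comment` boosted to 5.0 in the issue's example) survives into the scored query. The sketch below shows that delegation pattern against the Lucene 4.x API used by Elasticsearch 1.x (where `Query.setBoost` still exists); `TermQueryFactory` is a hypothetical stand-in for `FieldMapper#queryStringTermQuery`, not a real class.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// Sketch of the delegation pattern from the diff: if the field's mapper can produce a term
// query, use it so the field-level boost from the mapping is kept; otherwise fall back to a
// plain TermQuery with the default boost of 1.0.
public class BoostAwareTermQueries {

    interface TermQueryFactory {
        Query termQuery(Term term); // may return null if nothing field-specific applies
    }

    static Query newTermQuery(Term term, TermQueryFactory mapper) {
        if (mapper != null) {
            Query mapped = mapper.termQuery(term);
            if (mapped != null) {
                return mapped; // keeps any field-level boost configured in the mapping
            }
        }
        return new TermQuery(term); // fallback: plain term query, boost 1.0
    }

    public static void main(String[] args) {
        // "comment" was mapped with boost 5.0 in the issue's example mapping.
        TermQueryFactory commentMapper = term -> {
            Query q = new TermQuery(term);
            q.setBoost(5.0f);
            return q;
        };
        System.out.println(newTermQuery(new Term("comment", "test"), commentMapper)); // comment:test^5.0
        System.out.println(newTermQuery(new Term("message", "test"), null));          // message:test
    }
}
```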
{
"body": "mapping extract\n\n```\n \"extension\": {\n \"type\": \"string\", // eg .xls, no empty fields\n \"index\": \"not_analyzed\",\n },\n \"sharepath\": {\n \"type\": \"string\", // eg //1.2.3.4/Someshare/, no empty fields\n \"index\": \"not_analyzed\",\n },\n \"doc_type\": {\n \"type\": \"string\", // eg Spreadsheet Files, no empty fields\n \"index\": \"not_analyzed\"\n },\n```\n\nquery works\n\n```\n{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"and\": [\n {\n \"range\": {\n \"modified\": {\n \"lt\": 1391775892000\n }\n }\n }\n ]\n },\n \"query\": {\n \"match_all\": {}\n }\n }\n },\n \"aggs\": {\n \"sharepath\": {\n \"terms\": {\n \"field\": \"sharepath\",\n \"size\": 2147483647\n },\n \"aggs\": {\n \"total_size_sharepath\": {\n \"filter\": {\n \"term\": {\n \"doc_type\": \"Spreadsheet Files\"\n }\n },\n \"aggs\": {\n \"total_size\": {\n \"stats\": {\n \"field\": \"size\"\n }\n }\n }\n }\n }\n }\n },\n \"size\": 0\n}\n```\n\nquery fails (more or less same query as before)\n\n```\n{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"and\": [\n {\n \"range\": {\n \"modified\": {\n \"lt\": 1391775892000\n }\n }\n }\n ]\n },\n \"query\": {\n \"match_all\": {}\n }\n }\n },\n \"aggs\": {\n \"extension\": {\n \"terms\": {\n \"field\": \"extension\", // previously 'sharepath' which works\n \"size\": 2147483647\n },\n \"aggs\": {\n \"total_size_extension\": {\n \"filter\": {\n \"term\": {\n \"doc_type\": \"Spreadsheet Files\"\n }\n },\n \"aggs\": {\n \"total_size\": {\n \"stats\": {\n \"field\": \"size\"\n }\n }\n }\n }\n }\n }\n },\n \"size\": 0\n}\n```\n\nStacktrace\n\n```\n[2014-02-07 14:07:35,301][DEBUG][action.search.type ] [Copycat] [files_v1][3], node[XlVxAUsKRNinZGxwgkLyeg], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@71c8e67c]\njava.lang.ArrayIndexOutOfBoundsException: 51\n at org.elasticsearch.common.util.BigArrays$LongArrayWrapper.get(BigArrays.java:118)\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketDocCount(BucketsAggregator.java:79)\n at org.elasticsearch.search.aggregations.bucket.filter.FilterAggregator.buildAggregation(FilterAggregator.java:73)\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:88)\n at org.elasticsearch.search.aggregations.bucket.terms.StringTermsAggregator.buildAggregation(StringTermsAggregator.java:121)\n at org.elasticsearch.search.aggregations.bucket.terms.StringTermsAggregator.buildAggregation(StringTermsAggregator.java:41)\n at org.elasticsearch.search.aggregations.AggregationPhase.execute(AggregationPhase.java:132)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:137)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:230)\n at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)\n at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)\n at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)\n at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:701)\n```\n",
"comments": [
{
"body": "Thanks for reporting this issue. Which version of Elasticsearch are you using? If not 1.0 RC2, could you try to reproduce with 1.0 RC2, there used to be such an issue in 1.0 Beta1 and Beta2 but this should be fixed now (by this commit: d0143703a19f6f8bac28096ab96f81ac3477a6eb).\n",
"created_at": "2014-02-07T14:50:11Z"
},
{
"body": "Thanks for your quick response. I'm using Beta2 as the mongo river plugin doesn't work with RC2 yet. \n\nI've just spent 20 mins looking back through the recent commits for a fix, either I missed it or didn't go back far enough ;). I should be able to test against RC2 on Monday.\n",
"created_at": "2014-02-07T14:57:15Z"
},
{
"body": "Same kind of stacktrace here, using 1.0.0 (buildhash= a46900e9c72c0a623d71b54016357d5f94c8ea32)\n\n```\nCaused by: java.lang.ArrayIndexOutOfBoundsException: 6712\n at org.elasticsearch.common.util.BigArrays$ObjectArrayWrapper.get(BigArrays.java:257)\n at org.elasticsearch.search.aggregations.AggregatorFactories$1.buildAggregation(AggregatorFactories.java:102)\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:107)\n at org.elasticsearch.search.aggregations.bucket.terms.StringTermsAggregator.buildAggregation(StringTermsAggregator.java:235)\n at org.elasticsearch.search.aggregations.bucket.terms.StringTermsAggregator.buildAggregation(StringTermsAggregator.java:50)\n at org.elasticsearch.search.aggregations.AggregatorFactories$1.buildAggregation(AggregatorFactories.java:102)\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:107)\n at org.elasticsearch.search.aggregations.bucket.filter.FilterAggregator.buildAggregation(FilterAggregator.java:72)\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.bucketAggregations(BucketsAggregator.java:107)\n at org.elasticsearch.search.aggregations.bucket.filter.FilterAggregator.buildAggregation(FilterAggregator.java:72)\n at org.elasticsearch.search.aggregations.AggregationPhase.execute(AggregationPhase.java:134)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:135)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:244)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:623)\n at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:612)\n at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\n```\n",
"created_at": "2014-02-25T07:18:28Z"
},
{
"body": "Looks good so far @jpountz :+1: \n",
"created_at": "2014-02-25T17:32:12Z"
}
],
"number": 5048,
"title": "aggregation error ArrayIndexOutOfBoundsException"
} | {
"body": "Close #5048\n",
"number": 5250,
"review_comments": [],
"title": "Fix NPE/AIOOBE when building a bucket which has not been collected."
} | {
"commits": [
{
"message": "Fix NPE/AIOOBE when building a bucket which has not been collected.\n\nClose #5048"
}
],
"files": [
{
"diff": "@@ -116,7 +116,13 @@ public void setNextReader(AtomicReaderContext reader) {\n \n @Override\n public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n- return aggregators.get(owningBucketOrdinal).buildAggregation(0);\n+ // The bucket ordinal may be out of range in case of eg. a terms/filter/terms where\n+ // the filter matches no document in the highest buckets of the first terms agg\n+ if (owningBucketOrdinal >= aggregators.size() || aggregators.get(owningBucketOrdinal) == null) {\n+ return first.buildEmptyAggregation();\n+ } else {\n+ return aggregators.get(owningBucketOrdinal).buildAggregation(0);\n+ }\n }\n \n @Override",
"filename": "src/main/java/org/elasticsearch/search/aggregations/AggregatorFactories.java",
"status": "modified"
}
]
} |
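The fix above boils down to a defensive lookup: a sub-aggregator exists only for bucket ordinals that actually collected documents, so an out-of-range ordinal or a never-filled slot must resolve to an empty aggregation instead of throwing. A toy, self-contained illustration of that guard follows; plain strings stand in for the real aggregator and `InternalAggregation` types.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of the defensive lookup the fix adds to AggregatorFactories.
public class BucketLookupSketch {

    static String buildAggregation(List<String> perBucketAggregators, long owningBucketOrdinal,
                                   String emptyAggregation) {
        // Ordinals past the end of the array, or slots that were never collected, yield the
        // empty aggregation instead of an ArrayIndexOutOfBoundsException / NullPointerException.
        if (owningBucketOrdinal >= perBucketAggregators.size()
                || perBucketAggregators.get((int) owningBucketOrdinal) == null) {
            return emptyAggregation;
        }
        return perBucketAggregators.get((int) owningBucketOrdinal);
    }

    public static void main(String[] args) {
        List<String> aggregators = new ArrayList<>();
        aggregators.add("stats(bucket 0)");
        aggregators.add(null); // bucket 1 matched no documents for the filter sub-aggregation
        System.out.println(buildAggregation(aggregators, 0, "empty"));  // stats(bucket 0)
        System.out.println(buildAggregation(aggregators, 1, "empty"));  // empty
        System.out.println(buildAggregation(aggregators, 51, "empty")); // empty (the issue's AIOOBE case)
    }
}
```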
{
"body": "This pull request fixes the sorting of aggregations as discussed in issue #5236.\n\nIt forces the `NaN` values to be pushed to the bottom of the list, no matter if you are sorting asc/desc. This way top lists show the most relevant aggregates in the top. Also it actually does what the comments above it already said it should do.\n\nCloses #5236.\n",
"comments": [
{
"body": "The fix look good. Can you write a unit test for this? (If that looks complicated to you, I can take care of this, just let me know!)\n",
"created_at": "2014-02-24T13:56:23Z"
},
{
"body": "I've been looking into the test, but I find it hard to see where to add what :).\n\nI found `StringTermsTests` (and my changes didn't break it) where some tests for terms aggregations are, including sorting on sub aggregate. But for this test I need to change the source dataset in a way I cannot really comprehense.\n\nWould you mind adding a test?\n",
"created_at": "2014-02-24T14:30:13Z"
},
{
"body": "Indeed, this would be easier to have a different test case.\n\n> Would you mind adding a test?\n\nSure, I will work on it!\n",
"created_at": "2014-02-24T14:43:30Z"
},
{
"body": "left one comment... other than that LGTM\n",
"created_at": "2014-02-25T15:07:42Z"
}
],
"number": 5237,
"title": "Changing the sorting for terms aggs,"
} | {
"body": "This pull request iterates on #5237 to add unit tests to terms and histogram aggregations when sorting on sub-aggregations that return NaN (eg. an average aggregation over an empty set).\n",
"number": 5248,
"review_comments": [
{
"body": "wondering whether we can randomize here between `avg`, `variance`, `std_deviation`?\n",
"created_at": "2014-02-25T15:04:11Z"
}
],
"title": "Iteration on #5237 to add unit tests"
} | {
"commits": [
{
"message": "Changing the sorting for terms aggs,\n\nCloses #5236."
},
{
"message": "Return 0 if both values are NaN."
},
{
"message": "Added unit tests for terms and histogram sorting on sub aggregations that return NaN."
},
{
"message": "Randomize sub aggregation."
}
],
"files": [
{
"diff": "@@ -0,0 +1,43 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.util;\n+\n+import java.util.Comparator;\n+\n+/**\n+ * {@link Comparator}-related utility methods.\n+ */\n+public enum Comparators {\n+ ;\n+\n+ /**\n+ * Compare <code>d1</code> against <code>d2</code>, pushing {@value Double#NaN} at the bottom.\n+ */\n+ public static int compareDiscardNaN(double d1, double d2, boolean asc) {\n+ if (Double.isNaN(d1)) {\n+ return Double.isNaN(d2) ? 0 : 1;\n+ } else if (Double.isNaN(d2)) {\n+ return -1;\n+ } else {\n+ return asc ? Double.compare(d1, d2) : Double.compare(d2, d1);\n+ }\n+ }\n+\n+}",
"filename": "src/main/java/org/elasticsearch/common/util/Comparators.java",
"status": "added"
},
{
"diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.common.text.Text;\n+import org.elasticsearch.common.util.Comparators;\n import org.elasticsearch.search.aggregations.Aggregation;\n import org.elasticsearch.search.aggregations.Aggregations;\n import org.elasticsearch.search.aggregations.metrics.MetricsAggregation;\n@@ -99,7 +100,7 @@ public String valueName() {\n public int compare(B b1, B b2) {\n double v1 = value(b1);\n double v2 = value(b2);\n- return asc ? Double.compare(v1, v2) : Double.compare(v2, v1);\n+ return Comparators.compareDiscardNaN(v1, v2, asc);\n }\n \n private double value(B bucket) {",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/MultiBucketsAggregation.java",
"status": "modified"
},
{
"diff": "@@ -21,6 +21,7 @@\n import com.google.common.primitives.Longs;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.util.Comparators;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.AggregationExecutionException;\n import org.elasticsearch.search.aggregations.Aggregator;\n@@ -212,10 +213,7 @@ public int compare(Terms.Bucket o1, Terms.Bucket o2) {\n double v2 = ((MetricsAggregator.MultiValue) aggregator).metric(valueName, ((InternalTerms.Bucket) o2).bucketOrd);\n // some metrics may return NaN (eg. avg, variance, etc...) in which case we'd like to push all of those to\n // the bottom\n- if (v1 == Double.NaN) {\n- return asc ? 1 : -1;\n- }\n- return asc ? Double.compare(v1, v2) : Double.compare(v2, v1);\n+ return Comparators.compareDiscardNaN(v1, v2, asc);\n }\n };\n }\n@@ -227,10 +225,7 @@ public int compare(Terms.Bucket o1, Terms.Bucket o2) {\n double v2 = ((MetricsAggregator.SingleValue) aggregator).metric(((InternalTerms.Bucket) o2).bucketOrd);\n // some metrics may return NaN (eg. avg, variance, etc...) in which case we'd like to push all of those to\n // the bottom\n- if (v1 == Double.NaN) {\n- return asc ? 1 : -1;\n- }\n- return asc ? Double.compare(v1, v2) : Double.compare(v2, v1);\n+ return Comparators.compareDiscardNaN(v1, v2, asc);\n }\n };\n }",
"filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/InternalOrder.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,183 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.aggregations.bucket;\n+\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.util.Comparators;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.search.aggregations.Aggregation;\n+import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.metrics.MetricsAggregationBuilder;\n+import org.elasticsearch.search.aggregations.metrics.avg.Avg;\n+import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStats;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Before;\n+import org.junit.Test;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n+import static org.hamcrest.core.IsNull.notNullValue;\n+\n+public class NaNSortingTests extends ElasticsearchIntegrationTest {\n+\n+ @Override\n+ public Settings indexSettings() {\n+ return ImmutableSettings.builder()\n+ .put(\"index.number_of_shards\", between(1, 5))\n+ .put(\"index.number_of_replicas\", between(0, 1))\n+ .build();\n+ }\n+\n+ private enum SubAggregation {\n+ AVG(\"avg\") {\n+ @Override\n+ public MetricsAggregationBuilder<?> builder() {\n+ return avg(name).field(\"numeric_field\");\n+ }\n+ @Override\n+ public double getValue(Aggregation aggregation) {\n+ return ((Avg) aggregation).getValue();\n+ }\n+ },\n+ VARIANCE(\"variance\") {\n+ @Override\n+ public MetricsAggregationBuilder<?> builder() {\n+ return extendedStats(name).field(\"numeric_field\");\n+ }\n+ @Override\n+ public String sortKey() {\n+ return name + \".variance\";\n+ }\n+ @Override\n+ public double getValue(Aggregation aggregation) {\n+ return ((ExtendedStats) aggregation).getVariance();\n+ }\n+ },\n+ STD_DEVIATION(\"std_deviation\"){\n+ @Override\n+ public MetricsAggregationBuilder<?> builder() {\n+ return extendedStats(name).field(\"numeric_field\");\n+ }\n+ @Override\n+ public String sortKey() {\n+ return name + \".std_deviation\";\n+ }\n+ @Override\n+ public double getValue(Aggregation aggregation) {\n+ return ((ExtendedStats) aggregation).getStdDeviation();\n+ }\n+ };\n+\n+ SubAggregation(String name) {\n+ this.name = name;\n+ }\n+\n+ public String name;\n+\n+ public abstract MetricsAggregationBuilder<?> builder();\n+\n+ public String sortKey() {\n+ return name;\n+ }\n+\n+ public abstract double getValue(Aggregation aggregation);\n+ 
}\n+\n+ @Before\n+ public void init() throws Exception {\n+ createIndex(\"idx\");\n+ final int numDocs = randomIntBetween(2, 10);\n+ for (int i = 0; i < numDocs; ++i) {\n+ final long value = randomInt(5);\n+ XContentBuilder source = jsonBuilder().startObject().field(\"long_value\", value).field(\"double_value\", value + 0.05).field(\"string_value\", \"str_\" + value);\n+ if (randomBoolean()) {\n+ source.field(\"numeric_value\", randomDouble());\n+ }\n+ client().prepareIndex(\"idx\", \"type\").setSource(source.endObject()).execute().actionGet();\n+ }\n+ refresh();\n+ ensureSearchable();\n+ }\n+\n+ private void assertCorrectlySorted(Terms terms, boolean asc, SubAggregation agg) {\n+ assertThat(terms, notNullValue());\n+ double previousValue = asc ? Double.NEGATIVE_INFINITY : Double.POSITIVE_INFINITY;\n+ for (Terms.Bucket bucket : terms.getBuckets()) {\n+ Aggregation sub = bucket.getAggregations().get(agg.name);\n+ double value = agg.getValue(sub);\n+ assertTrue(Comparators.compareDiscardNaN(previousValue, value, asc) <= 0);\n+ previousValue = value;\n+ }\n+ }\n+\n+ private void assertCorrectlySorted(Histogram histo, boolean asc, SubAggregation agg) {\n+ assertThat(histo, notNullValue());\n+ double previousValue = asc ? Double.NEGATIVE_INFINITY : Double.POSITIVE_INFINITY;\n+ for (Histogram.Bucket bucket : histo.getBuckets()) {\n+ Aggregation sub = bucket.getAggregations().get(agg.name);\n+ double value = agg.getValue(sub);\n+ assertTrue(Comparators.compareDiscardNaN(previousValue, value, asc) <= 0);\n+ previousValue = value;\n+ }\n+ }\n+\n+ public void testTerms(String fieldName) {\n+ final boolean asc = randomBoolean();\n+ SubAggregation agg = randomFrom(SubAggregation.values());\n+ SearchResponse response = client().prepareSearch(\"idx\")\n+ .addAggregation(terms(\"terms\").field(fieldName).subAggregation(agg.builder()).order(Terms.Order.aggregation(agg.sortKey(), asc)))\n+ .execute().actionGet();\n+\n+ final Terms terms = response.getAggregations().get(\"terms\");\n+ assertCorrectlySorted(terms, asc, agg);\n+ }\n+\n+ @Test\n+ public void stringTerms() {\n+ testTerms(\"string_value\");\n+ }\n+\n+ @Test\n+ public void longTerms() {\n+ testTerms(\"long_value\");\n+ }\n+\n+ @Test\n+ public void doubleTerms() {\n+ testTerms(\"double_value\");\n+ }\n+\n+ @Test\n+ public void longHistogram() {\n+ final boolean asc = randomBoolean();\n+ SubAggregation agg = randomFrom(SubAggregation.values());\n+ SearchResponse response = client().prepareSearch(\"idx\")\n+ .addAggregation(histogram(\"histo\")\n+ .field(\"long_value\").interval(randomIntBetween(1, 2)).subAggregation(agg.builder()).order(Histogram.Order.aggregation(agg.sortKey(), asc)))\n+ .execute().actionGet();\n+\n+ final Histogram histo = response.getAggregations().get(\"histo\");\n+ assertCorrectlySorted(histo, asc, agg);\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/NaNSortingTests.java",
"status": "added"
}
]
} |
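The behaviour of the new `Comparators.compareDiscardNaN` helper is easy to demonstrate outside Elasticsearch. The method body below is copied from the diff so the demo is self-contained; only the surrounding class and `main` method are illustrative.

```java
import java.util.Arrays;

// Demonstrates that NaN sorts to the end for both ascending and descending order.
public class NaNSortingDemo {

    static int compareDiscardNaN(double d1, double d2, boolean asc) {
        if (Double.isNaN(d1)) {
            return Double.isNaN(d2) ? 0 : 1;
        } else if (Double.isNaN(d2)) {
            return -1;
        } else {
            return asc ? Double.compare(d1, d2) : Double.compare(d2, d1);
        }
    }

    public static void main(String[] args) {
        Double[] values = {3.0, Double.NaN, 1.0, 2.0};

        Double[] desc = values.clone();
        Arrays.sort(desc, (a, b) -> compareDiscardNaN(a, b, false));
        System.out.println(Arrays.toString(desc)); // [3.0, 2.0, 1.0, NaN]

        Double[] asc = values.clone();
        Arrays.sort(asc, (a, b) -> compareDiscardNaN(a, b, true));
        System.out.println(Arrays.toString(asc));  // [1.0, 2.0, 3.0, NaN]
    }
}
```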
{
"body": "the new BlendedTermQuery doesn't implement:\n\n```\n@Override\npublic void extractTerms(Set<Term> terms) {\n //...\n}\n```\n\nwhich causes an unsupported op exception to be thrown. This code has not yet been released.\n",
"comments": [],
"number": 5246,
"title": "MultiMatchQuery fails to highlight with new cross field mode"
} | {
"body": "some of the highlighters require term extraction to be implemented in\norder to work. BlendedTermQuery doesn't implement the trivial extraction.\n\nCloses #5246\n",
"number": 5247,
"review_comments": [],
"title": "Implement BlendedTermQuery#extractTerms to support highlighing."
} | {
"commits": [
{
"message": "Implement BlendedTermQuery#extractTerms to support highlighing.\n\nsome of the highlighters require term extraction to be implemented in\norder to work. BlendedTermQuery doesn't implement the trivial extraction.\n\nCloses #5246"
}
],
"files": [
{
"diff": "@@ -27,6 +27,7 @@\n import java.util.Arrays;\n import java.util.Comparator;\n import java.util.List;\n+import java.util.Set;\n \n /**\n * BlendedTermQuery can be used to unify term statistics across\n@@ -188,6 +189,13 @@ public String toString(String field) {\n \n }\n \n+ @Override\n+ public void extractTerms(Set<Term> terms) {\n+ for (Term term : this.terms) {\n+ terms.add(term);\n+ }\n+ }\n+\n private volatile Term[] equalTerms = null;\n \n private Term[] equalsTerms() {",
"filename": "src/main/java/org/apache/lucene/queries/BlendedTermQuery.java",
"status": "modified"
},
{
"diff": "@@ -21,12 +21,11 @@\n \n import org.apache.lucene.index.IndexReader;\n import org.apache.lucene.index.Term;\n+import org.apache.lucene.queries.BlendedTermQuery;\n import org.apache.lucene.queries.FilterClause;\n import org.apache.lucene.queries.TermFilter;\n import org.apache.lucene.search.*;\n import org.apache.lucene.search.spans.SpanTermQuery;\n-import org.apache.lucene.util.Version;\n-import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n import org.elasticsearch.common.lucene.search.XBooleanFilter;\n import org.elasticsearch.common.lucene.search.XFilteredQuery;\n@@ -91,7 +90,10 @@ void flatten(Query sourceQuery, IndexReader reader, Collection<Query> flatQuerie\n flatten(((FiltersFunctionScoreQuery) sourceQuery).getSubQuery(), reader, flatQueries);\n } else if (sourceQuery instanceof MultiPhraseQuery) {\n MultiPhraseQuery q = ((MultiPhraseQuery) sourceQuery);\n- convertMultiPhraseQuery(0, new int[q.getTermArrays().size()] , q, q.getTermArrays(), q.getPositions(), reader, flatQueries);\n+ convertMultiPhraseQuery(0, new int[q.getTermArrays().size()], q, q.getTermArrays(), q.getPositions(), reader, flatQueries);\n+ } else if (sourceQuery instanceof BlendedTermQuery) {\n+ final BlendedTermQuery blendedTermQuery = (BlendedTermQuery) sourceQuery;\n+ flatten(blendedTermQuery.rewrite(reader), reader, flatQueries);\n } else {\n super.flatten(sourceQuery, reader, flatQueries);\n }",
"filename": "src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java",
"status": "modified"
},
{
"diff": "@@ -19,10 +19,8 @@\n \n package org.elasticsearch.search.highlight;\n \n-import java.io.IOException;\n-import java.util.Map;\n-\n import org.apache.lucene.index.IndexReader;\n+import org.apache.lucene.queries.BlendedTermQuery;\n import org.apache.lucene.search.Query;\n import org.apache.lucene.search.highlight.QueryScorer;\n import org.apache.lucene.search.highlight.WeightedSpanTerm;\n@@ -31,6 +29,9 @@\n import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery;\n import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;\n \n+import java.io.IOException;\n+import java.util.Map;\n+\n public final class CustomQueryScorer extends QueryScorer {\n \n public CustomQueryScorer(Query query, IndexReader reader, String field,\n@@ -86,6 +87,8 @@ protected void extractUnknownQuery(Query query,\n } else if (query instanceof XFilteredQuery) {\n query = ((XFilteredQuery) query).getQuery();\n extract(query, terms);\n+ } else if (query instanceof BlendedTermQuery) {\n+ extractWeightedTerms(terms, query);\n }\n }\n ",
"filename": "src/main/java/org/elasticsearch/search/highlight/CustomQueryScorer.java",
"status": "modified"
},
{
"diff": "@@ -34,16 +34,19 @@\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.util._TestUtil;\n import org.elasticsearch.test.ElasticsearchLuceneTestCase;\n+import org.junit.Test;\n \n import java.io.IOException;\n-import java.util.Arrays;\n-import java.util.Collections;\n-import java.util.List;\n+import java.util.*;\n+\n+import static org.hamcrest.Matchers.containsInAnyOrder;\n+import static org.hamcrest.Matchers.equalTo;\n \n /**\n */\n public class BlendedTermQueryTest extends ElasticsearchLuceneTestCase {\n \n+ @Test\n public void testBooleanQuery() throws IOException {\n Directory dir = newDirectory();\n IndexWriter w = new IndexWriter(dir, newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random())));\n@@ -95,6 +98,7 @@ public void testBooleanQuery() throws IOException {\n \n }\n \n+ @Test\n public void testDismaxQuery() throws IOException {\n Directory dir = newDirectory();\n IndexWriter w = new IndexWriter(dir, newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random())));\n@@ -165,6 +169,7 @@ public void testDismaxQuery() throws IOException {\n dir.close();\n }\n \n+ @Test\n public void testBasics() {\n final int iters = atLeast(5);\n for (int j = 0; j < iters; j++) {\n@@ -201,4 +206,20 @@ public IndexSearcher setSimilarity(IndexSearcher searcher) {\n searcher.setSimilarity(similarity);\n return searcher;\n }\n+\n+ @Test\n+ public void testExtractTerms() {\n+ Set<Term> terms = new HashSet<Term>();\n+ int num = atLeast(1);\n+ for (int i = 0; i < num; i++) {\n+ terms.add(new Term(_TestUtil.randomRealisticUnicodeString(random(), 1, 10), _TestUtil.randomRealisticUnicodeString(random(), 1, 10)));\n+ }\n+\n+ BlendedTermQuery blendedTermQuery = random().nextBoolean() ? BlendedTermQuery.dismaxBlendedQuery(terms.toArray(new Term[0]), random().nextFloat()) :\n+ BlendedTermQuery.booleanBlendedQuery(terms.toArray(new Term[0]), random().nextBoolean());\n+ Set<Term> extracted = new HashSet<Term>();\n+ blendedTermQuery.extractTerms(extracted);\n+ assertThat(extracted.size(), equalTo(terms.size()));\n+ assertThat(extracted, containsInAnyOrder(terms.toArray(new Term[0])));\n+ }\n }",
"filename": "src/test/java/org/apache/lucene/queries/BlendedTermQueryTest.java",
"status": "modified"
},
{
"diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.highlight;\n \n+import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n import com.google.common.base.Joiner;\n import com.google.common.collect.Iterables;\n import org.apache.lucene.util.LuceneTestCase.Slow;\n@@ -54,6 +55,7 @@\n import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.elasticsearch.test.hamcrest.RegexMatcher.matches;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.*;\n \n /**\n@@ -2065,14 +2067,13 @@ public void testPostingsHighlighterRequireFieldMatch() throws Exception {\n assertHighlight(searchResponse, 0, \"field2\", 0, equalTo(\"The quick brown <field2>fox</field2> jumps over the lazy dog.\"));\n assertHighlight(searchResponse, 0, \"field2\", 1, equalTo(\"The lazy red <field2>fox</field2> jumps over the quick dog.\"));\n assertHighlight(searchResponse, 0, \"field2\", 2, 3, equalTo(\"The quick brown dog jumps over the lazy <field2>fox</field2>.\"));\n-\n logger.info(\"--> highlighting and searching on field1 and field2 via multi_match query\");\n+ final MultiMatchQueryBuilder mmquery = multiMatchQuery(\"fox\", \"field1\", \"field2\").type(RandomPicks.randomFrom(getRandom(), MultiMatchQueryBuilder.Type.values()));\n source = searchSource()\n- .query(multiMatchQuery(\"fox\", \"field1\", \"field2\"))\n- .highlight(highlight()\n- .field(new HighlightBuilder.Field(\"field1\").requireFieldMatch(true).preTags(\"<field1>\").postTags(\"</field1>\"))\n- .field(new HighlightBuilder.Field(\"field2\").requireFieldMatch(true).preTags(\"<field2>\").postTags(\"</field2>\")));\n-\n+ .query(mmquery)\n+ .highlight(highlight().highlightQuery(randomBoolean() ? 
mmquery : null)\n+ .field(new HighlightBuilder.Field(\"field1\").requireFieldMatch(true).preTags(\"<field1>\").postTags(\"</field1>\"))\n+ .field(new HighlightBuilder.Field(\"field2\").requireFieldMatch(true).preTags(\"<field2>\").postTags(\"</field2>\")));\n searchResponse = client().search(searchRequest(\"test\").source(source)).actionGet();\n assertHitCount(searchResponse, 1l);\n \n@@ -2085,6 +2086,39 @@ public void testPostingsHighlighterRequireFieldMatch() throws Exception {\n assertHighlight(searchResponse, 0, \"field2\", 2, 3, equalTo(\"The quick brown dog jumps over the lazy <field2>fox</field2>.\"));\n }\n \n+ @Test\n+ public void testMultiMatchQueryHighlight() throws IOException {\n+ String[] highlighterTypes = new String[] {\"fvh\", \"plain\", \"postings\"};\n+ XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type1\")\n+ .startObject(\"_all\").field(\"store\", \"yes\").field(\"index_options\", \"offsets\").endObject()\n+ .startObject(\"properties\")\n+ .startObject(\"field1\").field(\"type\", \"string\").field(\"index_options\", \"offsets\").field(\"term_vector\", \"with_positions_offsets\").endObject()\n+ .startObject(\"field2\").field(\"type\", \"string\").field(\"index_options\", \"offsets\").field(\"term_vector\", \"with_positions_offsets\").endObject()\n+ .endObject()\n+ .endObject().endObject();\n+ assertAcked(client().admin().indices().prepareCreate(\"test\").addMapping(\"type1\", mapping));\n+ ensureGreen();\n+ client().prepareIndex(\"test\", \"type1\")\n+ .setSource(\"field1\", \"The quick brown fox jumps over\",\n+ \"field2\", \"The quick brown fox jumps over\").get();\n+ refresh();\n+ final int iters = atLeast(20);\n+ for (int i = 0; i < iters; i++) {\n+ MultiMatchQueryBuilder.Type matchQueryType = rarely() ? null : RandomPicks.randomFrom(getRandom(), MultiMatchQueryBuilder.Type.values());\n+ final MultiMatchQueryBuilder multiMatchQueryBuilder = multiMatchQuery(\"the quick brown fox\", \"field1\", \"field2\").type(matchQueryType);\n+ String type = rarely() ? null : RandomPicks.randomFrom(getRandom(),highlighterTypes);\n+ SearchSourceBuilder source = searchSource()\n+ .query(multiMatchQueryBuilder)\n+ .highlight(highlight().highlightQuery(randomBoolean() ? 
multiMatchQueryBuilder : null).highlighterType(type)\n+ .field(new Field(\"field1\").requireFieldMatch(true).preTags(\"<field1>\").postTags(\"</field1>\")));\n+ logger.info(\"Running multi-match type: [\" + matchQueryType + \"] highlight with type: [\" + type + \"]\");\n+ SearchResponse searchResponse = client().search(searchRequest(\"test\").source(source)).actionGet();\n+ assertHitCount(searchResponse, 1l);\n+ assertHighlight(searchResponse, 0, \"field1\", 0, anyOf(equalTo(\"<field1>The quick brown fox</field1> jumps over\"),\n+ equalTo(\"<field1>The</field1> <field1>quick</field1> <field1>brown</field1> <field1>fox</field1> jumps over\")));\n+ }\n+ }\n+\n @Test\n public void testPostingsHighlighterOrderByScore() throws Exception {\n assertAcked(client().admin().indices().prepareCreate(\"test\").addMapping(\"type1\", type1PostingsffsetsMapping()));\n@@ -2680,7 +2714,7 @@ private void phraseBoostTestCase(String highlighterType) {\n queryString(\"\\\"highlight words together\\\"\").field(\"field1^100\").autoGeneratePhraseQueries(true));\n }\n \n- private <P extends QueryBuilder & BoostableQueryBuilder> void\n+ private <P extends QueryBuilder & BoostableQueryBuilder<?>> void\n phraseBoostTestCaseForClauses(String highlighterType, float boost, QueryBuilder terms, P phrase) {\n Matcher<String> highlightedMatcher = Matchers.<String>either(containsString(\"<em>highlight words together</em>\")).or(\n containsString(\"<em>highlight</em> <em>words</em> <em>together</em>\"));",
"filename": "src/test/java/org/elasticsearch/search/highlight/HighlighterSearchTests.java",
"status": "modified"
}
]
} |
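A small usage sketch of the new override, assuming an Elasticsearch 1.x jar (which bundles `BlendedTermQuery`) and Lucene 4.x on the classpath, where highlighters still call `Query#extractTerms` directly. Before this fix the `extractTerms` call below threw `UnsupportedOperationException`; the `dismaxBlendedQuery(Term[], float)` factory is the one exercised in the test diff above.

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.BlendedTermQuery;

import java.util.HashSet;
import java.util.Set;

public class ExtractTermsDemo {
    public static void main(String[] args) {
        Term[] terms = {new Term("field1", "fox"), new Term("field2", "fox")};
        BlendedTermQuery query = BlendedTermQuery.dismaxBlendedQuery(terms, 0.1f);

        // Highlighters collect the query's terms this way to know what to mark up.
        Set<Term> extracted = new HashSet<>();
        query.extractTerms(extracted);
        System.out.println(extracted); // both terms, e.g. [field1:fox, field2:fox]
    }
}
```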
{
"body": "The delete snapshot operation on a running snapshot should cancel the snapshot execution. However, it interrupts the snapshot only when currently running snapshot files are completely copied, which might take a long time for large files. \n",
"comments": [],
"number": 5242,
"title": "The delete snapshot operation on a running snapshot may take a long time on large shards"
} | {
"body": "Closes #5242\n",
"number": 5244,
"review_comments": [
{
"body": "You can extend FilterInputStream instead of InputStream so you don't have to override methods that just delegate to the original stream.\n",
"created_at": "2014-02-25T16:25:34Z"
},
{
"body": "Makes sense. Fixing it.\n",
"created_at": "2014-02-26T01:22:28Z"
}
],
"title": "Improve speed of running snapshot cancelation"
} | {
"commits": [
{
"message": "Improve speed of running snapshot cancelation\n\nThe delete snapshot operation on a running snapshot should cancel the snapshot execution. However, it interrupts the snapshot only when currently running snapshot files are completely copied, which might take a long time for large files.\n\nCloses #5242"
}
],
"files": [
{
"diff": "@@ -47,6 +47,7 @@\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.repositories.RepositoryName;\n \n+import java.io.FilterInputStream;\n import java.io.IOException;\n import java.io.InputStream;\n import java.util.Collections;\n@@ -500,6 +501,7 @@ private void snapshotFile(final BlobStoreIndexShardSnapshot.FileInfo fileInfo, f\n } else {\n inputStream = inputStreamIndexInput;\n }\n+ inputStream = new AbortableInputStream(inputStream, fileInfo.physicalName());\n blobContainer.writeBlob(fileInfo.partName(i), inputStream, size, new ImmutableBlobContainer.WriterListener() {\n @Override\n public void onCompleted() {\n@@ -554,6 +556,33 @@ private boolean snapshotFileExistsInBlobs(BlobStoreIndexShardSnapshot.FileInfo f\n return false;\n }\n \n+ private class AbortableInputStream extends FilterInputStream {\n+ private final String fileName;\n+\n+ public AbortableInputStream(InputStream delegate, String fileName) {\n+ super(delegate);\n+ this.fileName = fileName;\n+ }\n+\n+ @Override\n+ public int read() throws IOException {\n+ checkAborted();\n+ return in.read();\n+ }\n+\n+ @Override\n+ public int read(byte[] b, int off, int len) throws IOException {\n+ checkAborted();\n+ return in.read(b, off, len);\n+ }\n+\n+ private void checkAborted() {\n+ if (snapshotStatus.aborted()) {\n+ logger.debug(\"[{}] [{}] Aborted on the file [{}], exiting\", shardId, snapshotId, fileName);\n+ throw new IndexShardSnapshotFailedException(shardId, \"Aborted\");\n+ }\n+ }\n+ }\n }\n \n /**",
"filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java",
"status": "modified"
}
]
} |
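The same pattern in a generic, self-contained form: every read re-checks an abort flag, so a cancelled snapshot stops in the middle of a large file rather than after it. An `AtomicBoolean` stands in for `snapshotStatus.aborted()` and a plain `IOException` for `IndexShardSnapshotFailedException`; the class extends `FilterInputStream`, as the review comment suggests, so only the two `read` variants need overriding.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

public class AbortableStreamSketch {

    static final class AbortableInputStream extends FilterInputStream {
        private final AtomicBoolean aborted;

        AbortableInputStream(InputStream in, AtomicBoolean aborted) {
            super(in);
            this.aborted = aborted;
        }

        @Override
        public int read() throws IOException {
            checkAborted();
            return in.read();
        }

        @Override
        public int read(byte[] b, int off, int len) throws IOException {
            checkAborted();
            return in.read(b, off, len);
        }

        private void checkAborted() throws IOException {
            if (aborted.get()) {
                throw new IOException("snapshot aborted");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        AtomicBoolean aborted = new AtomicBoolean(false);
        InputStream in = new AbortableInputStream(new ByteArrayInputStream(new byte[1 << 20]), aborted);
        byte[] buffer = new byte[8192];
        in.read(buffer);     // first chunk copies normally
        aborted.set(true);   // abort requested while the "upload" is in flight
        try {
            in.read(buffer); // next chunk fails fast instead of finishing the whole file
        } catch (IOException e) {
            System.out.println("stopped: " + e.getMessage());
        }
    }
}
```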
{
"body": "The current implementation of the GetFieldMapping API uses information from the index service which is only available on a node if that node actively hosts a shard from that index. If the information is missing the call will act as if the type/field was not found and will not return information for it.\n\nDuring a rolling upgrade from <= 0.90.11 or 1.0.0 the get field mapping api might fail. This has to do with the way this issue has been fixed. The way how internally the request got handled has changed in order completely to fix it properly at the cost that during a rolling upgrade this api may fail.\n",
"comments": [
{
"body": "any news on this bug? we are facing the same error by upgrading ES from .90 to 1.0.\n\nRegards\n",
"created_at": "2014-02-20T16:23:17Z"
}
],
"number": 5177,
"title": "GetFieldMappings API will not return field mapping if the index is not hosted on the node executing it"
} | {
"body": "PR for #5177\n",
"number": 5225,
"review_comments": [
{
"body": "I think we should always write / read a default and remove the line entirely in master. So master doesn't know about this at all and `0.90 / 1.x / 1.0` will just write / read dummy values and ignore the actual value. \n",
"created_at": "2014-02-24T13:32:13Z"
}
],
"title": "Make sure get field mapping request is executed on node hosting the index"
} | {
"commits": [
{
"message": "Change GetFieldMapping API to broadcast requests to nodes hosting the relevant indices.\n\nThis is due to the fact that have to have mappers in order to return the response.\n\nCloses #5177"
},
{
"message": "[TEST] Added get field mapping test variant that tests with a cluster that has a master only node."
},
{
"message": "[TEST] Fix rest get field mapping test with missing type"
},
{
"message": "Return empty response directly when no indices exist. (otherwise it would hang forever)"
},
{
"message": "Added TransportGetFieldMappingsIndexAction that uses TransportSingleCustomOperationAction as base class, with the goal to reuse common logic (like: retry on failures, shard picking, connecting to nodes)"
}
],
"files": [
{
"diff": "@@ -84,10 +84,7 @@\n import org.elasticsearch.action.admin.indices.gateway.snapshot.TransportGatewaySnapshotAction;\n import org.elasticsearch.action.admin.indices.mapping.delete.DeleteMappingAction;\n import org.elasticsearch.action.admin.indices.mapping.delete.TransportDeleteMappingAction;\n-import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsAction;\n-import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsAction;\n-import org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction;\n-import org.elasticsearch.action.admin.indices.mapping.get.TransportGetMappingsAction;\n+import org.elasticsearch.action.admin.indices.mapping.get.*;\n import org.elasticsearch.action.admin.indices.mapping.put.PutMappingAction;\n import org.elasticsearch.action.admin.indices.mapping.put.TransportPutMappingAction;\n import org.elasticsearch.action.admin.indices.open.OpenIndexAction;\n@@ -228,7 +225,7 @@ protected void configure() {\n registerAction(IndicesExistsAction.INSTANCE, TransportIndicesExistsAction.class);\n registerAction(TypesExistsAction.INSTANCE, TransportTypesExistsAction.class);\n registerAction(GetMappingsAction.INSTANCE, TransportGetMappingsAction.class);\n- registerAction(GetFieldMappingsAction.INSTANCE, TransportGetFieldMappingsAction.class);\n+ registerAction(GetFieldMappingsAction.INSTANCE, TransportGetFieldMappingsAction.class, TransportGetFieldMappingsIndexAction.class);\n registerAction(PutMappingAction.INSTANCE, TransportPutMappingAction.class);\n registerAction(DeleteMappingAction.INSTANCE, TransportDeleteMappingAction.class);\n registerAction(IndicesAliasesAction.INSTANCE, TransportIndicesAliasesAction.class);",
"filename": "src/main/java/org/elasticsearch/action/ActionModule.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,101 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.mapping.get;\n+\n+import org.elasticsearch.action.ActionRequestValidationException;\n+import org.elasticsearch.action.support.single.custom.SingleCustomOperationRequest;\n+import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.io.stream.StreamInput;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n+\n+import java.io.IOException;\n+\n+class GetFieldMappingsIndexRequest extends SingleCustomOperationRequest<GetFieldMappingsIndexRequest> {\n+\n+ private String index;\n+\n+ private boolean probablySingleFieldRequest;\n+ private boolean includeDefaults;\n+ private String[] fields = Strings.EMPTY_ARRAY;\n+ private String[] types = Strings.EMPTY_ARRAY;\n+\n+ GetFieldMappingsIndexRequest() {\n+ }\n+\n+ GetFieldMappingsIndexRequest(GetFieldMappingsRequest other, String index, boolean probablySingleFieldRequest) {\n+ this.preferLocal(other.local);\n+ this.probablySingleFieldRequest = probablySingleFieldRequest;\n+ this.includeDefaults = other.includeDefaults();\n+ this.types = other.types();\n+ this.fields = other.fields();\n+ this.index = index;\n+ }\n+\n+ public String index() {\n+ return index;\n+ }\n+\n+ public String[] types() {\n+ return types;\n+ }\n+\n+ public String[] fields() {\n+ return fields;\n+ }\n+\n+ public boolean probablySingleFieldRequest() {\n+ return probablySingleFieldRequest;\n+ }\n+\n+ public boolean includeDefaults() {\n+ return includeDefaults;\n+ }\n+\n+ /** Indicates whether default mapping settings should be returned */\n+ public GetFieldMappingsIndexRequest includeDefaults(boolean includeDefaults) {\n+ this.includeDefaults = includeDefaults;\n+ return this;\n+ }\n+\n+ @Override\n+ public ActionRequestValidationException validate() {\n+ return null;\n+ }\n+\n+ @Override\n+ public void writeTo(StreamOutput out) throws IOException {\n+ super.writeTo(out);\n+ out.writeString(index);\n+ out.writeStringArray(types);\n+ out.writeStringArray(fields);\n+ out.writeBoolean(includeDefaults);\n+ out.writeBoolean(probablySingleFieldRequest);\n+ }\n+\n+ @Override\n+ public void readFrom(StreamInput in) throws IOException {\n+ super.readFrom(in);\n+ index = in.readString();\n+ types = in.readStringArray();\n+ fields = in.readStringArray();\n+ includeDefaults = in.readBoolean();\n+ probablySingleFieldRequest = in.readBoolean();\n+ }\n+}",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsIndexRequest.java",
"status": "added"
},
{
"diff": "@@ -19,21 +19,85 @@\n \n package org.elasticsearch.action.admin.indices.mapping.get;\n \n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionRequestValidationException;\n-import org.elasticsearch.action.support.master.info.ClusterInfoRequest;\n+import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.action.support.master.MasterNodeOperationRequest;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.unit.TimeValue;\n \n import java.io.IOException;\n \n /** Request the mappings of specific fields */\n-public class GetFieldMappingsRequest extends ClusterInfoRequest<GetFieldMappingsRequest> {\n+public class GetFieldMappingsRequest extends ActionRequest<GetFieldMappingsRequest> {\n+\n+ protected boolean local = false;\n \n private String[] fields = Strings.EMPTY_ARRAY;\n \n private boolean includeDefaults = false;\n \n+ private String[] indices = Strings.EMPTY_ARRAY;\n+ private String[] types = Strings.EMPTY_ARRAY;\n+\n+ private IndicesOptions indicesOptions = IndicesOptions.strict();\n+\n+ public GetFieldMappingsRequest() {\n+\n+ }\n+\n+ public GetFieldMappingsRequest(GetFieldMappingsRequest other) {\n+ this.local = other.local;\n+ this.includeDefaults = other.includeDefaults;\n+ this.indices = other.indices;\n+ this.types = other.types;\n+ this.indicesOptions = other.indicesOptions;\n+ this.fields = other.fields;\n+ }\n+\n+ /**\n+ * Indicate whether the receiving node should operate based on local index information or forward requests,\n+ * where needed, to other nodes. If running locally, request will not raise errors if running locally & missing indices.\n+ */\n+ public GetFieldMappingsRequest local(boolean local) {\n+ this.local = local;\n+ return this;\n+ }\n+\n+ public boolean local() {\n+ return local;\n+ }\n+\n+ public GetFieldMappingsRequest indices(String... indices) {\n+ this.indices = indices;\n+ return this;\n+ }\n+\n+ public GetFieldMappingsRequest types(String... types) {\n+ this.types = types;\n+ return this;\n+ }\n+\n+ public GetFieldMappingsRequest indicesOptions(IndicesOptions indicesOptions) {\n+ this.indicesOptions = indicesOptions;\n+ return this;\n+ }\n+\n+ public String[] indices() {\n+ return indices;\n+ }\n+\n+ public String[] types() {\n+ return types;\n+ }\n+\n+ public IndicesOptions indicesOptions() {\n+ return indicesOptions;\n+ }\n+\n /** @param fields a list of fields to retrieve the mapping for */\n public GetFieldMappingsRequest fields(String... 
fields) {\n this.fields = fields;\n@@ -62,13 +126,29 @@ public ActionRequestValidationException validate() {\n @Override\n public void writeTo(StreamOutput out) throws IOException {\n super.writeTo(out);\n+ if (out.getVersion().onOrBefore(Version.V_1_0_0)) {\n+ // This request used to inherit from MasterNodeOperationRequest\n+ MasterNodeOperationRequest.DEFAULT_MASTER_NODE_TIMEOUT.writeTo(out);\n+ }\n+ out.writeStringArray(indices);\n+ out.writeStringArray(types);\n+ indicesOptions.writeIndicesOptions(out);\n+ out.writeBoolean(local);\n out.writeStringArray(fields);\n out.writeBoolean(includeDefaults);\n }\n \n @Override\n public void readFrom(StreamInput in) throws IOException {\n super.readFrom(in);\n+ if (in.getVersion().onOrBefore(Version.V_1_0_0)) {\n+ // This request used to inherit from MasterNodeOperationRequest\n+ TimeValue.readTimeValue(in);\n+ }\n+ indices = in.readStringArray();\n+ types = in.readStringArray();\n+ indicesOptions = IndicesOptions.readIndicesOptions(in);\n+ local = in.readBoolean();\n fields = in.readStringArray();\n includeDefaults = in.readBoolean();\n }",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsRequest.java",
"status": "modified"
},
{
"diff": "@@ -19,18 +19,45 @@\n \n package org.elasticsearch.action.admin.indices.mapping.get;\n \n+import com.google.common.collect.ObjectArrays;\n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.support.master.info.ClusterInfoRequestBuilder;\n+import org.elasticsearch.action.ActionRequestBuilder;\n+import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.IndicesAdminClient;\n import org.elasticsearch.client.internal.InternalGenericClient;\n \n /** A helper class to build {@link GetFieldMappingsRequest} objects */\n-public class GetFieldMappingsRequestBuilder extends ClusterInfoRequestBuilder<GetFieldMappingsRequest, GetFieldMappingsResponse, GetFieldMappingsRequestBuilder> {\n+public class GetFieldMappingsRequestBuilder extends ActionRequestBuilder<GetFieldMappingsRequest, GetFieldMappingsResponse, GetFieldMappingsRequestBuilder> {\n \n public GetFieldMappingsRequestBuilder(InternalGenericClient client, String... indices) {\n super(client, new GetFieldMappingsRequest().indices(indices));\n }\n \n+ public GetFieldMappingsRequestBuilder setIndices(String... indices) {\n+ request.indices(indices);\n+ return this;\n+ }\n+\n+ public GetFieldMappingsRequestBuilder addIndices(String... indices) {\n+ request.indices(ObjectArrays.concat(request.indices(), indices, String.class));\n+ return this;\n+ }\n+\n+ public GetFieldMappingsRequestBuilder setTypes(String... types) {\n+ request.types(types);\n+ return this;\n+ }\n+\n+ public GetFieldMappingsRequestBuilder addTypes(String... types) {\n+ request.types(ObjectArrays.concat(request.types(), types, String.class));\n+ return this;\n+ }\n+\n+ public GetFieldMappingsRequestBuilder setIndicesOptions(IndicesOptions indicesOptions) {\n+ request.indicesOptions(indicesOptions);\n+ return this;\n+ }\n+\n \n /** Sets the fields to retrieve. */\n public GetFieldMappingsRequestBuilder setFields(String... fields) {",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsRequestBuilder.java",
"status": "modified"
},
{
"diff": "@@ -19,225 +19,124 @@\n \n package org.elasticsearch.action.admin.indices.mapping.get;\n \n-import com.google.common.base.Predicate;\n-import com.google.common.collect.Collections2;\n-import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n-import com.google.common.collect.Sets;\n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.ActionListener;\n-import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse.FieldMappingMetaData;\n-import org.elasticsearch.action.support.master.info.TransportClusterInfoAction;\n+import org.elasticsearch.action.support.TransportAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.xcontent.ToXContent;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n-import org.elasticsearch.common.xcontent.XContentFactory;\n-import org.elasticsearch.common.xcontent.XContentType;\n-import org.elasticsearch.index.mapper.DocumentMapper;\n-import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.service.IndexService;\n-import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.BaseTransportRequestHandler;\n+import org.elasticsearch.transport.TransportChannel;\n import org.elasticsearch.transport.TransportService;\n \n-import java.io.IOException;\n-import java.util.Collection;\n+import java.util.concurrent.atomic.AtomicInteger;\n+import java.util.concurrent.atomic.AtomicReferenceArray;\n \n /**\n */\n-public class TransportGetFieldMappingsAction extends TransportClusterInfoAction<GetFieldMappingsRequest, GetFieldMappingsResponse> {\n+public class TransportGetFieldMappingsAction extends TransportAction<GetFieldMappingsRequest, GetFieldMappingsResponse> {\n \n- private final IndicesService indicesService;\n+ private final ClusterService clusterService;\n+ private final TransportGetFieldMappingsIndexAction shardAction;\n+ private final String transportAction;\n \n @Inject\n- public TransportGetFieldMappingsAction(Settings settings, TransportService transportService, ClusterService clusterService,\n- IndicesService indicesService, ThreadPool threadPool) {\n- super(settings, transportService, clusterService, threadPool);\n- this.indicesService = indicesService;\n+ public TransportGetFieldMappingsAction(Settings settings, TransportService transportService, ClusterService clusterService, ThreadPool threadPool, TransportGetFieldMappingsIndexAction shardAction) {\n+ super(settings, threadPool);\n+ this.clusterService = clusterService;\n+ this.shardAction = shardAction;\n+ this.transportAction = GetFieldMappingsAction.NAME;\n+ transportService.registerHandler(transportAction, new TransportHandler());\n }\n \n @Override\n- protected String transportAction() {\n- return GetFieldMappingsAction.NAME;\n- }\n-\n- @Override\n- protected GetFieldMappingsRequest newRequest() {\n- return new GetFieldMappingsRequest();\n- }\n-\n- @Override\n- protected GetFieldMappingsResponse newResponse() {\n- return new GetFieldMappingsResponse();\n- }\n-\n- @Override\n- protected void doMasterOperation(final GetFieldMappingsRequest request, final ClusterState state, final 
ActionListener<GetFieldMappingsResponse> listener) throws ElasticsearchException {\n-\n- listener.onResponse(new GetFieldMappingsResponse(findMappings(request.indices(), request.types(), request.fields(), request.includeDefaults())));\n- }\n-\n- private ImmutableMap<String, ImmutableMap<String, ImmutableMap<String, FieldMappingMetaData>>> findMappings(String[] concreteIndices,\n- final String[] types,\n- final String[] fields,\n- boolean includeDefaults) {\n- assert types != null;\n- assert concreteIndices != null;\n- if (concreteIndices.length == 0) {\n- return ImmutableMap.of();\n- }\n-\n- boolean isProbablySingleFieldRequest = concreteIndices.length == 1 && types.length == 1 && fields.length == 1;\n- ImmutableMap.Builder<String, ImmutableMap<String, ImmutableMap<String, FieldMappingMetaData>>> indexMapBuilder = ImmutableMap.builder();\n- Sets.SetView<String> intersection = Sets.intersection(Sets.newHashSet(concreteIndices), indicesService.indices());\n- for (String index : intersection) {\n- IndexService indexService = indicesService.indexService(index);\n- Collection<String> typeIntersection;\n- if (types.length == 0) {\n- typeIntersection = indexService.mapperService().types();\n-\n- } else {\n- typeIntersection = Collections2.filter(indexService.mapperService().types(), new Predicate<String>() {\n-\n+ protected void doExecute(GetFieldMappingsRequest request, final ActionListener<GetFieldMappingsResponse> listener) {\n+ ClusterState clusterState = clusterService.state();\n+ String[] concreteIndices = clusterState.metaData().concreteIndices(request.indices(), request.indicesOptions());\n+ final AtomicInteger indexCounter = new AtomicInteger();\n+ final AtomicInteger completionCounter = new AtomicInteger(concreteIndices.length);\n+ final AtomicReferenceArray<Object> indexResponses = new AtomicReferenceArray<Object>(concreteIndices.length);\n+\n+ if (concreteIndices == null || concreteIndices.length == 0) {\n+ listener.onResponse(new GetFieldMappingsResponse());\n+ } else {\n+ boolean probablySingleFieldRequest = concreteIndices.length == 1 && request.types().length == 1 && request.fields().length == 1;\n+ for (final String index : concreteIndices) {\n+ GetFieldMappingsIndexRequest shardRequest = new GetFieldMappingsIndexRequest(request, index, probablySingleFieldRequest);\n+ // no threading needed, all is done on the index replication one\n+ shardRequest.listenerThreaded(false);\n+ shardAction.execute(shardRequest, new ActionListener<GetFieldMappingsResponse>() {\n @Override\n- public boolean apply(String type) {\n- return Regex.simpleMatch(types, type);\n+ public void onResponse(GetFieldMappingsResponse result) {\n+ indexResponses.set(indexCounter.getAndIncrement(), result);\n+ if (completionCounter.decrementAndGet() == 0) {\n+ listener.onResponse(merge(indexResponses));\n+ }\n }\n \n+ @Override\n+ public void onFailure(Throwable e) {\n+ int index = indexCounter.getAndIncrement();\n+ indexResponses.set(index, e);\n+ if (completionCounter.decrementAndGet() == 0) {\n+ listener.onResponse(merge(indexResponses));\n+ }\n+ }\n });\n }\n+ }\n+ }\n \n- MapBuilder<String, ImmutableMap<String, FieldMappingMetaData>> typeMappings = new MapBuilder<String, ImmutableMap<String, FieldMappingMetaData>>();\n- for (String type : typeIntersection) {\n- DocumentMapper documentMapper = indexService.mapperService().documentMapper(type);\n- ImmutableMap<String, FieldMappingMetaData> fieldMapping = findFieldMappingsByType(documentMapper, fields, includeDefaults, isProbablySingleFieldRequest);\n- if 
(!fieldMapping.isEmpty()) {\n- typeMappings.put(type, fieldMapping);\n- }\n- }\n-\n- if (!typeMappings.isEmpty()) {\n- indexMapBuilder.put(index, typeMappings.immutableMap());\n+ private GetFieldMappingsResponse merge(AtomicReferenceArray<Object> indexResponses) {\n+ MapBuilder<String, ImmutableMap<String, ImmutableMap<String, GetFieldMappingsResponse.FieldMappingMetaData>>> mergedResponses = MapBuilder.newMapBuilder();\n+ for (int i = 0; i < indexResponses.length(); i++) {\n+ Object element = indexResponses.get(i);\n+ if (element instanceof GetFieldMappingsResponse) {\n+ GetFieldMappingsResponse response = (GetFieldMappingsResponse) element;\n+ mergedResponses.putAll(response.mappings());\n }\n }\n-\n- return indexMapBuilder.build();\n+ return new GetFieldMappingsResponse(mergedResponses.immutableMap());\n }\n \n- private static final ToXContent.Params includeDefaultsParams = new ToXContent.Params() {\n-\n- final static String INCLUDE_DEFAULTS = \"include_defaults\";\n+ private class TransportHandler extends BaseTransportRequestHandler<GetFieldMappingsRequest> {\n \n @Override\n- public String param(String key) {\n- if (INCLUDE_DEFAULTS.equals(key)) {\n- return \"true\";\n- }\n- return null;\n+ public GetFieldMappingsRequest newInstance() {\n+ return new GetFieldMappingsRequest();\n }\n \n @Override\n- public String param(String key, String defaultValue) {\n- if (INCLUDE_DEFAULTS.equals(key)) {\n- return \"true\";\n- }\n- return defaultValue;\n+ public String executor() {\n+ return ThreadPool.Names.SAME;\n }\n \n @Override\n- public boolean paramAsBoolean(String key, boolean defaultValue) {\n- if (INCLUDE_DEFAULTS.equals(key)) {\n- return true;\n- }\n- return defaultValue;\n- }\n-\n- public Boolean paramAsBoolean(String key, Boolean defaultValue) {\n- if (INCLUDE_DEFAULTS.equals(key)) {\n- return true;\n- }\n- return defaultValue;\n- }\n-\n- @Override @Deprecated\n- public Boolean paramAsBooleanOptional(String key, Boolean defaultValue) {\n- return paramAsBoolean(key, defaultValue);\n- }\n- };\n-\n- private ImmutableMap<String, FieldMappingMetaData> findFieldMappingsByType(DocumentMapper documentMapper, String[] fields,\n- boolean includeDefaults, boolean isProbablySingleFieldRequest) throws ElasticsearchException {\n- MapBuilder<String, FieldMappingMetaData> fieldMappings = new MapBuilder<String, FieldMappingMetaData>();\n- ImmutableList<FieldMapper> allFieldMappers = documentMapper.mappers().mappers();\n- for (String field : fields) {\n- if (Regex.isMatchAllPattern(field)) {\n- for (FieldMapper fieldMapper : allFieldMappers) {\n- addFieldMapper(fieldMapper.names().fullName(), fieldMapper, fieldMappings, includeDefaults);\n- }\n- } else if (Regex.isSimpleMatchPattern(field)) {\n- // go through the field mappers 3 times, to make sure we give preference to the resolve order: full name, index name, name.\n- // also make sure we only store each mapper once.\n- boolean[] resolved = new boolean[allFieldMappers.size()];\n- for (int i = 0; i < allFieldMappers.size(); i++) {\n- FieldMapper fieldMapper = allFieldMappers.get(i);\n- if (Regex.simpleMatch(field, fieldMapper.names().fullName())) {\n- addFieldMapper(fieldMapper.names().fullName(), fieldMapper, fieldMappings, includeDefaults);\n- resolved[i] = true;\n- }\n- }\n- for (int i = 0; i < allFieldMappers.size(); i++) {\n- if (resolved[i]) {\n- continue;\n- }\n- FieldMapper fieldMapper = allFieldMappers.get(i);\n- if (Regex.simpleMatch(field, fieldMapper.names().indexName())) {\n- addFieldMapper(fieldMapper.names().indexName(), fieldMapper, 
fieldMappings, includeDefaults);\n- resolved[i] = true;\n- }\n- }\n- for (int i = 0; i < allFieldMappers.size(); i++) {\n- if (resolved[i]) {\n- continue;\n- }\n- FieldMapper fieldMapper = allFieldMappers.get(i);\n- if (Regex.simpleMatch(field, fieldMapper.names().name())) {\n- addFieldMapper(fieldMapper.names().name(), fieldMapper, fieldMappings, includeDefaults);\n- resolved[i] = true;\n+ public void messageReceived(final GetFieldMappingsRequest request, final TransportChannel channel) throws Exception {\n+ // no need for a threaded listener, since we just send a response\n+ request.listenerThreaded(false);\n+ execute(request, new ActionListener<GetFieldMappingsResponse>() {\n+ @Override\n+ public void onResponse(GetFieldMappingsResponse result) {\n+ try {\n+ channel.sendResponse(result);\n+ } catch (Throwable e) {\n+ onFailure(e);\n }\n }\n \n- } else {\n- // not a pattern\n- FieldMapper fieldMapper = documentMapper.mappers().smartNameFieldMapper(field);\n- if (fieldMapper != null) {\n- addFieldMapper(field, fieldMapper, fieldMappings, includeDefaults);\n- } else if (isProbablySingleFieldRequest) {\n- fieldMappings.put(field, FieldMappingMetaData.NULL);\n+ @Override\n+ public void onFailure(Throwable e) {\n+ try {\n+ channel.sendResponse(e);\n+ } catch (Exception e1) {\n+ logger.warn(\"Failed to send error response for action [\" + transportAction + \"] and request [\" + request + \"]\", e1);\n+ }\n }\n- }\n+ });\n }\n- return fieldMappings.immutableMap();\n }\n-\n- private void addFieldMapper(String field, FieldMapper fieldMapper, MapBuilder<String, FieldMappingMetaData> fieldMappings, boolean includeDefaults) {\n- if (fieldMappings.containsKey(field)) {\n- return;\n- }\n- try {\n- XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);\n- builder.startObject();\n- fieldMapper.toXContent(builder, includeDefaults ? includeDefaultsParams : ToXContent.EMPTY_PARAMS);\n- builder.endObject();\n- fieldMappings.put(field, new FieldMappingMetaData(fieldMapper.names().fullName(), builder.bytes()));\n- } catch (IOException e) {\n- throw new ElasticsearchException(\"failed to serialize XContent of field [\" + field + \"]\", e);\n- }\n- }\n-\n-\n-}\n\\ No newline at end of file\n+}",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsAction.java",
"status": "modified"
},
{
"diff": "@@ -0,0 +1,250 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.indices.mapping.get;\n+\n+import com.google.common.base.Predicate;\n+import com.google.common.collect.Collections2;\n+import com.google.common.collect.ImmutableList;\n+import com.google.common.collect.ImmutableMap;\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse.FieldMappingMetaData;\n+import org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n+import org.elasticsearch.cluster.routing.ShardsIterator;\n+import org.elasticsearch.common.collect.MapBuilder;\n+import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.regex.Regex;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n+import org.elasticsearch.index.mapper.FieldMapper;\n+import org.elasticsearch.index.service.IndexService;\n+import org.elasticsearch.indices.IndicesService;\n+import org.elasticsearch.indices.TypeMissingException;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.elasticsearch.transport.TransportService;\n+\n+import java.io.IOException;\n+import java.util.Collection;\n+\n+/**\n+ */\n+public class TransportGetFieldMappingsIndexAction extends TransportSingleCustomOperationAction<GetFieldMappingsIndexRequest, GetFieldMappingsResponse> {\n+\n+ protected final ClusterService clusterService;\n+ private final IndicesService indicesService;\n+\n+ @Inject\n+ public TransportGetFieldMappingsIndexAction(Settings settings, ClusterService clusterService,\n+ TransportService transportService,\n+ IndicesService indicesService,\n+ ThreadPool threadPool) {\n+ super(settings, threadPool, clusterService, transportService);\n+ this.clusterService = clusterService;\n+ this.indicesService = indicesService;\n+ }\n+\n+ @Override\n+ protected String transportAction() {\n+ return GetFieldMappingsAction.NAME + \"/shard\";\n+ }\n+\n+ @Override\n+ protected String executor() {\n+ return ThreadPool.Names.MANAGEMENT;\n+ }\n+\n+ @Override\n+ protected ShardsIterator shards(ClusterState state, GetFieldMappingsIndexRequest request) {\n+ // Will balance requests between 
shards\n+ return state.routingTable().index(request.index()).randomAllActiveShardsIt();\n+ }\n+\n+ @Override\n+ protected GetFieldMappingsResponse shardOperation(final GetFieldMappingsIndexRequest request, int shardId) throws ElasticsearchException {\n+ IndexService indexService = indicesService.indexServiceSafe(request.index());\n+ Collection<String> typeIntersection;\n+ if (request.types().length == 0) {\n+ typeIntersection = indexService.mapperService().types();\n+\n+ } else {\n+ typeIntersection = Collections2.filter(indexService.mapperService().types(), new Predicate<String>() {\n+\n+ @Override\n+ public boolean apply(String type) {\n+ return Regex.simpleMatch(request.types(), type);\n+ }\n+\n+ });\n+ if (typeIntersection.isEmpty()) {\n+ throw new TypeMissingException(new Index(request.index()), request.types());\n+ }\n+ }\n+\n+ MapBuilder<String, ImmutableMap<String, FieldMappingMetaData>> typeMappings = new MapBuilder<String, ImmutableMap<String, FieldMappingMetaData>>();\n+ for (String type : typeIntersection) {\n+ DocumentMapper documentMapper = indexService.mapperService().documentMapper(type);\n+ ImmutableMap<String, FieldMappingMetaData> fieldMapping = findFieldMappingsByType(documentMapper, request);\n+ if (!fieldMapping.isEmpty()) {\n+ typeMappings.put(type, fieldMapping);\n+ }\n+ }\n+\n+ return new GetFieldMappingsResponse(ImmutableMap.of(request.index(), typeMappings.immutableMap()));\n+ }\n+\n+ @Override\n+ protected GetFieldMappingsIndexRequest newRequest() {\n+ return new GetFieldMappingsIndexRequest();\n+ }\n+\n+ @Override\n+ protected GetFieldMappingsResponse newResponse() {\n+ return new GetFieldMappingsResponse();\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkGlobalBlock(ClusterState state, GetFieldMappingsIndexRequest request) {\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.READ);\n+ }\n+\n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, GetFieldMappingsIndexRequest request) {\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.READ, request.index());\n+ }\n+\n+ private static final ToXContent.Params includeDefaultsParams = new ToXContent.Params() {\n+\n+ final static String INCLUDE_DEFAULTS = \"include_defaults\";\n+\n+ @Override\n+ public String param(String key) {\n+ if (INCLUDE_DEFAULTS.equals(key)) {\n+ return \"true\";\n+ }\n+ return null;\n+ }\n+\n+ @Override\n+ public String param(String key, String defaultValue) {\n+ if (INCLUDE_DEFAULTS.equals(key)) {\n+ return \"true\";\n+ }\n+ return defaultValue;\n+ }\n+\n+ @Override\n+ public boolean paramAsBoolean(String key, boolean defaultValue) {\n+ if (INCLUDE_DEFAULTS.equals(key)) {\n+ return true;\n+ }\n+ return defaultValue;\n+ }\n+\n+ public Boolean paramAsBoolean(String key, Boolean defaultValue) {\n+ if (INCLUDE_DEFAULTS.equals(key)) {\n+ return true;\n+ }\n+ return defaultValue;\n+ }\n+\n+ @Override\n+ @Deprecated\n+ public Boolean paramAsBooleanOptional(String key, Boolean defaultValue) {\n+ return paramAsBoolean(key, defaultValue);\n+ }\n+ };\n+\n+ private ImmutableMap<String, FieldMappingMetaData> findFieldMappingsByType(DocumentMapper documentMapper, GetFieldMappingsIndexRequest request) throws ElasticsearchException {\n+ MapBuilder<String, FieldMappingMetaData> fieldMappings = new MapBuilder<String, FieldMappingMetaData>();\n+ ImmutableList<FieldMapper> allFieldMappers = documentMapper.mappers().mappers();\n+ for (String field : request.fields()) {\n+ if (Regex.isMatchAllPattern(field)) {\n+ for (FieldMapper fieldMapper 
: allFieldMappers) {\n+ addFieldMapper(fieldMapper.names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());\n+ }\n+ } else if (Regex.isSimpleMatchPattern(field)) {\n+ // go through the field mappers 3 times, to make sure we give preference to the resolve order: full name, index name, name.\n+ // also make sure we only store each mapper once.\n+ boolean[] resolved = new boolean[allFieldMappers.size()];\n+ for (int i = 0; i < allFieldMappers.size(); i++) {\n+ FieldMapper fieldMapper = allFieldMappers.get(i);\n+ if (Regex.simpleMatch(field, fieldMapper.names().fullName())) {\n+ addFieldMapper(fieldMapper.names().fullName(), fieldMapper, fieldMappings, request.includeDefaults());\n+ resolved[i] = true;\n+ }\n+ }\n+ for (int i = 0; i < allFieldMappers.size(); i++) {\n+ if (resolved[i]) {\n+ continue;\n+ }\n+ FieldMapper fieldMapper = allFieldMappers.get(i);\n+ if (Regex.simpleMatch(field, fieldMapper.names().indexName())) {\n+ addFieldMapper(fieldMapper.names().indexName(), fieldMapper, fieldMappings, request.includeDefaults());\n+ resolved[i] = true;\n+ }\n+ }\n+ for (int i = 0; i < allFieldMappers.size(); i++) {\n+ if (resolved[i]) {\n+ continue;\n+ }\n+ FieldMapper fieldMapper = allFieldMappers.get(i);\n+ if (Regex.simpleMatch(field, fieldMapper.names().name())) {\n+ addFieldMapper(fieldMapper.names().name(), fieldMapper, fieldMappings, request.includeDefaults());\n+ resolved[i] = true;\n+ }\n+ }\n+\n+ } else {\n+ // not a pattern\n+ FieldMapper fieldMapper = documentMapper.mappers().smartNameFieldMapper(field);\n+ if (fieldMapper != null) {\n+ addFieldMapper(field, fieldMapper, fieldMappings, request.includeDefaults());\n+ } else if (request.probablySingleFieldRequest()) {\n+ fieldMappings.put(field, FieldMappingMetaData.NULL);\n+ }\n+ }\n+ }\n+ return fieldMappings.immutableMap();\n+ }\n+\n+ private void addFieldMapper(String field, FieldMapper fieldMapper, MapBuilder<String, FieldMappingMetaData> fieldMappings, boolean includeDefaults) {\n+ if (fieldMappings.containsKey(field)) {\n+ return;\n+ }\n+ try {\n+ XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON);\n+ builder.startObject();\n+ fieldMapper.toXContent(builder, includeDefaults ? includeDefaultsParams : ToXContent.EMPTY_PARAMS);\n+ builder.endObject();\n+ fieldMappings.put(field, new FieldMappingMetaData(fieldMapper.names().fullName(), builder.bytes()));\n+ } catch (IOException e) {\n+ throw new ElasticsearchException(\"failed to serialize XContent of field [\" + field + \"]\", e);\n+ }\n+ }\n+\n+}\n\\ No newline at end of file",
"filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java",
"status": "added"
},
{
"diff": "@@ -0,0 +1,45 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.indices.mapping;\n+\n+import org.apache.lucene.util.LuceneTestCase;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Before;\n+\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+\n+/**\n+ */\n+@LuceneTestCase.Slow\n+@ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.TEST, numNodes = 0)\n+public class DedicatedMasterGetFieldMappingTests extends SimpleGetFieldMappingsTests {\n+\n+ @Before\n+ public void before1() {\n+ Settings settings = settingsBuilder()\n+ .put(\"node.data\", false)\n+ .build();\n+ cluster().startNode(settings);\n+ cluster().startNode(ImmutableSettings.EMPTY);\n+ }\n+\n+}",
"filename": "src/test/java/org/elasticsearch/indices/mapping/DedicatedMasterGetFieldMappingTests.java",
"status": "added"
},
{
"diff": "@@ -22,6 +22,8 @@\n import com.google.common.base.Predicate;\n import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.hamcrest.Matchers;\n@@ -41,8 +43,10 @@ public class SimpleGetFieldMappingsTests extends ElasticsearchIntegrationTest {\n @Test\n public void getMappingsWhereThereAreNone() {\n createIndex(\"index\");\n+ ensureYellow();\n GetFieldMappingsResponse response = client().admin().indices().prepareGetFieldMappings().get();\n- assertThat(response.mappings().size(), equalTo(0));\n+ assertThat(response.mappings().size(), equalTo(1));\n+ assertThat(response.mappings().get(\"index\").size(), equalTo(0));\n \n assertThat(response.fieldMappings(\"index\", \"type\", \"field\"), Matchers.nullValue());\n }\n@@ -57,16 +61,22 @@ private XContentBuilder getMappingForType(String type) throws IOException {\n @Test\n public void simpleGetFieldMappings() throws Exception {\n \n-\n+ Settings.Builder settings = ImmutableSettings.settingsBuilder()\n+ .put(\"number_of_shards\", randomIntBetween(1, 3), \"number_of_replicas\", randomIntBetween(0, 1));\n+ \n assertTrue(client().admin().indices().prepareCreate(\"indexa\")\n .addMapping(\"typeA\", getMappingForType(\"typeA\"))\n .addMapping(\"typeB\", getMappingForType(\"typeB\"))\n+ .setSettings(settings)\n .get().isAcknowledged());\n assertTrue(client().admin().indices().prepareCreate(\"indexb\")\n .addMapping(\"typeA\", getMappingForType(\"typeA\"))\n .addMapping(\"typeB\", getMappingForType(\"typeB\"))\n+ .setSettings(settings)\n .get().isAcknowledged());\n \n+ ensureYellow();\n+\n // Get mappings by full name\n GetFieldMappingsResponse response = client().admin().indices().prepareGetFieldMappings(\"indexa\").setTypes(\"typeA\").setFields(\"field1\", \"obj.subfield\").get();\n assertThat(response.fieldMappings(\"indexa\", \"typeA\", \"field1\").fullName(), equalTo(\"field1\"));",
"filename": "src/test/java/org/elasticsearch/indices/mapping/SimpleGetFieldMappingsTests.java",
"status": "modified"
}
]
} |
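The TransportGetFieldMappingsAction diff in the record above replaces a master-level operation with a per-index fan-out: one sub-request per concrete index, results collected into an AtomicReferenceArray, and a single merged response emitted once a countdown of pending sub-requests reaches zero. Below is a minimal, self-contained Java sketch of that fan-out/merge pattern only; the class and method names (FanOutMergeSketch, executeIndexRequest), the use of plain String results, and CompletableFuture in place of Elasticsearch's ActionListener are illustrative assumptions, not the actual Elasticsearch code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;
import java.util.function.BiConsumer;

public class FanOutMergeSketch {

    // Hypothetical per-index call: reports either a result or a failure to the listener.
    static void executeIndexRequest(String index, BiConsumer<String, Throwable> listener) {
        // Stand-in for a real asynchronous transport call; always succeeds in this sketch.
        listener.accept("mappings-of-" + index, null);
    }

    // Fan out one sub-request per index and merge the per-index results once all have completed.
    static CompletableFuture<List<String>> execute(String[] concreteIndices) {
        CompletableFuture<List<String>> merged = new CompletableFuture<>();
        if (concreteIndices.length == 0) {
            merged.complete(new ArrayList<>());               // nothing to fan out
            return merged;
        }
        AtomicReferenceArray<Object> responses = new AtomicReferenceArray<>(concreteIndices.length);
        AtomicInteger slot = new AtomicInteger();             // next free slot in the responses array
        AtomicInteger remaining = new AtomicInteger(concreteIndices.length);

        for (String index : concreteIndices) {
            executeIndexRequest(index, (result, failure) -> {
                // store either the result or the failure, then check whether this was the last one
                responses.set(slot.getAndIncrement(), failure == null ? result : failure);
                if (remaining.decrementAndGet() == 0) {
                    List<String> ok = new ArrayList<>();
                    for (int i = 0; i < responses.length(); i++) {
                        Object element = responses.get(i);
                        if (element instanceof String) {      // failed indices are skipped during the merge
                            ok.add((String) element);
                        }
                    }
                    merged.complete(ok);
                }
            });
        }
        return merged;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(execute(new String[]{"indexa", "indexb"}).get());
    }
}
```

As in the diff above, failures are stored alongside successes and simply ignored when the responses are merged, so a problem with one index does not fail the whole request.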
{
"body": "`HighlightPhase.hitExecute(SearchContext context, HitContext hitContext)`\n\nAssume that the field object represents a wildcard field (i.e. field.field() returns \"*\")\nAssume that the index contains two fields:\nfield_1 with \"index_options\":\"freq\"\nfield_2 with \"index_options\":\"offsets\"\nAssume that a highlighter has not been explicitly specified (i.e. `field.highlighterType()==null`)\n\nThe object fieldNamesToHighlight will contain both of the above field names.\nThe first pass through the for (String fieldName : fieldNamesToHighlight) loop will evaluate the statement: `if (field.highlighterType() == null){}` to true.\nThe block will evaluate the index_options of the first field (from a HashSet, so essentially random choice) to determine the appropriate highlighter. The highlighter will then be set using `field.highlighterType(\"postings\")`.\n\nSubsequent executions of the fieldNamesToHighlight for loop will evaluate the statement: `if (field.highlighterType() == null){}` to false, based on the settings from the first execution.\n\nResult: The highlighter chosen for the (arbitrary) first field will be used for all subsequent fields. If the selected highlighter is not the plain highlighter then there is the potential for the code to throw an IllegalArgumentException due to a mismatch between the highlighter and the indexing options of the field.\n\nThe solution is to treat the user provided configuration (the field object) as immutable. For each iteration of the fieldNamesToHighlight loop, extract field.highlighterType() to a local variable. Test that local variable for null - and set to a selected highlighter as appropriate.\n",
"comments": [
{
"body": "For completeness, I can reproduce it with the following commands:\n\n```\ncurl -XPUT 'http://localhost:9202/test' -d '{ \"mappings\": { \"document\": { \"properties\": { \"field1\": { \"type\":\"string\" }, \"field2\": { \"type\":\"string\", \"index_options\":\"offsets\" } } } } }'\ncurl -XPUT 'http://localhost:9202/test/document/1' -d '{ \"field2\":\"This is the first field\", \"field1\":\"This is the second field\" }'\ncurl -XPOST 'http://localhost:9202/_search' -d '{ \"query\": { \"term\": { \"field2\": \"field\" } }, \"highlight\": { \"fields\": { \"field*\": {} } } }'\n```\n\nHowever, please note that the bug relies on a specific ordering of a HashSet of String objects (field names). To reproduce it you may have to modify which field name has postings. Also, you may have to modify the regular expression that selects the fields: \"field_\" and \"_\" seems to produce different results. On production I can get the error with \"*\", but not with the test scripts above.\n",
"created_at": "2014-02-19T11:55:46Z"
},
{
"body": "Great catch @ccw-morris , will fix this!\n",
"created_at": "2014-02-20T09:52:57Z"
}
],
"number": 5175,
"title": "Highlighting on a wildcard field name uses the same highlighter for all fields that match"
} | {
"body": "A Field instance can map to multiple actual fields when using wildcard expressions. Each actual field should use the proper highlighter depending on the available data structure (e.g. term_vectors), while we currently select the highlighter for the first field and we keep using the same for all the fields that match the wildcard expression.\n\nModified also how the PercolateContext sets the forceSource option, in a global manner now rather than per field.\n\nCloses #5175\n",
"number": 5223,
"review_comments": [
{
"body": "Can FieldOptions be really immutable? (final fields)\n",
"created_at": "2014-02-24T09:38:04Z"
},
{
"body": "would love that... how long does its constructor need to be then though? Also, we'd need to replicate all of its fields in the builder too, feels too verbose to me.\n",
"created_at": "2014-02-24T09:44:07Z"
}
],
"title": "Made SearchContextHighlight.Field class immutable to prevent from unwanted updates"
} | {
"commits": [
{
"message": "Made SearchContextHighlight.Field class immutable to prevent from erroneously updating it, as it doesn't necessarily map to a single field\n\nA Field instance can map to multiple actual fields when using wildcard expressions. Each actual field should use the proper highlighter depending on the available data structure (e.g. term_vectors), while we currently select the highlighter for the first field and we keep using the same for all the fields that match the wildcard expression.\n\nModified also how the PercolateContext sets the forceSource option, in a global manner now rather than per field.\n\nCloses #5175"
}
],
"files": [
{
"diff": "@@ -189,6 +189,10 @@ public SearchContextHighlight highlight() {\n \n @Override\n public void highlight(SearchContextHighlight highlight) {\n+ if (highlight != null) {\n+ // Enforce highlighting by source, because MemoryIndex doesn't support stored fields.\n+ highlight.globalForceSource(true);\n+ }\n this.highlight = highlight;\n }\n ",
"filename": "src/main/java/org/elasticsearch/percolator/PercolateContext.java",
"status": "modified"
},
{
"diff": "@@ -64,10 +64,7 @@\n import org.elasticsearch.index.service.IndexService;\n import org.elasticsearch.index.shard.service.IndexShard;\n import org.elasticsearch.indices.IndicesService;\n-import org.elasticsearch.percolator.QueryCollector.Count;\n-import org.elasticsearch.percolator.QueryCollector.Match;\n-import org.elasticsearch.percolator.QueryCollector.MatchAndScore;\n-import org.elasticsearch.percolator.QueryCollector.MatchAndSort;\n+import org.elasticsearch.percolator.QueryCollector.*;\n import org.elasticsearch.script.ScriptService;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.SearchShardTarget;\n@@ -79,7 +76,6 @@\n import org.elasticsearch.search.facet.InternalFacets;\n import org.elasticsearch.search.highlight.HighlightField;\n import org.elasticsearch.search.highlight.HighlightPhase;\n-import org.elasticsearch.search.highlight.SearchContextHighlight;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.sort.SortParseElement;\n \n@@ -307,11 +303,6 @@ private ParsedDocument parseRequest(IndexService documentIndexService, Percolate\n // We need to get the actual source from the request body for highlighting, so parse the request body again\n // and only get the doc source.\n if (context.highlight() != null) {\n- // Enforce highlighting by source, because MemoryIndex doesn't support stored fields.\n- for (SearchContextHighlight.Field field : context.highlight().fields()) {\n- field.forceSource(true);\n- }\n-\n parser.close();\n currentFieldName = null;\n parser = XContentFactory.xContent(source).createParser(source);\n@@ -370,10 +361,6 @@ private ParsedDocument parseFetchedDoc(PercolateContext context, BytesReference\n doc = docMapper.parse(source(parser).type(type).flyweight(true));\n \n if (context.highlight() != null) {\n- // Enforce highlighting by source, because MemoryIndex doesn't support stored fields.\n- for (SearchContextHighlight.Field field : context.highlight().fields()) {\n- field.forceSource(true);\n- }\n doc.setSource(fetchedDoc);\n }\n } catch (Throwable e) {",
"filename": "src/main/java/org/elasticsearch/percolator/PercolatorService.java",
"status": "modified"
},
{
"diff": "@@ -68,7 +68,7 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n throw new ElasticsearchIllegalArgumentException(\"the field [\" + highlighterContext.fieldName + \"] should be indexed with term vector with position offsets to be used with fast vector highlighter\");\n }\n \n- Encoder encoder = field.encoder().equals(\"html\") ? HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT;\n+ Encoder encoder = field.fieldOptions().encoder().equals(\"html\") ? HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT;\n \n if (!hitContext.cache().containsKey(CACHE_KEY)) {\n hitContext.cache().put(CACHE_KEY, new HighlighterEntry());\n@@ -77,16 +77,16 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n \n try {\n FieldQuery fieldQuery;\n- if (field.requireFieldMatch()) {\n+ if (field.fieldOptions().requireFieldMatch()) {\n if (cache.fieldMatchFieldQuery == null) {\n // we use top level reader to rewrite the query against all readers, with use caching it across hits (and across readers...)\n- cache.fieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query.originalQuery(), hitContext.topLevelReader(), true, field.requireFieldMatch());\n+ cache.fieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query.originalQuery(), hitContext.topLevelReader(), true, field.fieldOptions().requireFieldMatch());\n }\n fieldQuery = cache.fieldMatchFieldQuery;\n } else {\n if (cache.noFieldMatchFieldQuery == null) {\n // we use top level reader to rewrite the query against all readers, with use caching it across hits (and across readers...)\n- cache.noFieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query.originalQuery(), hitContext.topLevelReader(), true, field.requireFieldMatch());\n+ cache.noFieldMatchFieldQuery = new CustomFieldQuery(highlighterContext.query.originalQuery(), hitContext.topLevelReader(), true, field.fieldOptions().requireFieldMatch());\n }\n fieldQuery = cache.noFieldMatchFieldQuery;\n }\n@@ -97,31 +97,31 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n BaseFragmentsBuilder fragmentsBuilder;\n \n BoundaryScanner boundaryScanner = DEFAULT_BOUNDARY_SCANNER;\n- if (field.boundaryMaxScan() != SimpleBoundaryScanner.DEFAULT_MAX_SCAN || field.boundaryChars() != SimpleBoundaryScanner.DEFAULT_BOUNDARY_CHARS) {\n- boundaryScanner = new SimpleBoundaryScanner(field.boundaryMaxScan(), field.boundaryChars());\n+ if (field.fieldOptions().boundaryMaxScan() != SimpleBoundaryScanner.DEFAULT_MAX_SCAN || field.fieldOptions().boundaryChars() != SimpleBoundaryScanner.DEFAULT_BOUNDARY_CHARS) {\n+ boundaryScanner = new SimpleBoundaryScanner(field.fieldOptions().boundaryMaxScan(), field.fieldOptions().boundaryChars());\n }\n-\n- if (field.numberOfFragments() == 0) {\n+ boolean forceSource = context.highlight().forceSource(field);\n+ if (field.fieldOptions().numberOfFragments() == 0) {\n fragListBuilder = new SingleFragListBuilder();\n \n- if (!field.forceSource() && mapper.fieldType().stored()) {\n- fragmentsBuilder = new SimpleFragmentsBuilder(mapper, field.preTags(), field.postTags(), boundaryScanner);\n+ if (!forceSource && mapper.fieldType().stored()) {\n+ fragmentsBuilder = new SimpleFragmentsBuilder(mapper, field.fieldOptions().preTags(), field.fieldOptions().postTags(), boundaryScanner);\n } else {\n- fragmentsBuilder = new SourceSimpleFragmentsBuilder(mapper, context, field.preTags(), field.postTags(), boundaryScanner);\n+ fragmentsBuilder = new 
SourceSimpleFragmentsBuilder(mapper, context, field.fieldOptions().preTags(), field.fieldOptions().postTags(), boundaryScanner);\n }\n } else {\n- fragListBuilder = field.fragmentOffset() == -1 ? new SimpleFragListBuilder() : new SimpleFragListBuilder(field.fragmentOffset());\n- if (field.scoreOrdered()) {\n- if (!field.forceSource() && mapper.fieldType().stored()) {\n- fragmentsBuilder = new ScoreOrderFragmentsBuilder(field.preTags(), field.postTags(), boundaryScanner);\n+ fragListBuilder = field.fieldOptions().fragmentOffset() == -1 ? new SimpleFragListBuilder() : new SimpleFragListBuilder(field.fieldOptions().fragmentOffset());\n+ if (field.fieldOptions().scoreOrdered()) {\n+ if (!forceSource && mapper.fieldType().stored()) {\n+ fragmentsBuilder = new ScoreOrderFragmentsBuilder(field.fieldOptions().preTags(), field.fieldOptions().postTags(), boundaryScanner);\n } else {\n- fragmentsBuilder = new SourceScoreOrderFragmentsBuilder(mapper, context, field.preTags(), field.postTags(), boundaryScanner);\n+ fragmentsBuilder = new SourceScoreOrderFragmentsBuilder(mapper, context, field.fieldOptions().preTags(), field.fieldOptions().postTags(), boundaryScanner);\n }\n } else {\n- if (!field.forceSource() && mapper.fieldType().stored()) {\n- fragmentsBuilder = new SimpleFragmentsBuilder(mapper, field.preTags(), field.postTags(), boundaryScanner);\n+ if (!forceSource && mapper.fieldType().stored()) {\n+ fragmentsBuilder = new SimpleFragmentsBuilder(mapper, field.fieldOptions().preTags(), field.fieldOptions().postTags(), boundaryScanner);\n } else {\n- fragmentsBuilder = new SourceSimpleFragmentsBuilder(mapper, context, field.preTags(), field.postTags(), boundaryScanner);\n+ fragmentsBuilder = new SourceSimpleFragmentsBuilder(mapper, context, field.fieldOptions().preTags(), field.fieldOptions().postTags(), boundaryScanner);\n }\n }\n }\n@@ -135,37 +135,37 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n // fragment builders are used explicitly\n cache.fvh = new org.apache.lucene.search.vectorhighlight.FastVectorHighlighter();\n }\n- CustomFieldQuery.highlightFilters.set(field.highlightFilter());\n+ CustomFieldQuery.highlightFilters.set(field.fieldOptions().highlightFilter());\n cache.mappers.put(mapper, entry);\n }\n- cache.fvh.setPhraseLimit(field.phraseLimit());\n+ cache.fvh.setPhraseLimit(field.fieldOptions().phraseLimit());\n \n String[] fragments;\n \n // a HACK to make highlighter do highlighting, even though its using the single frag list builder\n- int numberOfFragments = field.numberOfFragments() == 0 ? Integer.MAX_VALUE : field.numberOfFragments();\n- int fragmentCharSize = field.numberOfFragments() == 0 ? Integer.MAX_VALUE : field.fragmentCharSize();\n+ int numberOfFragments = field.fieldOptions().numberOfFragments() == 0 ? Integer.MAX_VALUE : field.fieldOptions().numberOfFragments();\n+ int fragmentCharSize = field.fieldOptions().numberOfFragments() == 0 ? 
Integer.MAX_VALUE : field.fieldOptions().fragmentCharSize();\n // we highlight against the low level reader and docId, because if we load source, we want to reuse it if possible\n // Only send matched fields if they were requested to save time.\n- if (field.matchedFields() != null && !field.matchedFields().isEmpty()) {\n- fragments = cache.fvh.getBestFragments(fieldQuery, hitContext.reader(), hitContext.docId(), mapper.names().indexName(), field.matchedFields(), fragmentCharSize,\n- numberOfFragments, entry.fragListBuilder, entry.fragmentsBuilder, field.preTags(), field.postTags(), encoder);\n+ if (field.fieldOptions().matchedFields() != null && !field.fieldOptions().matchedFields().isEmpty()) {\n+ fragments = cache.fvh.getBestFragments(fieldQuery, hitContext.reader(), hitContext.docId(), mapper.names().indexName(), field.fieldOptions().matchedFields(), fragmentCharSize,\n+ numberOfFragments, entry.fragListBuilder, entry.fragmentsBuilder, field.fieldOptions().preTags(), field.fieldOptions().postTags(), encoder);\n } else {\n fragments = cache.fvh.getBestFragments(fieldQuery, hitContext.reader(), hitContext.docId(), mapper.names().indexName(), fragmentCharSize,\n- numberOfFragments, entry.fragListBuilder, entry.fragmentsBuilder, field.preTags(), field.postTags(), encoder);\n+ numberOfFragments, entry.fragListBuilder, entry.fragmentsBuilder, field.fieldOptions().preTags(), field.fieldOptions().postTags(), encoder);\n }\n \n if (fragments != null && fragments.length > 0) {\n return new HighlightField(highlighterContext.fieldName, StringText.convertFromStringArray(fragments));\n }\n \n- int noMatchSize = highlighterContext.field.noMatchSize();\n+ int noMatchSize = highlighterContext.field.fieldOptions().noMatchSize();\n if (noMatchSize > 0) {\n // Essentially we just request that a fragment is built from 0 to noMatchSize using the normal fragmentsBuilder\n FieldFragList fieldFragList = new SimpleFieldFragList(-1 /*ignored*/);\n fieldFragList.add(0, noMatchSize, Collections.<WeightedPhraseInfo>emptyList());\n fragments = entry.fragmentsBuilder.createFragments(hitContext.reader(), hitContext.docId(), mapper.names().indexName(),\n- fieldFragList, 1, field.preTags(), field.postTags(), encoder);\n+ fieldFragList, 1, field.fieldOptions().preTags(), field.fieldOptions().postTags(), encoder);\n if (fragments != null && fragments.length > 0) {\n return new HighlightField(highlighterContext.fieldName, StringText.convertFromStringArray(fragments));\n }",
"filename": "src/main/java/org/elasticsearch/search/highlight/FastVectorHighlighter.java",
"status": "modified"
},
{
"diff": "@@ -86,7 +86,7 @@ public void hitExecute(SearchContext context, HitContext hitContext) throws Elas\n fieldNamesToHighlight = ImmutableSet.of(field.field());\n }\n \n- if (field.forceSource()) {\n+ if (context.highlight().forceSource(field)) {\n SourceFieldMapper sourceFieldMapper = context.mapperService().documentMapper(hitContext.hit().type()).sourceMapper();\n if (!sourceFieldMapper.enabled()) {\n throw new ElasticsearchIllegalArgumentException(\"source is forced for fields \" + fieldNamesToHighlight + \" but type [\" + hitContext.hit().type() + \"] has disabled _source\");\n@@ -99,27 +99,28 @@ public void hitExecute(SearchContext context, HitContext hitContext) throws Elas\n continue;\n }\n \n- if (field.highlighterType() == null) {\n+ String highlighterType = field.fieldOptions().highlighterType();\n+ if (highlighterType == null) {\n boolean useFastVectorHighlighter = fieldMapper.fieldType().storeTermVectors() && fieldMapper.fieldType().storeTermVectorOffsets() && fieldMapper.fieldType().storeTermVectorPositions();\n if (useFastVectorHighlighter) {\n- field.highlighterType(\"fvh\");\n+ highlighterType = \"fvh\";\n } else if (fieldMapper.fieldType().indexOptions() == FieldInfo.IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS) {\n- field.highlighterType(\"postings\");\n+ highlighterType = \"postings\";\n } else {\n- field.highlighterType(\"plain\");\n+ highlighterType = \"plain\";\n }\n }\n \n- Highlighter highlighter = highlighters.get(field.highlighterType());\n+ Highlighter highlighter = highlighters.get(highlighterType);\n if (highlighter == null) {\n- throw new ElasticsearchIllegalArgumentException(\"unknown highlighter type [\" + field.highlighterType() + \"] for the field [\" + fieldName + \"]\");\n+ throw new ElasticsearchIllegalArgumentException(\"unknown highlighter type [\" + highlighterType + \"] for the field [\" + fieldName + \"]\");\n }\n \n HighlighterContext.HighlightQuery highlightQuery;\n- if (field.highlightQuery() == null) {\n+ if (field.fieldOptions().highlightQuery() == null) {\n highlightQuery = new HighlighterContext.HighlightQuery(context.parsedQuery().query(), context.query(), context.queryRewritten());\n } else {\n- highlightQuery = new HighlighterContext.HighlightQuery(field.highlightQuery(), field.highlightQuery(), false);\n+ highlightQuery = new HighlighterContext.HighlightQuery(field.fieldOptions().highlightQuery(), field.fieldOptions().highlightQuery(), false);\n }\n HighlighterContext highlighterContext = new HighlighterContext(fieldName, field, fieldMapper, context, hitContext, highlightQuery);\n HighlightField highlightField = highlighter.highlight(highlighterContext);",
"filename": "src/main/java/org/elasticsearch/search/highlight/HighlightPhase.java",
"status": "modified"
},
{
"diff": "@@ -41,7 +41,9 @@ private HighlightUtils() {\n \n }\n \n- static List<Object> loadFieldValues(FieldMapper<?> mapper, SearchContext searchContext, FetchSubPhase.HitContext hitContext, boolean forceSource) throws IOException {\n+ static List<Object> loadFieldValues(SearchContextHighlight.Field field, FieldMapper<?> mapper, SearchContext searchContext, FetchSubPhase.HitContext hitContext) throws IOException {\n+ //percolator needs to always load from source, thus it sets the global force source to true\n+ boolean forceSource = searchContext.highlight().forceSource(field);\n List<Object> textsToHighlight;\n if (!forceSource && mapper.fieldType().stored()) {\n CustomFieldsVisitor fieldVisitor = new CustomFieldsVisitor(ImmutableSet.of(mapper.names().indexName()), false);",
"filename": "src/main/java/org/elasticsearch/search/highlight/HighlightUtils.java",
"status": "modified"
},
{
"diff": "@@ -21,15 +21,14 @@\n \n import com.google.common.collect.Lists;\n import com.google.common.collect.Sets;\n-import org.apache.lucene.search.Query;\n import org.apache.lucene.search.vectorhighlight.SimpleBoundaryScanner;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.SearchParseException;\n import org.elasticsearch.search.internal.SearchContext;\n \n import java.util.List;\n-import java.util.Map;\n import java.util.Set;\n \n import static com.google.common.collect.Lists.newArrayList;\n@@ -68,25 +67,14 @@ public class HighlighterParseElement implements SearchParseElement {\n public void parse(XContentParser parser, SearchContext context) throws Exception {\n XContentParser.Token token;\n String topLevelFieldName = null;\n- List<SearchContextHighlight.Field> fields = newArrayList();\n+ List<Tuple<String, SearchContextHighlight.FieldOptions.Builder>> fieldsOptions = newArrayList();\n \n- String[] globalPreTags = DEFAULT_PRE_TAGS;\n- String[] globalPostTags = DEFAULT_POST_TAGS;\n- boolean globalScoreOrdered = false;\n- boolean globalHighlightFilter = false;\n- boolean globalRequireFieldMatch = false;\n- boolean globalForceSource = false;\n- int globalFragmentSize = 100;\n- int globalNumOfFragments = 5;\n- String globalEncoder = \"default\";\n- int globalBoundaryMaxScan = SimpleBoundaryScanner.DEFAULT_MAX_SCAN;\n- Character[] globalBoundaryChars = SimpleBoundaryScanner.DEFAULT_BOUNDARY_CHARS;\n- String globalHighlighterType = null;\n- String globalFragmenter = null;\n- Map<String, Object> globalOptions = null;\n- Query globalHighlightQuery = null;\n- int globalNoMatchSize = 0;\n- int globalPhraseLimit = 256;\n+ SearchContextHighlight.FieldOptions.Builder globalOptionsBuilder = new SearchContextHighlight.FieldOptions.Builder()\n+ .preTags(DEFAULT_PRE_TAGS).postTags(DEFAULT_POST_TAGS).scoreOrdered(false).highlightFilter(false)\n+ .requireFieldMatch(false).forceSource(false).fragmentCharSize(100).numberOfFragments(5)\n+ .encoder(\"default\").boundaryMaxScan(SimpleBoundaryScanner.DEFAULT_MAX_SCAN)\n+ .boundaryChars(SimpleBoundaryScanner.DEFAULT_BOUNDARY_CHARS)\n+ .noMatchSize(0).phraseLimit(256);\n \n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n@@ -97,62 +85,63 @@ public void parse(XContentParser parser, SearchContext context) throws Exception\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n preTagsList.add(parser.text());\n }\n- globalPreTags = preTagsList.toArray(new String[preTagsList.size()]);\n+ globalOptionsBuilder.preTags(preTagsList.toArray(new String[preTagsList.size()]));\n } else if (\"post_tags\".equals(topLevelFieldName) || \"postTags\".equals(topLevelFieldName)) {\n List<String> postTagsList = Lists.newArrayList();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n postTagsList.add(parser.text());\n }\n- globalPostTags = postTagsList.toArray(new String[postTagsList.size()]);\n+ globalOptionsBuilder.postTags(postTagsList.toArray(new String[postTagsList.size()]));\n }\n } else if (token.isValue()) {\n if (\"order\".equals(topLevelFieldName)) {\n- globalScoreOrdered = \"score\".equals(parser.text());\n+ globalOptionsBuilder.scoreOrdered(\"score\".equals(parser.text()));\n } else if (\"tags_schema\".equals(topLevelFieldName) || \"tagsSchema\".equals(topLevelFieldName)) {\n String schema = 
parser.text();\n if (\"styled\".equals(schema)) {\n- globalPreTags = STYLED_PRE_TAG;\n- globalPostTags = STYLED_POST_TAGS;\n+ globalOptionsBuilder.preTags(STYLED_PRE_TAG);\n+ globalOptionsBuilder.postTags(STYLED_POST_TAGS);\n }\n } else if (\"highlight_filter\".equals(topLevelFieldName) || \"highlightFilter\".equals(topLevelFieldName)) {\n- globalHighlightFilter = parser.booleanValue();\n+ globalOptionsBuilder.highlightFilter(parser.booleanValue());\n } else if (\"fragment_size\".equals(topLevelFieldName) || \"fragmentSize\".equals(topLevelFieldName)) {\n- globalFragmentSize = parser.intValue();\n+ globalOptionsBuilder.fragmentCharSize(parser.intValue());\n } else if (\"number_of_fragments\".equals(topLevelFieldName) || \"numberOfFragments\".equals(topLevelFieldName)) {\n- globalNumOfFragments = parser.intValue();\n+ globalOptionsBuilder.numberOfFragments(parser.intValue());\n } else if (\"encoder\".equals(topLevelFieldName)) {\n- globalEncoder = parser.text();\n+ globalOptionsBuilder.encoder(parser.text());\n } else if (\"require_field_match\".equals(topLevelFieldName) || \"requireFieldMatch\".equals(topLevelFieldName)) {\n- globalRequireFieldMatch = parser.booleanValue();\n+ globalOptionsBuilder.requireFieldMatch(parser.booleanValue());\n } else if (\"boundary_max_scan\".equals(topLevelFieldName) || \"boundaryMaxScan\".equals(topLevelFieldName)) {\n- globalBoundaryMaxScan = parser.intValue();\n+ globalOptionsBuilder.boundaryMaxScan(parser.intValue());\n } else if (\"boundary_chars\".equals(topLevelFieldName) || \"boundaryChars\".equals(topLevelFieldName)) {\n char[] charsArr = parser.text().toCharArray();\n- globalBoundaryChars = new Character[charsArr.length];\n+ Character[] globalBoundaryChars = new Character[charsArr.length];\n for (int i = 0; i < charsArr.length; i++) {\n globalBoundaryChars[i] = charsArr[i];\n }\n+ globalOptionsBuilder.boundaryChars(globalBoundaryChars);\n } else if (\"type\".equals(topLevelFieldName)) {\n- globalHighlighterType = parser.text();\n+ globalOptionsBuilder.highlighterType(parser.text());\n } else if (\"fragmenter\".equals(topLevelFieldName)) {\n- globalFragmenter = parser.text();\n+ globalOptionsBuilder.fragmenter(parser.text());\n } else if (\"no_match_size\".equals(topLevelFieldName) || \"noMatchSize\".equals(topLevelFieldName)) {\n- globalNoMatchSize = parser.intValue();\n+ globalOptionsBuilder.noMatchSize(parser.intValue());\n } else if (\"force_source\".equals(topLevelFieldName) || \"forceSource\".equals(topLevelFieldName)) {\n- globalForceSource = parser.booleanValue();\n+ globalOptionsBuilder.forceSource(parser.booleanValue());\n } else if (\"phrase_limit\".equals(topLevelFieldName) || \"phraseLimit\".equals(topLevelFieldName)) {\n- globalPhraseLimit = parser.intValue();\n+ globalOptionsBuilder.phraseLimit(parser.intValue());\n }\n } else if (token == XContentParser.Token.START_OBJECT && \"options\".equals(topLevelFieldName)) {\n- globalOptions = parser.map();\n+ globalOptionsBuilder.options(parser.map());\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"fields\".equals(topLevelFieldName)) {\n String highlightFieldName = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n highlightFieldName = parser.currentName();\n } else if (token == XContentParser.Token.START_OBJECT) {\n- SearchContextHighlight.Field field = new SearchContextHighlight.Field(highlightFieldName);\n+ SearchContextHighlight.FieldOptions.Builder fieldOptionsBuilder = new 
SearchContextHighlight.FieldOptions.Builder();\n String fieldName = null;\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n@@ -163,126 +152,79 @@ public void parse(XContentParser parser, SearchContext context) throws Exception\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n preTagsList.add(parser.text());\n }\n- field.preTags(preTagsList.toArray(new String[preTagsList.size()]));\n+ fieldOptionsBuilder.preTags(preTagsList.toArray(new String[preTagsList.size()]));\n } else if (\"post_tags\".equals(fieldName) || \"postTags\".equals(fieldName)) {\n List<String> postTagsList = Lists.newArrayList();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n postTagsList.add(parser.text());\n }\n- field.postTags(postTagsList.toArray(new String[postTagsList.size()]));\n+ fieldOptionsBuilder.postTags(postTagsList.toArray(new String[postTagsList.size()]));\n } else if (\"matched_fields\".equals(fieldName) || \"matchedFields\".equals(fieldName)) {\n Set<String> matchedFields = Sets.newHashSet();\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n matchedFields.add(parser.text());\n }\n- field.matchedFields(matchedFields);\n+ fieldOptionsBuilder.matchedFields(matchedFields);\n }\n } else if (token.isValue()) {\n if (\"fragment_size\".equals(fieldName) || \"fragmentSize\".equals(fieldName)) {\n- field.fragmentCharSize(parser.intValue());\n+ fieldOptionsBuilder.fragmentCharSize(parser.intValue());\n } else if (\"number_of_fragments\".equals(fieldName) || \"numberOfFragments\".equals(fieldName)) {\n- field.numberOfFragments(parser.intValue());\n+ fieldOptionsBuilder.numberOfFragments(parser.intValue());\n } else if (\"fragment_offset\".equals(fieldName) || \"fragmentOffset\".equals(fieldName)) {\n- field.fragmentOffset(parser.intValue());\n+ fieldOptionsBuilder.fragmentOffset(parser.intValue());\n } else if (\"highlight_filter\".equals(fieldName) || \"highlightFilter\".equals(fieldName)) {\n- field.highlightFilter(parser.booleanValue());\n+ fieldOptionsBuilder.highlightFilter(parser.booleanValue());\n } else if (\"order\".equals(fieldName)) {\n- field.scoreOrdered(\"score\".equals(parser.text()));\n+ fieldOptionsBuilder.scoreOrdered(\"score\".equals(parser.text()));\n } else if (\"require_field_match\".equals(fieldName) || \"requireFieldMatch\".equals(fieldName)) {\n- field.requireFieldMatch(parser.booleanValue());\n+ fieldOptionsBuilder.requireFieldMatch(parser.booleanValue());\n } else if (\"boundary_max_scan\".equals(topLevelFieldName) || \"boundaryMaxScan\".equals(topLevelFieldName)) {\n- field.boundaryMaxScan(parser.intValue());\n+ fieldOptionsBuilder.boundaryMaxScan(parser.intValue());\n } else if (\"boundary_chars\".equals(topLevelFieldName) || \"boundaryChars\".equals(topLevelFieldName)) {\n char[] charsArr = parser.text().toCharArray();\n Character[] boundaryChars = new Character[charsArr.length];\n for (int i = 0; i < charsArr.length; i++) {\n boundaryChars[i] = charsArr[i];\n }\n- field.boundaryChars(boundaryChars);\n+ fieldOptionsBuilder.boundaryChars(boundaryChars);\n } else if (\"type\".equals(fieldName)) {\n- field.highlighterType(parser.text());\n+ fieldOptionsBuilder.highlighterType(parser.text());\n } else if (\"fragmenter\".equals(fieldName)) {\n- field.fragmenter(parser.text());\n+ fieldOptionsBuilder.fragmenter(parser.text());\n } else if (\"no_match_size\".equals(fieldName) || \"noMatchSize\".equals(fieldName)) {\n- 
field.noMatchSize(parser.intValue());\n+ fieldOptionsBuilder.noMatchSize(parser.intValue());\n } else if (\"force_source\".equals(fieldName) || \"forceSource\".equals(fieldName)) {\n- field.forceSource(parser.booleanValue());\n+ fieldOptionsBuilder.forceSource(parser.booleanValue());\n } else if (\"phrase_limit\".equals(fieldName) || \"phraseLimit\".equals(fieldName)) {\n- field.phraseLimit(parser.intValue());\n+ fieldOptionsBuilder.phraseLimit(parser.intValue());\n }\n } else if (token == XContentParser.Token.START_OBJECT) {\n if (\"highlight_query\".equals(fieldName) || \"highlightQuery\".equals(fieldName)) {\n- field.highlightQuery(context.queryParserService().parse(parser).query());\n- } else if (fieldName.equals(\"options\")) {\n- field.options(parser.map());\n+ fieldOptionsBuilder.highlightQuery(context.queryParserService().parse(parser).query());\n+ } else if (\"options\".equals(fieldName)) {\n+ fieldOptionsBuilder.options(parser.map());\n }\n }\n }\n- fields.add(field);\n+ fieldsOptions.add(Tuple.tuple(highlightFieldName, fieldOptionsBuilder));\n }\n }\n } else if (\"highlight_query\".equals(topLevelFieldName) || \"highlightQuery\".equals(topLevelFieldName)) {\n- globalHighlightQuery = context.queryParserService().parse(parser).query();\n+ globalOptionsBuilder.highlightQuery(context.queryParserService().parse(parser).query());\n }\n }\n }\n- if (globalPreTags != null && globalPostTags == null) {\n+\n+ SearchContextHighlight.FieldOptions globalOptions = globalOptionsBuilder.build();\n+ if (globalOptions.preTags() != null && globalOptions.postTags() == null) {\n throw new SearchParseException(context, \"Highlighter global preTags are set, but global postTags are not set\");\n }\n \n- // now, go over and fill all fields with default values from the global state\n- for (SearchContextHighlight.Field field : fields) {\n- if (field.preTags() == null) {\n- field.preTags(globalPreTags);\n- }\n- if (field.postTags() == null) {\n- field.postTags(globalPostTags);\n- }\n- if (field.highlightFilter() == null) {\n- field.highlightFilter(globalHighlightFilter);\n- }\n- if (field.scoreOrdered() == null) {\n- field.scoreOrdered(globalScoreOrdered);\n- }\n- if (field.fragmentCharSize() == -1) {\n- field.fragmentCharSize(globalFragmentSize);\n- }\n- if (field.numberOfFragments() == -1) {\n- field.numberOfFragments(globalNumOfFragments);\n- }\n- if (field.encoder() == null) {\n- field.encoder(globalEncoder);\n- }\n- if (field.requireFieldMatch() == null) {\n- field.requireFieldMatch(globalRequireFieldMatch);\n- }\n- if (field.boundaryMaxScan() == -1) {\n- field.boundaryMaxScan(globalBoundaryMaxScan);\n- }\n- if (field.boundaryChars() == null) {\n- field.boundaryChars(globalBoundaryChars);\n- }\n- if (field.highlighterType() == null) {\n- field.highlighterType(globalHighlighterType);\n- }\n- if (field.fragmenter() == null) {\n- field.fragmenter(globalFragmenter);\n- }\n- if (field.options() == null || field.options().size() == 0) {\n- field.options(globalOptions);\n- }\n- if (field.highlightQuery() == null && globalHighlightQuery != null) {\n- field.highlightQuery(globalHighlightQuery);\n- }\n- if (field.noMatchSize() == -1) {\n- field.noMatchSize(globalNoMatchSize);\n- }\n- if (field.forceSource() == null) {\n- field.forceSource(globalForceSource);\n- }\n- if (field.phraseLimit() == -1) {\n- field.phraseLimit(globalPhraseLimit);\n- }\n+ List<SearchContextHighlight.Field> fields = Lists.newArrayList();\n+ // now, go over and fill all fieldsOptions with default values from the global state\n+ for 
(Tuple<String, SearchContextHighlight.FieldOptions.Builder> tuple : fieldsOptions) {\n+ fields.add(new SearchContextHighlight.Field(tuple.v1(), tuple.v2().merge(globalOptions).build()));\n }\n \n context.highlight(new SearchContextHighlight(fields));",
"filename": "src/main/java/org/elasticsearch/search/highlight/HighlighterParseElement.java",
"status": "modified"
},
{
"diff": "@@ -58,32 +58,33 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n FetchSubPhase.HitContext hitContext = highlighterContext.hitContext;\n FieldMapper<?> mapper = highlighterContext.mapper;\n \n- Encoder encoder = field.encoder().equals(\"html\") ? HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT;\n+ Encoder encoder = field.fieldOptions().encoder().equals(\"html\") ? HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT;\n \n if (!hitContext.cache().containsKey(CACHE_KEY)) {\n Map<FieldMapper<?>, org.apache.lucene.search.highlight.Highlighter> mappers = Maps.newHashMap();\n hitContext.cache().put(CACHE_KEY, mappers);\n }\n+ @SuppressWarnings(\"unchecked\")\n Map<FieldMapper<?>, org.apache.lucene.search.highlight.Highlighter> cache = (Map<FieldMapper<?>, org.apache.lucene.search.highlight.Highlighter>) hitContext.cache().get(CACHE_KEY);\n \n org.apache.lucene.search.highlight.Highlighter entry = cache.get(mapper);\n if (entry == null) {\n Query query = highlighterContext.query.originalQuery();\n- QueryScorer queryScorer = new CustomQueryScorer(query, field.requireFieldMatch() ? mapper.names().indexName() : null);\n+ QueryScorer queryScorer = new CustomQueryScorer(query, field.fieldOptions().requireFieldMatch() ? mapper.names().indexName() : null);\n queryScorer.setExpandMultiTermQuery(true);\n Fragmenter fragmenter;\n- if (field.numberOfFragments() == 0) {\n+ if (field.fieldOptions().numberOfFragments() == 0) {\n fragmenter = new NullFragmenter();\n- } else if (field.fragmenter() == null) {\n- fragmenter = new SimpleSpanFragmenter(queryScorer, field.fragmentCharSize());\n- } else if (\"simple\".equals(field.fragmenter())) {\n- fragmenter = new SimpleFragmenter(field.fragmentCharSize());\n- } else if (\"span\".equals(field.fragmenter())) {\n- fragmenter = new SimpleSpanFragmenter(queryScorer, field.fragmentCharSize());\n+ } else if (field.fieldOptions().fragmenter() == null) {\n+ fragmenter = new SimpleSpanFragmenter(queryScorer, field.fieldOptions().fragmentCharSize());\n+ } else if (\"simple\".equals(field.fieldOptions().fragmenter())) {\n+ fragmenter = new SimpleFragmenter(field.fieldOptions().fragmentCharSize());\n+ } else if (\"span\".equals(field.fieldOptions().fragmenter())) {\n+ fragmenter = new SimpleSpanFragmenter(queryScorer, field.fieldOptions().fragmentCharSize());\n } else {\n- throw new ElasticsearchIllegalArgumentException(\"unknown fragmenter option [\" + field.fragmenter() + \"] for the field [\" + highlighterContext.fieldName + \"]\");\n+ throw new ElasticsearchIllegalArgumentException(\"unknown fragmenter option [\" + field.fieldOptions().fragmenter() + \"] for the field [\" + highlighterContext.fieldName + \"]\");\n }\n- Formatter formatter = new SimpleHTMLFormatter(field.preTags()[0], field.postTags()[0]);\n+ Formatter formatter = new SimpleHTMLFormatter(field.fieldOptions().preTags()[0], field.fieldOptions().postTags()[0]);\n \n entry = new org.apache.lucene.search.highlight.Highlighter(formatter, encoder, queryScorer);\n entry.setTextFragmenter(fragmenter);\n@@ -94,12 +95,12 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n }\n \n // a HACK to make highlighter do highlighting, even though its using the single frag list builder\n- int numberOfFragments = field.numberOfFragments() == 0 ? 1 : field.numberOfFragments();\n+ int numberOfFragments = field.fieldOptions().numberOfFragments() == 0 ? 
1 : field.fieldOptions().numberOfFragments();\n ArrayList<TextFragment> fragsList = new ArrayList<TextFragment>();\n List<Object> textsToHighlight;\n \n try {\n- textsToHighlight = HighlightUtils.loadFieldValues(mapper, context, hitContext, field.forceSource());\n+ textsToHighlight = HighlightUtils.loadFieldValues(field, mapper, context, hitContext);\n \n for (Object textToHighlight : textsToHighlight) {\n String text = textToHighlight.toString();\n@@ -119,7 +120,7 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n } catch (Exception e) {\n throw new FetchPhaseExecutionException(context, \"Failed to highlight field [\" + highlighterContext.fieldName + \"]\", e);\n }\n- if (field.scoreOrdered()) {\n+ if (field.fieldOptions().scoreOrdered()) {\n CollectionUtil.introSort(fragsList, new Comparator<TextFragment>() {\n public int compare(TextFragment o1, TextFragment o2) {\n return Math.round(o2.getScore() - o1.getScore());\n@@ -128,7 +129,7 @@ public int compare(TextFragment o1, TextFragment o2) {\n }\n String[] fragments;\n // number_of_fragments is set to 0 but we have a multivalued field\n- if (field.numberOfFragments() == 0 && textsToHighlight.size() > 1 && fragsList.size() > 0) {\n+ if (field.fieldOptions().numberOfFragments() == 0 && textsToHighlight.size() > 1 && fragsList.size() > 0) {\n fragments = new String[fragsList.size()];\n for (int i = 0; i < fragsList.size(); i++) {\n fragments[i] = fragsList.get(i).toString();\n@@ -142,11 +143,11 @@ public int compare(TextFragment o1, TextFragment o2) {\n }\n }\n \n- if (fragments != null && fragments.length > 0) {\n+ if (fragments.length > 0) {\n return new HighlightField(highlighterContext.fieldName, StringText.convertFromStringArray(fragments));\n }\n \n- int noMatchSize = highlighterContext.field.noMatchSize();\n+ int noMatchSize = highlighterContext.field.fieldOptions().noMatchSize();\n if (noMatchSize > 0 && textsToHighlight.size() > 0) {\n // Pull an excerpt from the beginning of the string but make sure to split the string on a term boundary.\n String fieldContents = textsToHighlight.get(0).toString();",
"filename": "src/main/java/org/elasticsearch/search/highlight/PlainHighlighter.java",
"status": "modified"
},
{
"diff": "@@ -83,27 +83,27 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n MapperHighlighterEntry mapperHighlighterEntry = highlighterEntry.mappers.get(fieldMapper);\n \n if (mapperHighlighterEntry == null) {\n- Encoder encoder = field.encoder().equals(\"html\") ? HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT;\n- CustomPassageFormatter passageFormatter = new CustomPassageFormatter(field.preTags()[0], field.postTags()[0], encoder);\n- BytesRef[] filteredQueryTerms = filterTerms(highlighterEntry.queryTerms, fieldMapper.names().indexName(), field.requireFieldMatch());\n+ Encoder encoder = field.fieldOptions().encoder().equals(\"html\") ? HighlightUtils.Encoders.HTML : HighlightUtils.Encoders.DEFAULT;\n+ CustomPassageFormatter passageFormatter = new CustomPassageFormatter(field.fieldOptions().preTags()[0], field.fieldOptions().postTags()[0], encoder);\n+ BytesRef[] filteredQueryTerms = filterTerms(highlighterEntry.queryTerms, fieldMapper.names().indexName(), field.fieldOptions().requireFieldMatch());\n mapperHighlighterEntry = new MapperHighlighterEntry(passageFormatter, filteredQueryTerms);\n }\n \n //we merge back multiple values into a single value using the paragraph separator, unless we have to highlight every single value separately (number_of_fragments=0).\n- boolean mergeValues = field.numberOfFragments() != 0;\n+ boolean mergeValues = field.fieldOptions().numberOfFragments() != 0;\n List<Snippet> snippets = new ArrayList<Snippet>();\n int numberOfFragments;\n \n try {\n //we manually load the field values (from source if needed)\n- List<Object> textsToHighlight = HighlightUtils.loadFieldValues(fieldMapper, context, hitContext, field.forceSource());\n- CustomPostingsHighlighter highlighter = new CustomPostingsHighlighter(mapperHighlighterEntry.passageFormatter, textsToHighlight, mergeValues, Integer.MAX_VALUE-1, field.noMatchSize());\n+ List<Object> textsToHighlight = HighlightUtils.loadFieldValues(field, fieldMapper, context, hitContext);\n+ CustomPostingsHighlighter highlighter = new CustomPostingsHighlighter(mapperHighlighterEntry.passageFormatter, textsToHighlight, mergeValues, Integer.MAX_VALUE-1, field.fieldOptions().noMatchSize());\n \n- if (field.numberOfFragments() == 0) {\n+ if (field.fieldOptions().numberOfFragments() == 0) {\n highlighter.setBreakIterator(new WholeBreakIterator());\n numberOfFragments = 1; //1 per value since we highlight per value\n } else {\n- numberOfFragments = field.numberOfFragments();\n+ numberOfFragments = field.fieldOptions().numberOfFragments();\n }\n \n //we highlight every value separately calling the highlight method multiple times, only if we need to have back a snippet per value (whole value)\n@@ -123,9 +123,9 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n throw new FetchPhaseExecutionException(context, \"Failed to highlight field [\" + highlighterContext.fieldName + \"]\", e);\n }\n \n- snippets = filterSnippets(snippets, field.numberOfFragments());\n+ snippets = filterSnippets(snippets, field.fieldOptions().numberOfFragments());\n \n- if (field.scoreOrdered()) {\n+ if (field.fieldOptions().scoreOrdered()) {\n //let's sort the snippets by score if needed\n CollectionUtil.introSort(snippets, new Comparator<Snippet>() {\n public int compare(Snippet o1, Snippet o2) {",
"filename": "src/main/java/org/elasticsearch/search/highlight/PostingsHighlighter.java",
"status": "modified"
},
{
"diff": "@@ -19,31 +19,68 @@\n \n package org.elasticsearch.search.highlight;\n \n+import com.google.common.collect.Maps;\n import org.apache.lucene.search.Query;\n \n-import java.util.List;\n-import java.util.Map;\n-import java.util.Set;\n+import java.util.*;\n \n /**\n *\n */\n public class SearchContextHighlight {\n \n- private final List<Field> fields;\n+ private final Map<String, Field> fields;\n \n- public SearchContextHighlight(List<Field> fields) {\n- this.fields = fields;\n+ private boolean globalForceSource = false;\n+\n+ public SearchContextHighlight(Collection<Field> fields) {\n+ assert fields != null;\n+ this.fields = Maps.newHashMap();\n+ for (Field field : fields) {\n+ this.fields.put(field.field, field);\n+ }\n+ }\n+\n+ public Collection<Field> fields() {\n+ return fields.values();\n }\n \n- public List<Field> fields() {\n- return fields;\n+ public void globalForceSource(boolean globalForceSource) {\n+ this.globalForceSource = globalForceSource;\n+ }\n+\n+ public boolean forceSource(Field field) {\n+ if (globalForceSource) {\n+ return true;\n+ }\n+\n+ Field _field = fields.get(field.field);\n+ return _field == null ? false : _field.fieldOptions.forceSource;\n }\n \n public static class Field {\n- // Fields that default to null or -1 are often set to their real default in HighlighterParseElement#parse\n private final String field;\n+ private final FieldOptions fieldOptions;\n+\n+ Field(String field, FieldOptions fieldOptions) {\n+ assert field != null;\n+ assert fieldOptions != null;\n+ this.field = field;\n+ this.fieldOptions = fieldOptions;\n+ }\n+\n+ public String field() {\n+ return field;\n+ }\n+\n+ public FieldOptions fieldOptions() {\n+ return fieldOptions;\n+ }\n+ }\n \n+ public static class FieldOptions {\n+\n+ // Field options that default to null or -1 are often set to their real default in HighlighterParseElement#parse\n private int fragmentCharSize = -1;\n \n private int numberOfFragments = -1;\n@@ -82,164 +119,236 @@ public static class Field {\n \n private int phraseLimit = -1;\n \n- public Field(String field) {\n- this.field = field;\n- }\n-\n- public String field() {\n- return field;\n- }\n-\n public int fragmentCharSize() {\n return fragmentCharSize;\n }\n \n- public void fragmentCharSize(int fragmentCharSize) {\n- this.fragmentCharSize = fragmentCharSize;\n- }\n-\n public int numberOfFragments() {\n return numberOfFragments;\n }\n \n- public void numberOfFragments(int numberOfFragments) {\n- this.numberOfFragments = numberOfFragments;\n- }\n-\n public int fragmentOffset() {\n return fragmentOffset;\n }\n \n- public void fragmentOffset(int fragmentOffset) {\n- this.fragmentOffset = fragmentOffset;\n- }\n-\n public String encoder() {\n return encoder;\n }\n \n- public void encoder(String encoder) {\n- this.encoder = encoder;\n- }\n-\n public String[] preTags() {\n return preTags;\n }\n \n- public void preTags(String[] preTags) {\n- this.preTags = preTags;\n- }\n-\n public String[] postTags() {\n return postTags;\n }\n \n- public void postTags(String[] postTags) {\n- this.postTags = postTags;\n- }\n-\n public Boolean scoreOrdered() {\n return scoreOrdered;\n }\n \n- public void scoreOrdered(boolean scoreOrdered) {\n- this.scoreOrdered = scoreOrdered;\n- }\n-\n public Boolean highlightFilter() {\n return highlightFilter;\n }\n \n- public void highlightFilter(boolean highlightFilter) {\n- this.highlightFilter = highlightFilter;\n- }\n-\n public Boolean requireFieldMatch() {\n return requireFieldMatch;\n }\n \n- public void requireFieldMatch(boolean 
requireFieldMatch) {\n- this.requireFieldMatch = requireFieldMatch;\n- }\n-\n public String highlighterType() {\n return highlighterType;\n }\n \n- public void highlighterType(String type) {\n- this.highlighterType = type;\n- }\n-\n- public Boolean forceSource() {\n- return forceSource;\n- }\n-\n- public void forceSource(boolean forceSource) {\n- this.forceSource = forceSource;\n- }\n-\n public String fragmenter() {\n return fragmenter;\n }\n \n- public void fragmenter(String fragmenter) {\n- this.fragmenter = fragmenter;\n- }\n-\n public int boundaryMaxScan() {\n return boundaryMaxScan;\n }\n \n- public void boundaryMaxScan(int boundaryMaxScan) {\n- this.boundaryMaxScan = boundaryMaxScan;\n- }\n-\n public Character[] boundaryChars() {\n return boundaryChars;\n }\n \n- public void boundaryChars(Character[] boundaryChars) {\n- this.boundaryChars = boundaryChars;\n- }\n-\n public Query highlightQuery() {\n return highlightQuery;\n }\n \n- public void highlightQuery(Query highlightQuery) {\n- this.highlightQuery = highlightQuery;\n- }\n-\n public int noMatchSize() {\n return noMatchSize;\n }\n \n- public void noMatchSize(int noMatchSize) {\n- this.noMatchSize = noMatchSize;\n- }\n-\n public int phraseLimit() {\n return phraseLimit;\n }\n \n- public void phraseLimit(int phraseLimit) {\n- this.phraseLimit = phraseLimit;\n- }\n-\n public Set<String> matchedFields() {\n return matchedFields;\n }\n \n- public void matchedFields(Set<String> matchedFields) {\n- this.matchedFields = matchedFields;\n- }\n-\n public Map<String, Object> options() {\n return options;\n }\n \n- public void options(Map<String, Object> options) {\n- this.options = options;\n+ static class Builder {\n+\n+ private final FieldOptions fieldOptions = new FieldOptions();\n+\n+ Builder fragmentCharSize(int fragmentCharSize) {\n+ fieldOptions.fragmentCharSize = fragmentCharSize;\n+ return this;\n+ }\n+\n+ Builder numberOfFragments(int numberOfFragments) {\n+ fieldOptions.numberOfFragments = numberOfFragments;\n+ return this;\n+ }\n+\n+ Builder fragmentOffset(int fragmentOffset) {\n+ fieldOptions.fragmentOffset = fragmentOffset;\n+ return this;\n+ }\n+\n+ Builder encoder(String encoder) {\n+ fieldOptions.encoder = encoder;\n+ return this;\n+ }\n+\n+ Builder preTags(String[] preTags) {\n+ fieldOptions.preTags = preTags;\n+ return this;\n+ }\n+\n+ Builder postTags(String[] postTags) {\n+ fieldOptions.postTags = postTags;\n+ return this;\n+ }\n+\n+ Builder scoreOrdered(boolean scoreOrdered) {\n+ fieldOptions.scoreOrdered = scoreOrdered;\n+ return this;\n+ }\n+\n+ Builder highlightFilter(boolean highlightFilter) {\n+ fieldOptions.highlightFilter = highlightFilter;\n+ return this;\n+ }\n+\n+ Builder requireFieldMatch(boolean requireFieldMatch) {\n+ fieldOptions.requireFieldMatch = requireFieldMatch;\n+ return this;\n+ }\n+\n+ Builder highlighterType(String type) {\n+ fieldOptions.highlighterType = type;\n+ return this;\n+ }\n+\n+ Builder forceSource(boolean forceSource) {\n+ fieldOptions.forceSource = forceSource;\n+ return this;\n+ }\n+\n+ Builder fragmenter(String fragmenter) {\n+ fieldOptions.fragmenter = fragmenter;\n+ return this;\n+ }\n+\n+ Builder boundaryMaxScan(int boundaryMaxScan) {\n+ fieldOptions.boundaryMaxScan = boundaryMaxScan;\n+ return this;\n+ }\n+\n+ Builder boundaryChars(Character[] boundaryChars) {\n+ fieldOptions.boundaryChars = boundaryChars;\n+ return this;\n+ }\n+\n+ Builder highlightQuery(Query highlightQuery) {\n+ fieldOptions.highlightQuery = highlightQuery;\n+ return this;\n+ }\n+\n+ Builder noMatchSize(int 
noMatchSize) {\n+ fieldOptions.noMatchSize = noMatchSize;\n+ return this;\n+ }\n+\n+ Builder phraseLimit(int phraseLimit) {\n+ fieldOptions.phraseLimit = phraseLimit;\n+ return this;\n+ }\n+\n+ Builder matchedFields(Set<String> matchedFields) {\n+ fieldOptions.matchedFields = matchedFields;\n+ return this;\n+ }\n+\n+ Builder options(Map<String, Object> options) {\n+ fieldOptions.options = options;\n+ return this;\n+ }\n+\n+ FieldOptions build() {\n+ return fieldOptions;\n+ }\n+\n+ Builder merge(FieldOptions globalOptions) {\n+ if (fieldOptions.preTags == null && globalOptions.preTags != null) {\n+ fieldOptions.preTags = Arrays.copyOf(globalOptions.preTags, globalOptions.preTags.length);\n+ }\n+ if (fieldOptions.postTags == null && globalOptions.postTags != null) {\n+ fieldOptions.postTags = Arrays.copyOf(globalOptions.postTags, globalOptions.postTags.length);\n+ }\n+ if (fieldOptions.highlightFilter == null) {\n+ fieldOptions.highlightFilter = globalOptions.highlightFilter;\n+ }\n+ if (fieldOptions.scoreOrdered == null) {\n+ fieldOptions.scoreOrdered = globalOptions.scoreOrdered;\n+ }\n+ if (fieldOptions.fragmentCharSize == -1) {\n+ fieldOptions.fragmentCharSize = globalOptions.fragmentCharSize;\n+ }\n+ if (fieldOptions.numberOfFragments == -1) {\n+ fieldOptions.numberOfFragments = globalOptions.numberOfFragments;\n+ }\n+ if (fieldOptions.encoder == null) {\n+ fieldOptions.encoder = globalOptions.encoder;\n+ }\n+ if (fieldOptions.requireFieldMatch == null) {\n+ fieldOptions.requireFieldMatch = globalOptions.requireFieldMatch;\n+ }\n+ if (fieldOptions.boundaryMaxScan == -1) {\n+ fieldOptions.boundaryMaxScan = globalOptions.boundaryMaxScan;\n+ }\n+ if (fieldOptions.boundaryChars == null && globalOptions.boundaryChars != null) {\n+ fieldOptions.boundaryChars = Arrays.copyOf(globalOptions.boundaryChars, globalOptions.boundaryChars.length);\n+ }\n+ if (fieldOptions.highlighterType == null) {\n+ fieldOptions.highlighterType = globalOptions.highlighterType;\n+ }\n+ if (fieldOptions.fragmenter == null) {\n+ fieldOptions.fragmenter = globalOptions.fragmenter;\n+ }\n+ if ((fieldOptions.options == null || fieldOptions.options.size() == 0) && globalOptions.options != null) {\n+ fieldOptions.options = Maps.newHashMap(globalOptions.options);\n+ }\n+ if (fieldOptions.highlightQuery == null && globalOptions.highlightQuery != null) {\n+ fieldOptions.highlightQuery = globalOptions.highlightQuery;\n+ }\n+ if (fieldOptions.noMatchSize == -1) {\n+ fieldOptions.noMatchSize = globalOptions.noMatchSize;\n+ }\n+ if (fieldOptions.forceSource == null) {\n+ fieldOptions.forceSource = globalOptions.forceSource;\n+ }\n+ if (fieldOptions.phraseLimit == -1) {\n+ fieldOptions.phraseLimit = globalOptions.phraseLimit;\n+ }\n+\n+ return this;\n+ }\n }\n }\n }",
"filename": "src/main/java/org/elasticsearch/search/highlight/SearchContextHighlight.java",
"status": "modified"
},
{
"diff": "@@ -42,8 +42,8 @@ public HighlightField highlight(HighlighterContext highlighterContext) {\n List<Text> responses = Lists.newArrayList();\n responses.add(new StringText(\"standard response\"));\n \n- if (field.options() != null) {\n- for (Map.Entry<String, Object> entry : field.options().entrySet()) {\n+ if (field.fieldOptions().options() != null) {\n+ for (Map.Entry<String, Object> entry : field.fieldOptions().options().entrySet()) {\n responses.add(new StringText(\"field:\" + entry.getKey() + \":\" + entry.getValue()));\n }\n }",
"filename": "src/test/java/org/elasticsearch/search/highlight/CustomHighlighter.java",
"status": "modified"
},
{
"diff": "@@ -490,25 +490,31 @@ public void testGlobalHighlightingSettingsOverriddenAtFieldLevel() {\n assertHighlight(searchResponse, 0, \"field2\", 0, 1, equalTo(\"this is another <field2>test</field2>\"));\n }\n \n- @Test\n+ @Test //https://github.com/elasticsearch/elasticsearch/issues/5175\n public void testHighlightingOnWildcardFields() throws Exception {\n- createIndex(\"test\");\n+ assertAcked(prepareCreate(\"test\")\n+ .addMapping(\"type1\",\n+ \"field-postings\", \"type=string,index_options=offsets\",\n+ \"field-fvh\", \"type=string,term_vector=with_positions_offsets\",\n+ \"field-plain\", \"type=string\"));\n ensureGreen();\n \n client().prepareIndex(\"test\", \"type1\")\n- .setSource(\"field1\", \"this is a test\", \"field2\", \"this is another test\").get();\n+ .setSource(\"field-postings\", \"This is the first test sentence. Here is the second one.\",\n+ \"field-fvh\", \"This is the test with term_vectors\",\n+ \"field-plain\", \"This is the test for the plain highlighter\").get();\n refresh();\n \n logger.info(\"--> highlighting and searching on field*\");\n SearchSourceBuilder source = searchSource()\n- .query(termQuery(\"field1\", \"test\"))\n- .from(0).size(60).explain(true)\n- .highlight(highlight().field(\"field*\").order(\"score\").preTags(\"<xxx>\").postTags(\"</xxx>\"));\n+ .query(termQuery(\"field-plain\", \"test\"))\n+ .highlight(highlight().field(\"field*\").preTags(\"<xxx>\").postTags(\"</xxx>\"));\n \n- SearchResponse searchResponse = client().search(searchRequest(\"test\").source(source).searchType(QUERY_THEN_FETCH)).actionGet();\n+ SearchResponse searchResponse = client().search(searchRequest(\"test\").source(source)).actionGet();\n \n- assertHighlight(searchResponse, 0, \"field1\", 0, 1, equalTo(\"this is a <xxx>test</xxx>\"));\n- assertHighlight(searchResponse, 0, \"field2\", 0, 1, equalTo(\"this is another <xxx>test</xxx>\"));\n+ assertHighlight(searchResponse, 0, \"field-postings\", 0, 1, equalTo(\"This is the first <xxx>test</xxx> sentence.\"));\n+ assertHighlight(searchResponse, 0, \"field-fvh\", 0, 1, equalTo(\"This is the <xxx>test</xxx> with term_vectors\"));\n+ assertHighlight(searchResponse, 0, \"field-plain\", 0, 1, equalTo(\"This is the <xxx>test</xxx> for the plain highlighter\"));\n }\n \n @Test",
"filename": "src/test/java/org/elasticsearch/search/highlight/HighlighterSearchTests.java",
"status": "modified"
}
]
} |
{
"body": "To reproduce: \n- snapshot an index A\n- create index B with the same number of shards as A\n- close index B\n- restore A while renaming it to B\n- search index B\n\nObserved behavior:\n- the search fails with `ClusterBlockException[blocked by: [FORBIDDEN/4/index closed];]` error\n\nExpected behavior:\n- the search should work\n",
"comments": [],
"number": 5212,
"title": "Restore of an existing index using rename doesn't completly open the index after restore"
} | {
"body": "Closes #5212\n",
"number": 5213,
"review_comments": [],
"title": "Open correct (renamed) index on restore"
} | {
"commits": [
{
"message": "Restore process should replace the mapping and settings if index already exists\n\nCloses #5210"
},
{
"message": "Open correct (renamed) index on restore\n\nCloses #5212"
}
],
"files": [
{
"diff": "@@ -191,10 +191,11 @@ public ClusterState execute(ClusterState currentState) {\n \"] shard from snapshot with [\" + snapshotIndexMetaData.getNumberOfShards() + \"] shards\");\n }\n // Index exists and it's closed - open it in metadata and start recovery\n- IndexMetaData.Builder indexMdBuilder = IndexMetaData.builder(currentIndexMetaData).state(IndexMetaData.State.OPEN);\n+ IndexMetaData.Builder indexMdBuilder = IndexMetaData.builder(snapshotIndexMetaData).state(IndexMetaData.State.OPEN);\n+ indexMdBuilder.version(Math.max(snapshotIndexMetaData.version(), currentIndexMetaData.version() + 1));\n IndexMetaData updatedIndexMetaData = indexMdBuilder.index(renamedIndex).build();\n rtBuilder.addAsRestore(updatedIndexMetaData, restoreSource);\n- blocks.removeIndexBlock(index, INDEX_CLOSED_BLOCK);\n+ blocks.removeIndexBlock(renamedIndex, INDEX_CLOSED_BLOCK);\n mdBuilder.put(updatedIndexMetaData, true);\n }\n for (int shard = 0; shard < snapshotIndexMetaData.getNumberOfShards(); shard++) {",
"filename": "src/main/java/org/elasticsearch/snapshots/RestoreService.java",
"status": "modified"
},
{
"diff": "@@ -29,12 +29,15 @@\n import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n import org.elasticsearch.action.count.CountResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.routing.allocation.decider.FilterAllocationDecider;\n import org.elasticsearch.common.Strings;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n@@ -138,6 +141,54 @@ public void basicWorkFlowTest() throws Exception {\n assertThat(clusterState.getMetaData().hasIndex(\"test-idx-2\"), equalTo(false));\n }\n \n+ @Test\n+ public void restoreWithDifferentMappingsAndSettingsTest() throws Exception {\n+ Client client = client();\n+\n+ logger.info(\"--> creating repository\");\n+ PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDir(LifecycleScope.SUITE))\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))\n+ ).get();\n+ assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true));\n+\n+ logger.info(\"--> create index with foo type\");\n+ assertAcked(prepareCreate(\"test-idx\", 2, ImmutableSettings.builder().put(\"refresh_interval\", 10)));\n+\n+ assertAcked(client().admin().indices().preparePutMapping(\"test-idx\").setType(\"foo\").setSource(\"baz\", \"type=string\"));\n+ ensureGreen();\n+\n+ logger.info(\"--> snapshot it\");\n+ CreateSnapshotResponse createSnapshotResponse = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).setIndices(\"test-idx\").get();\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), equalTo(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+\n+ logger.info(\"--> delete the index and recreate it with bar type\");\n+ wipeIndices(\"test-idx\");\n+ assertAcked(prepareCreate(\"test-idx\", 2, ImmutableSettings.builder().put(\"refresh_interval\", 5)));\n+ assertAcked(client().admin().indices().preparePutMapping(\"test-idx\").setType(\"bar\").setSource(\"baz\", \"type=string\"));\n+ ensureGreen();\n+\n+ logger.info(\"--> close index\");\n+ client.admin().indices().prepareClose(\"test-idx\").get();\n+\n+ logger.info(\"--> restore all indices from the snapshot\");\n+ RestoreSnapshotResponse restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(true).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+ ensureGreen();\n+\n+ logger.info(\"--> assert that old mapping is restored\");\n+ ImmutableOpenMap<String, MappingMetaData> mappings = client().admin().cluster().prepareState().get().getState().getMetaData().getIndices().get(\"test-idx\").getMappings();\n+ 
assertThat(mappings.get(\"foo\"), notNullValue());\n+ assertThat(mappings.get(\"bar\"), nullValue());\n+\n+ logger.info(\"--> assert that old settings are restored\");\n+ GetSettingsResponse getSettingsResponse = client.admin().indices().prepareGetSettings(\"test-idx\").execute().actionGet();\n+ assertThat(getSettingsResponse.getSetting(\"test-idx\", \"index.refresh_interval\"), equalTo(\"10\"));\n+ }\n+\n @Test\n public void emptySnapshotTest() throws Exception {\n Client client = client();\n@@ -629,6 +680,19 @@ public void renameOnRestoreTest() throws Exception {\n assertThat(client.prepareCount(\"test-idx-1-copy\").get().getCount(), equalTo(100L));\n assertThat(client.prepareCount(\"test-idx-2-copy\").get().getCount(), equalTo(100L));\n \n+ logger.info(\"--> close just restored indices\");\n+ client.admin().indices().prepareClose(\"test-idx-1-copy\", \"test-idx-2-copy\").get();\n+\n+ logger.info(\"--> and try to restore these indices again\");\n+ restoreSnapshotResponse = client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\")\n+ .setRenamePattern(\"(.+)\").setRenameReplacement(\"$1-copy\").setWaitForCompletion(true).execute().actionGet();\n+ assertThat(restoreSnapshotResponse.getRestoreInfo().totalShards(), greaterThan(0));\n+\n+ ensureGreen();\n+ assertThat(client.prepareCount(\"test-idx-1-copy\").get().getCount(), equalTo(100L));\n+ assertThat(client.prepareCount(\"test-idx-2-copy\").get().getCount(), equalTo(100L));\n+\n+\n logger.info(\"--> close indices\");\n client.admin().indices().prepareClose(\"test-idx-1\", \"test-idx-2-copy\").get();\n ",
"filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java",
"status": "modified"
}
]
} |
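
The issue body in the row above describes the restore-with-rename reproduction only in prose. Below is a minimal Java sketch of that same sequence, intended purely as an illustration: it reuses client calls that already appear in the PR's `SharedClusterSnapshotRestoreTests` diff (`prepareCreateSnapshot`, `prepareClose`, `prepareRestoreSnapshot` with a rename pattern/replacement, `prepareCount`), while the wrapping class, method name, and the assumptions of an already-registered `test-repo` repository and a pre-existing `test-idx-1-copy` index are hypothetical scaffolding, not part of the original report.

```java
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.client.Client;

public class RestoreRenameRepro {

    // Sketch only: assumes a connected Client, a registered "test-repo" fs repository,
    // and an existing index "test-idx-1-copy" (e.g. left over from an earlier restore),
    // mirroring the reproduction steps quoted in issue #5212 above.
    static void reproduce(Client client) {
        // 1. Snapshot the source index.
        client.admin().cluster().prepareCreateSnapshot("test-repo", "test-snap")
                .setWaitForCompletion(true)
                .setIndices("test-idx-1")
                .get();

        // 2. Close the index that the snapshot will be restored into under a new name.
        client.admin().indices().prepareClose("test-idx-1-copy").get();

        // 3. Restore while renaming. Before the fix in this PR, the INDEX_CLOSED_BLOCK was
        //    removed from the original index name instead of the renamed one, leaving the
        //    restored index blocked (FORBIDDEN/4/index closed).
        RestoreSnapshotResponse restore = client.admin().cluster()
                .prepareRestoreSnapshot("test-repo", "test-snap")
                .setRenamePattern("(.+)")
                .setRenameReplacement("$1-copy")
                .setWaitForCompletion(true)
                .get();

        // 4. Searching/counting against the renamed index should now succeed instead of
        //    throwing ClusterBlockException.
        long hits = client.prepareCount("test-idx-1-copy").get().getCount();
    }
}
```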