issue: dict
pr: dict
pr_details: dict
{ "body": "GeoJSON specification allows 2 or more elements for a position (x, y [z, ...]).\nhttp://geojson.org/geojson-spec.html#positions\n\nWhen trying to index a Polygon with coordinates containing more than 2 elements, Elasticsearch throws : \n`org.elasticsearch.ElasticsearchParseException: Invalid number of points in LinearRing (found 2 - must be >= 4)`\n\nHere the gist for a simple test : \nhttps://gist.github.com/clement-tourriere/aa2abdc94de7d1456c58\n", "comments": [ { "body": "Thanks for reporting! While Elasticsearch doesn't spatially index anything beyond 2 dimension (yet) this is indeed a bug. Will look into throwing a warning that the z position will be ignored. The other option is to silently ignore it.\n", "created_at": "2015-02-03T13:30:02Z" }, { "body": "@nknize I don't think that a warning in the logs is a good solution. The user causes this but putting a warning in logs will not inform the user that something is not right with the request, and even if it did, in many cases the user indexing the document will not have access to the log files. Also, if someone indexes a bulk load of document with extra dimensions they are going to create massive amounts of logs without any knowledge that there is any warnings. Throwing an exception gives the user feedback saying that something is not right. We need a solution here that is obvious to the user. Maybe something like having an option in the mapping to either throw an error on extra dimensions or to ignore them?\n", "created_at": "2015-02-06T09:41:16Z" }, { "body": "Fixed in #10539 by silently ignoring values beyond the 2nd dimension.\n", "created_at": "2015-04-20T16:24:02Z" } ], "number": 9540, "title": "[GEO] GeoJSON parser doesn't support more than two elements in a position" }
{ "body": "This pr fixes GeoJSON parsing with more than 2 elements in coordinates.\n\nCloses #9540 \n", "number": 9542, "review_comments": [], "title": "Coordinates can contain more than two elements (x,y) in GeoJSON parser." }
{ "commits": [ { "message": "Coordinates can contain more than two elements (x,y) in GeoJSON parser.\nCloses #9540" } ], "files": [ { "diff": "@@ -240,16 +240,24 @@ public String toString() {\n * XContentParser\n */\n private static CoordinateNode parseCoordinates(XContentParser parser) throws IOException {\n+ /** http://geojson.org/geojson-spec.html#positions\n+ * A position is represented by an array of numbers.\n+ * There must be at least two elements, and may be more\n+ */\n+\n XContentParser.Token token = parser.nextToken();\n \n // Base cases\n if (token != XContentParser.Token.START_ARRAY && \n- token != XContentParser.Token.END_ARRAY && \n+ token != XContentParser.Token.END_ARRAY &&\n token != XContentParser.Token.VALUE_NULL) {\n double lon = parser.doubleValue();\n token = parser.nextToken();\n double lat = parser.doubleValue();\n- token = parser.nextToken();\n+ // Only lon and lat elements are parsed, others (if any) are skipped\n+ while (token != XContentParser.Token.END_ARRAY) {\n+ token = parser.nextToken();\n+ }\n return new CoordinateNode(new Coordinate(lon, lat));\n } else if (token == XContentParser.Token.VALUE_NULL) {\n throw new ElasticsearchIllegalArgumentException(\"coordinates cannot contain NULL values)\");", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -185,6 +185,31 @@ public void testParse_polygonNoHoles() throws IOException {\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n \n+ public void testParse_polygonWithCoordinateWithMoreThanTwoElements() throws IOException {\n+ String polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(100.0).value(1.0).endArray()\n+ .startArray().value(101.0).value(1.0).value(2.0).endArray()\n+ .startArray().value(101.0).value(0.0).endArray()\n+ .startArray().value(100.0).value(0.0).endArray()\n+ .startArray().value(100.0).value(1.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ List<Coordinate> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new Coordinate(100, 0));\n+\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, null);\n+ assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n+ }\n+\n public void testParse_invalidPoint() throws IOException {\n // test case 1: create an invalid point object with multipoint data format\n String invalidPoint1 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"point\")", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" } ] }
{ "body": "Hi there, I have a lot of error messages in logs like this:\n\n```\norg.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.search.query.QuerySearchResult]\nCaused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.search.query.QuerySearchResult]\n at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:152)\n at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:127)\n at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)\n at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)\n at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)\n at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)\n at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n at java.lang.Thread.run(Unknown Source)\nCaused by: java.lang.IndexOutOfBoundsException: Invalid combined index of 1471746, maximum is 64403\n at org.elasticsearch.common.netty.buffer.SlicedChannelBuffer.<init>(SlicedChannelBuffer.java:46)\n at org.elasticsearch.common.netty.buffer.HeapChannelBuffer.slice(HeapChannelBuffer.java:201)\n at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readBytesReference(ChannelBufferStreamInput.java:56)\n at org.elasticsearch.common.io.stream.StreamInput.readText(StreamInput.java:222)\n at org.elasticsearch.common.io.stream.HandlesStreamInput.readSharedText(HandlesStreamInput.java:69)\n at org.elasticsearch.search.SearchShardTarget.readFrom(SearchShardTarget.java:103)\n at 
org.elasticsearch.search.SearchShardTarget.readSearchShardTarget(SearchShardTarget.java:87)\n at org.elasticsearch.search.internal.InternalSearchHits.readFrom(InternalSearchHits.java:217)\n at org.elasticsearch.search.internal.InternalSearchHits.readFrom(InternalSearchHits.java:203)\n at org.elasticsearch.search.internal.InternalSearchHits.readSearchHits(InternalSearchHits.java:197)\n at org.elasticsearch.search.aggregations.metrics.tophits.InternalTopHits.readFrom(InternalTopHits.java:137)\n at org.elasticsearch.search.aggregations.metrics.tophits.InternalTopHits$1.readResult(InternalTopHits.java:50)\n at org.elasticsearch.search.aggregations.metrics.tophits.InternalTopHits$1.readResult(InternalTopHits.java:46)\n at org.elasticsearch.search.aggregations.InternalAggregations.readFrom(InternalAggregations.java:190)\n at org.elasticsearch.search.aggregations.InternalAggregations.readAggregations(InternalAggregations.java:172)\n at org.elasticsearch.search.aggregations.bucket.terms.LongTerms.readFrom(LongTerms.java:148)\n at org.elasticsearch.search.aggregations.bucket.terms.LongTerms$1.readResult(LongTerms.java:48)\n at org.elasticsearch.search.aggregations.bucket.terms.LongTerms$1.readResult(LongTerms.java:44)\n at org.elasticsearch.search.aggregations.InternalAggregations.readFrom(InternalAggregations.java:190)\n at org.elasticsearch.search.aggregations.InternalAggregations.readAggregations(InternalAggregations.java:172)\n at org.elasticsearch.search.query.QuerySearchResult.readFromWithId(QuerySearchResult.java:175)\n at org.elasticsearch.search.query.QuerySearchResult.readFrom(QuerySearchResult.java:162)\n at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:150)\n ... 23 more\n```\n\nQuery uses search type \"count\", uses a lot of filters, and have multiple aggregations, including top hits agg.\nIf I'm retry equals query to ES, I see correct answer without error messages in log in 99,9% times.\nIf I'm not use top hits agg, or standard search type \"query and fetch\", threre's no error message.\n\nI have ES test cluster with CentOs 6.4 with xeon and 20gb memory, Oracle Java 1.8.0_25, ElasticSearch 1.4.2 with 10gb heap, two instances.\n", "comments": [ { "body": "Hi @mitallast \n\nThanks for reporting. Could you provide the smallest recreation of the problem that you can come up with? Will help with debugging.\n\nthanks\n\n@martijnvg sounds like it could be a problem with top-hits?\n", "created_at": "2015-01-15T20:21:04Z" }, { "body": "@clintongormley Yes, this does look like a bug in `top_hits` agg. 
\n@mitallast A recreation would be helpful\n", "created_at": "2015-01-19T08:13:14Z" }, { "body": "I'm still try to reproduce like a integration test, but without success.\nBut at our test cluster, this bug also manually reproduced, and looks like this:\n\n2 nodes 1.4.2 with centos and Oracle JVM 8, first node contains shards 1/4/5/7, second contains 0/2/3/6 (8 total, 0 replicas)\nquery:\n\n```\n/index_name/type_name/_search?search_type=count&routing=3016\n{\n \"query\": {\n \"match_all\": {}\n },\n \"aggregations\": {\n \"clients\": {\n \"terms\": {\n \"field\": \"client_id\",\n \"size\": 5\n },\n \"aggregations\": {\n \"best\": {\n \"top_hits\": {\n \"size\": 10\n }\n }\n }\n }\n }\n}\n```\n\nError output:\n\n```\n{\nerror: SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[pBrJ6zkHS8CKfAopFnmKUg][online][7]: RemoteTransportException[Failed to deserialize response of type [org.elasticsearch.search.query.QuerySearchResult]]; nested: TransportSerializationException[Failed to deserialize response of type [org.elasticsearch.search.query.QuerySearchResult]]; nested: IndexOutOfBoundsException[Invalid index: 214 - Bytes needed: 1470744, maximum is 48585]; }]\nstatus: 500\n}\n```\n\nI have this error only if query executed at second node. If first, all is OK.\nWith search shards api, looks like 3016 route to 7th shard. Second node does not contain 7th, but first does.\n\nAny ideas?\n", "created_at": "2015-01-19T12:58:34Z" }, { "body": "Sorry, I can't show our index data - DNA restrictions. I have not find any reproduce test data.\n\nI'm reproduce it in debugger Intellij IDEA, and try to find where is an mistake.\n\nSeems like `SearchShardTarget#readFrom` code contains extra reads of two bytes with comparing to `SearchShardTarget#writeTo`\n\nSee at screenshots:\nhttps://www.dropbox.com/s/7ukfm9ozlg0rxzb/%D0%A1%D0%BA%D1%80%D0%B8%D0%BD%D1%88%D0%BE%D1%82%202015-01-19%2019.52.16.png?dl=0\nhttps://www.dropbox.com/s/bux416lf3gnzfpg/%D0%A1%D0%BA%D1%80%D0%B8%D0%BD%D1%88%D0%BE%D1%82%202015-01-19%2020.01.28.png?dl=0\n\n`int length` in line 221 must be an `22` , but `ChannelBufferStreamInput.buffer.readerIndex = 439`. Looks like It must be 437 (4 bytes for int, value 22 presents as `0-0-0-22` in byte-as-decimal view). 
Maybe somewhere uses `writeVInt` for write, and `readInt` for read, or something like this.\n\nLook:\nhttps://www.dropbox.com/s/gnuntjuamw1drjp/%D0%A1%D0%BA%D1%80%D0%B8%D0%BD%D1%88%D0%BE%D1%82%202015-01-19%2020.02.29.png?dl=0\nEarlier in program stack `InternalSearchHits#readFrom` lines between 207 and 210 have a correct parsed values for `totalHits`, `maxScore`, `size` and `lookupSize`.\n", "created_at": "2015-01-19T17:08:01Z" }, { "body": "Maybe I found mistake:\n\n`org.elasticsearch.search.SearchShardTarget` has this correct methods:\n\n```\n @Override\n public void readFrom(StreamInput in) throws IOException {\n if (in.readBoolean()) {\n nodeId = in.readSharedText();\n }\n index = in.readSharedText();\n shardId = in.readVInt();\n }\n\n @Override\n public void writeTo(StreamOutput out) throws IOException {\n if (nodeId == null) {\n out.writeBoolean(false);\n } else {\n out.writeBoolean(true);\n out.writeSharedText(nodeId);\n }\n out.writeSharedText(index);\n out.writeVInt(shardId);\n }\n```\n\nBut when ES at first time execute no-cached query, argument for `writeTo` is an `BytesStreamOutput` with parent class `StreamOutput` contains:\n\n```\n public void writeText(Text text) throws IOException {\n if (!text.hasBytes()) {\n final String string = text.string();\n spare.copyChars(string);\n writeInt(spare.length());\n write(spare.bytes(), 0, spare.length());\n } else {\n BytesReference bytes = text.bytes();\n writeInt(bytes.length());\n bytes.writeTo(this);\n }\n }\n\n public void writeSharedText(Text text) throws IOException {\n writeText(text);\n }\n```\n\nIt means that `org.elasticsearch.common.io.stream.StreamOutput#writeSharedText` just proxy to `org.elasticsearch.common.io.stream.StreamOutput#writeText`\n\nBut when it reads at next node, uses a HandlesStreamInput with this override implementation\n\n```\n@Override\n public Text readSharedText() throws IOException {\n byte b = in.readByte();\n if (b == 0) {\n int handle = in.readVInt();\n Text s = in.readText();\n handlesText.put(handle, s);\n return s;\n } else if (b == 1) {\n return handlesText.get(in.readVInt());\n } else if (b == 2) {\n return in.readText();\n } else {\n throw new IOException(\"Expected handle header, got [\" + b + \"]\");\n }\n }\n```\n\nThis two implementations are not byte compatible together. \nMaybe HandlesStreamInput should not override `readSharedText`, but should only `readSharedString` ?\n", "created_at": "2015-01-20T10:23:37Z" }, { "body": "Unfortunately I couldn't reproduce it either, but I can confirm the issue exactly as mitallast described. Not setting search_type to \"count\" makes it disappear.\n", "created_at": "2015-02-11T09:29:34Z" }, { "body": "@mitallast you are absolutely right. I found the same issue on a different occasion and must have missed this issue. We fixed this lately and removed the `read/writeSharedText` and friends in master completely. For 1.x and the upcoming 1.4.3 release this is fixed by #9500\n", "created_at": "2015-02-11T09:34:06Z" }, { "body": "> Unfortunately I couldn't reproduce it either, but I can confirm the issue exactly as mitallast described. Not setting search_type to \"count\" makes it disappear.\n\nyeah we don't use the query cache if `search_type` is not `count` :)\n", "created_at": "2015-02-11T09:35:08Z" } ], "number": 9294, "title": "Failed to deserialize response when using Top Hits aggregation with search_type \"count\"" }
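The root cause traced in the comments above is a writer and reader that disagree on the byte layout: `writeSharedText` just delegates to `writeText` and emits a plain length-prefixed string, while `HandlesStreamInput.readSharedText` expects a one-byte handle header first, so every subsequent read is misaligned. A minimal illustration of that failure mode using plain `java.io` streams (not the Elasticsearch stream classes):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class MismatchedFormatSketch {
    public static void main(String[] args) throws IOException {
        // Writer: length-prefixed UTF-8 string, no header byte.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        byte[] text = "my-index".getBytes(StandardCharsets.UTF_8);
        out.writeInt(text.length);
        out.write(text);
        out.writeInt(3); // next field, e.g. a shard id
        out.flush();

        // Reader: wrongly expects a one-byte header before the length, so the
        // length it reads is assembled from misaligned bytes. A bogus length
        // like this is what later surfaces as the IndexOutOfBoundsException
        // ("Invalid combined index ...") when the buffer is sliced.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        byte header = in.readByte();  // consumes the first byte of the length
        int length = in.readInt();    // garbage length
        System.out.println("header=" + header + " length=" + length);
    }
}
```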
{ "body": "The query-cache has an optimization to not deserialize the bytes at the shard\nlevel. However this is a bit fragile since it assumes that serialized streams\ncan be concatenated (which is not the case with shared strings) and also does\nnot update the QueryResult object that is held by the SearchContext. So you\nneed to make sure to use the right one.\n\nWith this change, the query cache just deserializes bytes into the QueryResult\nobject from the context.\n\nCloses #9294\n", "number": 9500, "review_comments": [], "title": "Remove query-cache serialization optimization." }
{ "commits": [ { "message": "Search: Remove query-cache serialization optimization.\n\nThe query-cache has an optimization to not deserialize the bytes at the shard\nlevel. However this is a bit fragile since it assumes that serialized streams\ncan be concatenanted (which is not the case with shared strings) and also does\nnot update the QueryResult object that is held by the SearchContext. So you\nneed to make sure to use the right one.\n\nWith this change, the query cache just deserializes bytes into the QueryResult\nobject from the context." } ], "files": [ { "diff": "@@ -32,37 +32,26 @@\n import org.apache.lucene.util.Accountable;\n import org.apache.lucene.util.RamUsageEstimator;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n-import org.elasticsearch.ElasticsearchIllegalStateException;\n-import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.bytes.BytesReference;\n-import org.elasticsearch.common.bytes.PagedBytesReference;\n-import org.elasticsearch.common.bytes.ReleasablePagedBytesReference;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n-import org.elasticsearch.common.io.stream.ReleasableBytesStreamOutput;\n-import org.elasticsearch.common.io.stream.StreamInput;\n-import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.MemorySizeValue;\n import org.elasticsearch.common.unit.TimeValue;\n-import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.concurrent.ConcurrentCollections;\n import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException;\n import org.elasticsearch.index.shard.IndexShard;\n import org.elasticsearch.index.shard.IndexShardState;\n-import org.elasticsearch.search.SearchShardTarget;\n import org.elasticsearch.search.internal.SearchContext;\n import org.elasticsearch.search.internal.ShardSearchRequest;\n import org.elasticsearch.search.query.QueryPhase;\n import org.elasticsearch.search.query.QuerySearchResult;\n-import org.elasticsearch.search.query.QuerySearchResultProvider;\n import org.elasticsearch.threadpool.ThreadPool;\n \n-import java.io.IOException;\n import java.util.Collection;\n import java.util.Collections;\n import java.util.Iterator;\n@@ -217,11 +206,12 @@ public boolean canCache(ShardSearchRequest request, SearchContext context) {\n }\n \n /**\n- * Loads the cache result, computing it if needed by executing the query phase. The combination of load + compute allows\n+ * Loads the cache result, computing it if needed by executing the query phase and otherwise deserializing the cached\n+ * value into the {@link SearchContext#queryResult() context's query result}. 
The combination of load + compute allows\n * to have a single load operation that will cause other requests with the same key to wait till its loaded an reuse\n * the same cache.\n */\n- public QuerySearchResultProvider load(final ShardSearchRequest request, final SearchContext context, final QueryPhase queryPhase) throws Exception {\n+ public void loadIntoContext(final ShardSearchRequest request, final SearchContext context, final QueryPhase queryPhase) throws Exception {\n assert canCache(request, context);\n Key key = buildKey(request, context);\n Loader loader = new Loader(queryPhase, context, key);\n@@ -238,10 +228,11 @@ public QuerySearchResultProvider load(final ShardSearchRequest request, final Se\n }\n } else {\n key.shard.queryCache().onHit();\n+ // restore the cached query result into the context\n+ final QuerySearchResult result = context.queryResult();\n+ result.readFromWithId(context.id(), value.reference.streamInput());\n+ result.shardTarget(context.shardTarget());\n }\n-\n- // try and be smart, and reuse an already loaded and constructed QueryResult of in VM execution\n- return new BytesQuerySearchResult(context.id(), context.shardTarget(), value.reference, loader.isLoaded() ? context.queryResult() : null);\n }\n \n private static class Loader implements Callable<Value> {\n@@ -278,7 +269,6 @@ public Value call() throws Exception {\n // for now, keep the paged data structure, which might have unused bytes to fill a page, but better to keep\n // the memory properly paged instead of having varied sized bytes\n final BytesReference reference = out.bytes();\n- assert verifyCacheSerializationSameAsQueryResult(reference, context, context.queryResult());\n loaded = true;\n Value value = new Value(reference, out.ramBytesUsed());\n key.shard.queryCache().onCached(key, value);\n@@ -459,89 +449,11 @@ synchronized void reap() {\n }\n }\n \n- private static boolean verifyCacheSerializationSameAsQueryResult(BytesReference cacheData, SearchContext context, QuerySearchResult result) throws Exception {\n- BytesStreamOutput out1 = new BytesStreamOutput();\n- new BytesQuerySearchResult(context.id(), context.shardTarget(), cacheData).writeTo(out1);\n- BytesStreamOutput out2 = new BytesStreamOutput();\n- result.writeTo(out2);\n- return out1.bytes().equals(out2.bytes());\n- }\n-\n private static Key buildKey(ShardSearchRequest request, SearchContext context) throws Exception {\n // TODO: for now, this will create different keys for different JSON order\n // TODO: tricky to get around this, need to parse and order all, which can be expensive\n return new Key(context.indexShard(),\n ((DirectoryReader) context.searcher().getIndexReader()).getVersion(),\n request.cacheKey());\n }\n-\n- /**\n- * this class aim is to just provide an on the wire *write* format that is the same as {@link QuerySearchResult}\n- * and also provide a nice wrapper for in node communication for an already constructed {@link QuerySearchResult}.\n- */\n- private static class BytesQuerySearchResult extends QuerySearchResultProvider {\n-\n- private long id;\n- private SearchShardTarget shardTarget;\n- private BytesReference data;\n-\n- private transient QuerySearchResult result;\n-\n- private BytesQuerySearchResult(long id, SearchShardTarget shardTarget, BytesReference data) {\n- this(id, shardTarget, data, null);\n- }\n-\n- private BytesQuerySearchResult(long id, SearchShardTarget shardTarget, BytesReference data, QuerySearchResult result) {\n- this.id = id;\n- this.shardTarget = shardTarget;\n- this.data = data;\n- this.result = 
result;\n- }\n-\n- @Override\n- public boolean includeFetch() {\n- return false;\n- }\n-\n- @Override\n- public QuerySearchResult queryResult() {\n- if (result == null) {\n- result = new QuerySearchResult(id, shardTarget);\n- try {\n- result.readFromWithId(id, data.streamInput());\n- } catch (Exception e) {\n- throw new ElasticsearchParseException(\"failed to parse a cached query\", e);\n- }\n- }\n- return result;\n- }\n-\n- @Override\n- public long id() {\n- return id;\n- }\n-\n- @Override\n- public SearchShardTarget shardTarget() {\n- return shardTarget;\n- }\n-\n- @Override\n- public void shardTarget(SearchShardTarget shardTarget) {\n- this.shardTarget = shardTarget;\n- }\n-\n- @Override\n- public void readFrom(StreamInput in) throws IOException {\n- throw new ElasticsearchIllegalStateException(\"readFrom should not be called\");\n- }\n-\n- @Override\n- public void writeTo(StreamOutput out) throws IOException {\n- super.writeTo(out);\n- out.writeLong(id);\n-// shardTarget.writeTo(out); not needed\n- data.writeTo(out); // we need to write teh bytes as is, to be the same as QuerySearchResult\n- }\n- }\n }", "filename": "src/main/java/org/elasticsearch/indices/cache/query/IndicesQueryCache.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n import com.google.common.base.Charsets;\n import com.google.common.collect.ImmutableMap;\n+\n import org.apache.lucene.index.IndexOptions;\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n@@ -271,21 +272,27 @@ public ScrollQueryFetchSearchResult executeScan(InternalScrollSearchRequest requ\n }\n }\n \n+ /**\n+ * Try to load the query results from the cache or execute the query phase directly if the cache cannot be used.\n+ */\n+ private void loadOrExecuteQueryPhase(final ShardSearchRequest request, final SearchContext context,\n+ final QueryPhase queryPhase) throws Exception {\n+ final boolean canCache = indicesQueryCache.canCache(request, context);\n+ if (canCache) {\n+ indicesQueryCache.loadIntoContext(request, context, queryPhase);\n+ } else {\n+ queryPhase.execute(context);\n+ }\n+ }\n+\n public QuerySearchResultProvider executeQueryPhase(ShardSearchRequest request) throws ElasticsearchException {\n final SearchContext context = createAndPutContext(request);\n try {\n context.indexShard().searchService().onPreQueryPhase(context);\n long time = System.nanoTime();\n contextProcessing(context);\n \n- QuerySearchResultProvider result;\n- boolean canCache = indicesQueryCache.canCache(request, context);\n- if (canCache) {\n- result = indicesQueryCache.load(request, context, queryPhase);\n- } else {\n- queryPhase.execute(context);\n- result = context.queryResult();\n- }\n+ loadOrExecuteQueryPhase(request, context, queryPhase);\n \n if (context.searchType() == SearchType.COUNT) {\n freeContext(context.id());\n@@ -294,7 +301,7 @@ public QuerySearchResultProvider executeQueryPhase(ShardSearchRequest request) t\n }\n context.indexShard().searchService().onQueryPhase(context, System.nanoTime() - time);\n \n- return result;\n+ return context.queryResult();\n } catch (Throwable e) {\n // execution exception can happen while loading the cache, strip it\n if (e instanceof ExecutionException) {\n@@ -989,11 +996,7 @@ public void run() {\n if (canCache != top) {\n return;\n }\n- if (canCache) {\n- indicesQueryCache.load(request, context, queryPhase);\n- } else {\n- queryPhase.execute(context);\n- }\n+ loadOrExecuteQueryPhase(request, context, 
queryPhase);\n long took = System.nanoTime() - now;\n if (indexShard.warmerService().logger().isTraceEnabled()) {\n indexShard.warmerService().logger().trace(\"warmed [{}], took [{}]\", entry.name(), TimeValue.timeValueNanos(took));", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,69 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.indices.stats;\n+\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.indices.cache.query.IndicesQueryCache;\n+import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n+import org.elasticsearch.search.aggregations.bucket.histogram.Histogram.Bucket;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+\n+import java.util.List;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n+import static org.hamcrest.Matchers.greaterThan;\n+\n+public class IndicesQueryCacheTests extends ElasticsearchIntegrationTest {\n+\n+ // One of the primary purposes of the query cache is to cache aggs results\n+ public void testCacheAggs() throws Exception {\n+ assertAcked(client().admin().indices().prepareCreate(\"index\").setSettings(IndicesQueryCache.INDEX_CACHE_QUERY_ENABLED, true).get());\n+ indexRandom(true,\n+ client().prepareIndex(\"index\", \"type\").setSource(\"f\", 4),\n+ client().prepareIndex(\"index\", \"type\").setSource(\"f\", 6),\n+ client().prepareIndex(\"index\", \"type\").setSource(\"f\", 7));\n+\n+ final SearchResponse r1 = client().prepareSearch(\"index\").setSearchType(SearchType.COUNT)\n+ .addAggregation(histogram(\"histo\").field(\"f\").interval(2)).get();\n+\n+ // The cached is actually used\n+ assertThat(client().admin().indices().prepareStats(\"index\").setQueryCache(true).get().getTotal().getQueryCache().getMemorySizeInBytes(), greaterThan(0l));\n+\n+ for (int i = 0; i < 10; ++i) {\n+ final SearchResponse r2 = client().prepareSearch(\"index\").setSearchType(SearchType.COUNT)\n+ .addAggregation(histogram(\"histo\").field(\"f\").interval(2)).get();\n+ Histogram h1 = r1.getAggregations().get(\"histo\");\n+ Histogram h2 = r2.getAggregations().get(\"histo\");\n+ final List<? extends Bucket> buckets1 = h1.getBuckets();\n+ final List<? 
extends Bucket> buckets2 = h2.getBuckets();\n+ assertEquals(buckets1.size(), buckets2.size());\n+ for (int j = 0; j < buckets1.size(); ++j) {\n+ final Bucket b1 = buckets1.get(j);\n+ final Bucket b2 = buckets2.get(j);\n+ assertEquals(b1.getKey(), b2.getKey());\n+ assertEquals(b1.getDocCount(), b2.getDocCount());\n+ }\n+ }\n+ }\n+\n+}", "filename": "src/test/java/org/elasticsearch/indices/stats/IndicesQueryCacheTests.java", "status": "added" } ] }
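The diff above removes the lazy `BytesQuerySearchResult` wrapper and replaces it with `loadIntoContext`, which either runs the query phase and caches the serialized result, or deserializes the cached bytes straight into the context's `QueryResult`, so there is exactly one authoritative result object. A condensed, hypothetical sketch of that shape (the types and method names below are stand-ins, not the real Elasticsearch classes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for a result object that can be restored from serialized bytes.
class QueryResult {
    long id;
    byte[] payload = new byte[0];

    void readFrom(long id, byte[] bytes) { // analogous to readFromWithId
        this.id = id;
        this.payload = bytes.clone();
    }

    byte[] writeTo() {
        return payload.clone();
    }
}

public class QueryCacheSketch {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    // Cache miss: execute the query phase (which fills the context's result)
    // and cache its serialized form. Cache hit: restore the cached bytes into
    // the context's own result instead of handing back a separate wrapper.
    void loadIntoContext(String key, long contextId, QueryResult contextResult, Runnable executeQueryPhase) {
        byte[] cached = cache.get(key);
        if (cached == null) {
            executeQueryPhase.run();
            cache.put(key, contextResult.writeTo());
        } else {
            contextResult.readFrom(contextId, cached);
        }
    }
}
```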
{ "body": "Setting the merge_factor, and other merge policies, from within the YML file are always overridden by the defaults. There is a workaround, which is to set them in a base index template, which will cause them to be taken. This is only an issue in 1.4.0 and 1.4.1.\n\nThis came up when trying to set `index.merge.policy.merge_factor` in `elasticsearch.yml` to a value other than `10`, which is the default. Peaking at the code, the sibling settings are also effected (e.g., `index.merge.policy.max_merge_size`).\n\nFor example, in an `elasticsearch.yml`:\n\n``` yml\nindex.merge.policy.type: log_byte_size \nindex.merge.policy.merge_factor: 30 \n```\n\nAfter creating any index, you will see something along the lines of\n\n```\n[2014-12-10 00:00:00,920][INFO ][cluster.metadata ] [cluster-name] [index-name-2014.12.10] creating index, cause [auto(bulk api)], shards [4]/[2], mappings [] \n[2014-12-10 00:00:03,664][INFO ][index.merge.policy ] [cluster-name] [index-name-2014.12.10][3] updating merge_factor from [30] to [10]\n```\n\nNotice that it's reverting the `merge_factor` (and it's not just a mistake in the log output).\n\nAll of the providers suffer the from the same issue:\n- [`LogDocMergePolicyProvider`](https://github.com/elasticsearch/elasticsearch/blob/v1.4.0/src/main/java/org/elasticsearch/index/merge/policy/LogDocMergePolicyProvider.java#L111)\n- [`LogByteSizeMergePolicyProvider`](https://github.com/elasticsearch/elasticsearch/blob/v1.4.0/src/main/java/org/elasticsearch/index/merge/policy/LogByteSizeMergePolicyProvider.java#L111)\n- [`TieredMergePolicyProvider`](https://github.com/elasticsearch/elasticsearch/blob/v1.4.0/src/main/java/org/elasticsearch/index/merge/policy/TieredMergePolicyProvider.java#L124)\n\nThey all get the value set at index-creation time -- defaulting it to the global default if it's unset -- then compare to see if it's different from the global value. If they're different, it uses the index-creation value, which means the global default if it is unset.\n\nThe fix is simple, the passed in \"default\" should just be the global value rather than the global default.\n", "comments": [], "number": 8890, "title": "Merge policy settings are ignored when set in YML" }
{ "body": "Due to some unreleased refactorings we lost the persitence of\na perviously set values in TieredMPProvider. This commit adds this\nback and adds a simple unittest.\n\nCloses #8890\n", "number": 9497, "review_comments": [ { "body": "I think we need to add a check that other settings do not spring back to their defaults, which I believe was the original issue (update one setting and have all the rest go back to default).\n", "created_at": "2015-01-30T10:28:29Z" }, { "body": "see the bottom of the test there is a update call with empty settings\n", "created_at": "2015-01-30T12:07:36Z" }, { "body": "missed that. All good then. Sorry for the noise.\n", "created_at": "2015-01-30T12:09:37Z" } ], "title": "Reset TieredMP settings only if the value actually changed" }
{ "commits": [ { "message": "Reset MergePolicProvider settings only if the value actually changed\n\nDue to some unreleased refactorings we lost the persitence of\na perviously set values in MergePolicyProvider. This commit adds this\nback and adds a simple unittest.\n\nCloses #8890" } ], "files": [ { "diff": "@@ -41,8 +41,8 @@ public class LogByteSizeMergePolicyProvider extends AbstractMergePolicyProvider<\n private final ApplySettings applySettings = new ApplySettings();\n private final LogByteSizeMergePolicy mergePolicy = new LogByteSizeMergePolicy();\n \n- private static final ByteSizeValue DEFAULT_MIN_MERGE_SIZE = new ByteSizeValue((long) (LogByteSizeMergePolicy.DEFAULT_MIN_MERGE_MB * 1024 * 1024), ByteSizeUnit.BYTES);\n- private static final ByteSizeValue DEFAULT_MAX_MERGE_SIZE = new ByteSizeValue((long) LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_MB, ByteSizeUnit.MB);\n+ public static final ByteSizeValue DEFAULT_MIN_MERGE_SIZE = new ByteSizeValue((long) (LogByteSizeMergePolicy.DEFAULT_MIN_MERGE_MB * 1024 * 1024), ByteSizeUnit.BYTES);\n+ public static final ByteSizeValue DEFAULT_MAX_MERGE_SIZE = new ByteSizeValue((long) LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_MB, ByteSizeUnit.MB);\n \n @Inject\n public LogByteSizeMergePolicyProvider(Store store, IndexSettingsService indexSettingsService) {\n@@ -88,35 +88,35 @@ class ApplySettings implements IndexSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n double oldMinMergeSizeMB = mergePolicy.getMinMergeMB();\n- ByteSizeValue minMergeSize = settings.getAsBytesSize(INDEX_MERGE_POLICY_MIN_MERGE_SIZE, DEFAULT_MIN_MERGE_SIZE);\n- if (minMergeSize.mbFrac() != oldMinMergeSizeMB) {\n+ ByteSizeValue minMergeSize = settings.getAsBytesSize(INDEX_MERGE_POLICY_MIN_MERGE_SIZE, null);\n+ if (minMergeSize != null && minMergeSize.mbFrac() != oldMinMergeSizeMB) {\n logger.info(\"updating min_merge_size from [{}mb] to [{}]\", oldMinMergeSizeMB, minMergeSize);\n mergePolicy.setMinMergeMB(minMergeSize.mbFrac());\n }\n \n double oldMaxMergeSizeMB = mergePolicy.getMaxMergeMB();\n- ByteSizeValue maxMergeSize = settings.getAsBytesSize(INDEX_MERGE_POLICY_MAX_MERGE_SIZE, DEFAULT_MAX_MERGE_SIZE);\n- if (maxMergeSize.mbFrac() != oldMaxMergeSizeMB) {\n+ ByteSizeValue maxMergeSize = settings.getAsBytesSize(INDEX_MERGE_POLICY_MAX_MERGE_SIZE, null);\n+ if (maxMergeSize != null && maxMergeSize.mbFrac() != oldMaxMergeSizeMB) {\n logger.info(\"updating max_merge_size from [{}mb] to [{}]\", oldMaxMergeSizeMB, maxMergeSize);\n mergePolicy.setMaxMergeMB(maxMergeSize.mbFrac());\n }\n \n int oldMaxMergeDocs = mergePolicy.getMaxMergeDocs();\n- int maxMergeDocs = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_DOCS, LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS);\n+ int maxMergeDocs = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_DOCS, oldMaxMergeDocs);\n if (maxMergeDocs != oldMaxMergeDocs) {\n logger.info(\"updating max_merge_docs from [{}] to [{}]\", oldMaxMergeDocs, maxMergeDocs);\n mergePolicy.setMaxMergeDocs(maxMergeDocs);\n }\n \n int oldMergeFactor = mergePolicy.getMergeFactor();\n- int mergeFactor = settings.getAsInt(INDEX_MERGE_POLICY_MERGE_FACTOR, LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR);\n+ int mergeFactor = settings.getAsInt(INDEX_MERGE_POLICY_MERGE_FACTOR, oldMergeFactor);\n if (mergeFactor != oldMergeFactor) {\n logger.info(\"updating merge_factor from [{}] to [{}]\", oldMergeFactor, mergeFactor);\n mergePolicy.setMergeFactor(mergeFactor);\n }\n \n boolean oldCalibrateSizeByDeletes = 
mergePolicy.getCalibrateSizeByDeletes();\n- boolean calibrateSizeByDeletes = settings.getAsBoolean(INDEX_MERGE_POLICY_CALIBRATE_SIZE_BY_DELETES, true);\n+ boolean calibrateSizeByDeletes = settings.getAsBoolean(INDEX_MERGE_POLICY_CALIBRATE_SIZE_BY_DELETES, oldCalibrateSizeByDeletes);\n if (calibrateSizeByDeletes != oldCalibrateSizeByDeletes) {\n logger.info(\"updating calibrate_size_by_deletes from [{}] to [{}]\", oldCalibrateSizeByDeletes, calibrateSizeByDeletes);\n mergePolicy.setCalibrateSizeByDeletes(calibrateSizeByDeletes);", "filename": "src/main/java/org/elasticsearch/index/merge/policy/LogByteSizeMergePolicyProvider.java", "status": "modified" }, { "diff": "@@ -85,28 +85,28 @@ class ApplySettings implements IndexSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n int oldMinMergeDocs = mergePolicy.getMinMergeDocs();\n- int minMergeDocs = settings.getAsInt(INDEX_MERGE_POLICY_MIN_MERGE_DOCS, LogDocMergePolicy.DEFAULT_MIN_MERGE_DOCS);\n+ int minMergeDocs = settings.getAsInt(INDEX_MERGE_POLICY_MIN_MERGE_DOCS, oldMinMergeDocs);\n if (minMergeDocs != oldMinMergeDocs) {\n logger.info(\"updating min_merge_docs from [{}] to [{}]\", oldMinMergeDocs, minMergeDocs);\n mergePolicy.setMinMergeDocs(minMergeDocs);\n }\n \n int oldMaxMergeDocs = mergePolicy.getMaxMergeDocs();\n- int maxMergeDocs = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_DOCS, LogDocMergePolicy.DEFAULT_MAX_MERGE_DOCS);\n+ int maxMergeDocs = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_DOCS, oldMaxMergeDocs);\n if (maxMergeDocs != oldMaxMergeDocs) {\n logger.info(\"updating max_merge_docs from [{}] to [{}]\", oldMaxMergeDocs, maxMergeDocs);\n mergePolicy.setMaxMergeDocs(maxMergeDocs);\n }\n \n int oldMergeFactor = mergePolicy.getMergeFactor();\n- int mergeFactor = settings.getAsInt(INDEX_MERGE_POLICY_MERGE_FACTOR, LogDocMergePolicy.DEFAULT_MERGE_FACTOR);\n+ int mergeFactor = settings.getAsInt(INDEX_MERGE_POLICY_MERGE_FACTOR, oldMergeFactor);\n if (mergeFactor != oldMergeFactor) {\n logger.info(\"updating merge_factor from [{}] to [{}]\", oldMergeFactor, mergeFactor);\n mergePolicy.setMergeFactor(mergeFactor);\n }\n \n boolean oldCalibrateSizeByDeletes = mergePolicy.getCalibrateSizeByDeletes();\n- boolean calibrateSizeByDeletes = settings.getAsBoolean(INDEX_MERGE_POLICY_CALIBRATE_SIZE_BY_DELETES, true);\n+ boolean calibrateSizeByDeletes = settings.getAsBoolean(INDEX_MERGE_POLICY_CALIBRATE_SIZE_BY_DELETES, oldCalibrateSizeByDeletes);\n if (calibrateSizeByDeletes != oldCalibrateSizeByDeletes) {\n logger.info(\"updating calibrate_size_by_deletes from [{}] to [{}]\", oldCalibrateSizeByDeletes, calibrateSizeByDeletes);\n mergePolicy.setCalibrateSizeByDeletes(calibrateSizeByDeletes);", "filename": "src/main/java/org/elasticsearch/index/merge/policy/LogDocMergePolicyProvider.java", "status": "modified" }, { "diff": "@@ -106,51 +106,51 @@ public void close() throws ElasticsearchException {\n class ApplySettings implements IndexSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n- double oldExpungeDeletesPctAllowed = mergePolicy.getForceMergeDeletesPctAllowed();\n- double expungeDeletesPctAllowed = settings.getAsDouble(INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED, DEFAULT_EXPUNGE_DELETES_ALLOWED);\n+ final double oldExpungeDeletesPctAllowed = mergePolicy.getForceMergeDeletesPctAllowed();\n+ final double expungeDeletesPctAllowed = settings.getAsDouble(INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED, oldExpungeDeletesPctAllowed);\n if (expungeDeletesPctAllowed != 
oldExpungeDeletesPctAllowed) {\n logger.info(\"updating [expunge_deletes_allowed] from [{}] to [{}]\", oldExpungeDeletesPctAllowed, expungeDeletesPctAllowed);\n mergePolicy.setForceMergeDeletesPctAllowed(expungeDeletesPctAllowed);\n }\n \n- double oldFloorSegmentMB = mergePolicy.getFloorSegmentMB();\n- ByteSizeValue floorSegment = settings.getAsBytesSize(INDEX_MERGE_POLICY_FLOOR_SEGMENT, DEFAULT_FLOOR_SEGMENT);\n- if (floorSegment.mbFrac() != oldFloorSegmentMB) {\n+ final double oldFloorSegmentMB = mergePolicy.getFloorSegmentMB();\n+ final ByteSizeValue floorSegment = settings.getAsBytesSize(INDEX_MERGE_POLICY_FLOOR_SEGMENT, null);\n+ if (floorSegment != null && floorSegment.mbFrac() != oldFloorSegmentMB) {\n logger.info(\"updating [floor_segment] from [{}mb] to [{}]\", oldFloorSegmentMB, floorSegment);\n mergePolicy.setFloorSegmentMB(floorSegment.mbFrac());\n }\n \n- double oldSegmentsPerTier = mergePolicy.getSegmentsPerTier();\n- double segmentsPerTier = settings.getAsDouble(INDEX_MERGE_POLICY_SEGMENTS_PER_TIER, DEFAULT_SEGMENTS_PER_TIER);\n+ final double oldSegmentsPerTier = mergePolicy.getSegmentsPerTier();\n+ final double segmentsPerTier = settings.getAsDouble(INDEX_MERGE_POLICY_SEGMENTS_PER_TIER, oldSegmentsPerTier);\n if (segmentsPerTier != oldSegmentsPerTier) {\n logger.info(\"updating [segments_per_tier] from [{}] to [{}]\", oldSegmentsPerTier, segmentsPerTier);\n mergePolicy.setSegmentsPerTier(segmentsPerTier);\n }\n \n- int oldMaxMergeAtOnce = mergePolicy.getMaxMergeAtOnce();\n- int maxMergeAtOnce = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE, DEFAULT_MAX_MERGE_AT_ONCE);\n+ final int oldMaxMergeAtOnce = mergePolicy.getMaxMergeAtOnce();\n+ int maxMergeAtOnce = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE, oldMaxMergeAtOnce);\n if (maxMergeAtOnce != oldMaxMergeAtOnce) {\n logger.info(\"updating [max_merge_at_once] from [{}] to [{}]\", oldMaxMergeAtOnce, maxMergeAtOnce);\n maxMergeAtOnce = adjustMaxMergeAtOnceIfNeeded(maxMergeAtOnce, segmentsPerTier);\n mergePolicy.setMaxMergeAtOnce(maxMergeAtOnce);\n }\n \n- int oldMaxMergeAtOnceExplicit = mergePolicy.getMaxMergeAtOnceExplicit();\n- int maxMergeAtOnceExplicit = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_EXPLICIT, DEFAULT_MAX_MERGE_AT_ONCE_EXPLICIT);\n+ final int oldMaxMergeAtOnceExplicit = mergePolicy.getMaxMergeAtOnceExplicit();\n+ final int maxMergeAtOnceExplicit = settings.getAsInt(INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_EXPLICIT, oldMaxMergeAtOnceExplicit);\n if (maxMergeAtOnceExplicit != oldMaxMergeAtOnceExplicit) {\n logger.info(\"updating [max_merge_at_once_explicit] from [{}] to [{}]\", oldMaxMergeAtOnceExplicit, maxMergeAtOnceExplicit);\n mergePolicy.setMaxMergeAtOnceExplicit(maxMergeAtOnceExplicit);\n }\n \n- double oldMaxMergedSegmentMB = mergePolicy.getMaxMergedSegmentMB();\n- ByteSizeValue maxMergedSegment = settings.getAsBytesSize(INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT, DEFAULT_MAX_MERGED_SEGMENT);\n- if (maxMergedSegment.mbFrac() != oldMaxMergedSegmentMB) {\n+ final double oldMaxMergedSegmentMB = mergePolicy.getMaxMergedSegmentMB();\n+ final ByteSizeValue maxMergedSegment = settings.getAsBytesSize(INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT, null);\n+ if (maxMergedSegment != null && maxMergedSegment.mbFrac() != oldMaxMergedSegmentMB) {\n logger.info(\"updating [max_merged_segment] from [{}mb] to [{}]\", oldMaxMergedSegmentMB, maxMergedSegment);\n mergePolicy.setMaxMergedSegmentMB(maxMergedSegment.mbFrac());\n }\n \n- double oldReclaimDeletesWeight = mergePolicy.getReclaimDeletesWeight();\n- 
double reclaimDeletesWeight = settings.getAsDouble(INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT, DEFAULT_RECLAIM_DELETES_WEIGHT);\n+ final double oldReclaimDeletesWeight = mergePolicy.getReclaimDeletesWeight();\n+ final double reclaimDeletesWeight = settings.getAsDouble(INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT, oldReclaimDeletesWeight);\n if (reclaimDeletesWeight != oldReclaimDeletesWeight) {\n logger.info(\"updating [reclaim_deletes_weight] from [{}] to [{}]\", oldReclaimDeletesWeight, reclaimDeletesWeight);\n mergePolicy.setReclaimDeletesWeight(reclaimDeletesWeight);", "filename": "src/main/java/org/elasticsearch/index/merge/policy/TieredMergePolicyProvider.java", "status": "modified" }, { "diff": "@@ -18,11 +18,16 @@\n */\n package org.elasticsearch.index.merge.policy;\n \n+import org.apache.lucene.index.LogByteSizeMergePolicy;\n+import org.apache.lucene.index.LogDocMergePolicy;\n+import org.apache.lucene.index.TieredMergePolicy;\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.RAMDirectory;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.index.Index;\n import org.elasticsearch.index.settings.IndexSettingsService;\n import org.elasticsearch.index.shard.ShardId;\n@@ -156,6 +161,141 @@ public void testUpdateSettings() throws IOException {\n }\n }\n \n+ public void testLogDocSizeMergePolicySettingsUpdate() throws IOException {\n+ IndexSettingsService service = new IndexSettingsService(new Index(\"test\"), EMPTY_SETTINGS);\n+ LogDocMergePolicyProvider mp = new LogDocMergePolicyProvider(createStore(EMPTY_SETTINGS), service);\n+\n+ assertEquals(mp.getMergePolicy().getMaxMergeDocs(), LogDocMergePolicy.DEFAULT_MAX_MERGE_DOCS);\n+ service.refreshSettings(ImmutableSettings.builder().put(LogDocMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGE_DOCS, LogDocMergePolicy.DEFAULT_MAX_MERGE_DOCS / 2).build());\n+ assertEquals(mp.getMergePolicy().getMaxMergeDocs(), LogDocMergePolicy.DEFAULT_MAX_MERGE_DOCS / 2);\n+\n+ assertEquals(mp.getMergePolicy().getMinMergeDocs(), LogDocMergePolicy.DEFAULT_MIN_MERGE_DOCS);\n+ service.refreshSettings(ImmutableSettings.builder().put(LogDocMergePolicyProvider.INDEX_MERGE_POLICY_MIN_MERGE_DOCS, LogDocMergePolicy.DEFAULT_MIN_MERGE_DOCS / 2).build());\n+ assertEquals(mp.getMergePolicy().getMinMergeDocs(), LogDocMergePolicy.DEFAULT_MIN_MERGE_DOCS / 2);\n+\n+ assertTrue(mp.getMergePolicy().getCalibrateSizeByDeletes());\n+ service.refreshSettings(ImmutableSettings.builder().put(LogDocMergePolicyProvider.INDEX_MERGE_POLICY_CALIBRATE_SIZE_BY_DELETES, false).build());\n+ assertFalse(mp.getMergePolicy().getCalibrateSizeByDeletes());\n+\n+ assertEquals(mp.getMergePolicy().getMergeFactor(), LogDocMergePolicy.DEFAULT_MERGE_FACTOR);\n+ service.refreshSettings(ImmutableSettings.builder().put(LogDocMergePolicyProvider.INDEX_MERGE_POLICY_MERGE_FACTOR, LogDocMergePolicy.DEFAULT_MERGE_FACTOR * 2).build());\n+ assertEquals(mp.getMergePolicy().getMergeFactor(), LogDocMergePolicy.DEFAULT_MERGE_FACTOR * 2);\n+\n+ service.refreshSettings(EMPTY_SETTINGS); // update without the settings and see if we stick to the values\n+ assertEquals(mp.getMergePolicy().getMaxMergeDocs(), LogDocMergePolicy.DEFAULT_MAX_MERGE_DOCS / 2);\n+ assertEquals(mp.getMergePolicy().getMinMergeDocs(), 
LogDocMergePolicy.DEFAULT_MIN_MERGE_DOCS / 2);\n+ assertFalse(mp.getMergePolicy().getCalibrateSizeByDeletes());\n+ assertEquals(mp.getMergePolicy().getMergeFactor(), LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR * 2);\n+\n+\n+ service = new IndexSettingsService(new Index(\"test\"), EMPTY_SETTINGS);\n+ mp = new LogDocMergePolicyProvider(createStore(ImmutableSettings.builder()\n+ .put(LogDocMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGE_DOCS, LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS / 2)\n+ .put(LogDocMergePolicyProvider.INDEX_MERGE_POLICY_MERGE_FACTOR, LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR / 2)\n+ .put(LogDocMergePolicyProvider.INDEX_MERGE_POLICY_CALIBRATE_SIZE_BY_DELETES, false)\n+ .put(LogDocMergePolicyProvider.INDEX_MERGE_POLICY_MIN_MERGE_DOCS, LogDocMergePolicy.DEFAULT_MIN_MERGE_DOCS - 1)\n+ .build()), service);\n+\n+\n+ assertEquals(mp.getMergePolicy().getMinMergeDocs(), LogDocMergePolicy.DEFAULT_MIN_MERGE_DOCS - 1);\n+ assertFalse(mp.getMergePolicy().getCalibrateSizeByDeletes());\n+ assertEquals(mp.getMergePolicy().getMergeFactor(), LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR / 2);\n+ assertEquals(mp.getMergePolicy().getMaxMergeDocs(), LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS / 2);\n+ }\n+\n+ public void testLogByteSizeMergePolicySettingsUpdate() throws IOException {\n+ IndexSettingsService service = new IndexSettingsService(new Index(\"test\"), EMPTY_SETTINGS);\n+ LogByteSizeMergePolicyProvider mp = new LogByteSizeMergePolicyProvider(createStore(EMPTY_SETTINGS), service);\n+\n+ assertEquals(mp.getMergePolicy().getMaxMergeMB(), LogByteSizeMergePolicyProvider.DEFAULT_MAX_MERGE_SIZE.mbFrac(), 0.0d);\n+ service.refreshSettings(ImmutableSettings.builder().put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGE_SIZE, new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MAX_MERGE_SIZE.mb() / 2, ByteSizeUnit.MB)).build());\n+ assertEquals(mp.getMergePolicy().getMaxMergeMB(), new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MAX_MERGE_SIZE.mb() / 2, ByteSizeUnit.MB).mbFrac(), 0.0d);\n+\n+ assertEquals(mp.getMergePolicy().getMinMergeMB(), LogByteSizeMergePolicyProvider.DEFAULT_MIN_MERGE_SIZE.mbFrac(), 0.0d);\n+ service.refreshSettings(ImmutableSettings.builder().put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_MIN_MERGE_SIZE, new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MIN_MERGE_SIZE.mb() + 1, ByteSizeUnit.MB)).build());\n+ assertEquals(mp.getMergePolicy().getMinMergeMB(), new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MIN_MERGE_SIZE.mb() + 1, ByteSizeUnit.MB).mbFrac(), 0.0d);\n+\n+ assertTrue(mp.getMergePolicy().getCalibrateSizeByDeletes());\n+ service.refreshSettings(ImmutableSettings.builder().put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_CALIBRATE_SIZE_BY_DELETES, false).build());\n+ assertFalse(mp.getMergePolicy().getCalibrateSizeByDeletes());\n+\n+ assertEquals(mp.getMergePolicy().getMergeFactor(), LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR);\n+ service.refreshSettings(ImmutableSettings.builder().put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_MERGE_FACTOR, LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR / 2).build());\n+ assertEquals(mp.getMergePolicy().getMergeFactor(), LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR / 2);\n+\n+ assertEquals(mp.getMergePolicy().getMaxMergeDocs(), LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS);\n+ service.refreshSettings(ImmutableSettings.builder().put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGE_DOCS, LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS / 2).build());\n+ 
assertEquals(mp.getMergePolicy().getMaxMergeDocs(), LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS / 2);\n+\n+ service.refreshSettings(EMPTY_SETTINGS); // update without the settings and see if we stick to the values\n+ assertEquals(mp.getMergePolicy().getMaxMergeMB(), new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MAX_MERGE_SIZE.mb() / 2, ByteSizeUnit.MB).mbFrac(), 0.0d);\n+ assertEquals(mp.getMergePolicy().getMinMergeMB(), new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MIN_MERGE_SIZE.mb() + 1, ByteSizeUnit.MB).mbFrac(), 0.0d);\n+ assertFalse(mp.getMergePolicy().getCalibrateSizeByDeletes());\n+ assertEquals(mp.getMergePolicy().getMergeFactor(), LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR / 2);\n+ assertEquals(mp.getMergePolicy().getMaxMergeDocs(), LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS / 2);\n+\n+\n+ service = new IndexSettingsService(new Index(\"test\"), EMPTY_SETTINGS);\n+ mp = new LogByteSizeMergePolicyProvider(createStore(ImmutableSettings.builder()\n+ .put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGE_DOCS, LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS * 2)\n+ .put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_MERGE_FACTOR, LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR * 2)\n+ .put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGE_SIZE, new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MAX_MERGE_SIZE.mb() / 2, ByteSizeUnit.MB))\n+ .put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_CALIBRATE_SIZE_BY_DELETES, false)\n+ .put(LogByteSizeMergePolicyProvider.INDEX_MERGE_POLICY_MIN_MERGE_SIZE, new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MIN_MERGE_SIZE.mb() + 1, ByteSizeUnit.MB))\n+ .build()), service);\n+\n+\n+ assertEquals(mp.getMergePolicy().getMaxMergeMB(), new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MAX_MERGE_SIZE.mb() / 2, ByteSizeUnit.MB).mbFrac(), 0.0d);\n+ assertEquals(mp.getMergePolicy().getMinMergeMB(), new ByteSizeValue(LogByteSizeMergePolicyProvider.DEFAULT_MIN_MERGE_SIZE.mb() + 1, ByteSizeUnit.MB).mbFrac(), 0.0d);\n+ assertFalse(mp.getMergePolicy().getCalibrateSizeByDeletes());\n+ assertEquals(mp.getMergePolicy().getMergeFactor(), LogByteSizeMergePolicy.DEFAULT_MERGE_FACTOR * 2);\n+ assertEquals(mp.getMergePolicy().getMaxMergeDocs(), LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_DOCS * 2);\n+ }\n+\n+ public void testTieredMergePolicySettingsUpdate() throws IOException {\n+ IndexSettingsService service = new IndexSettingsService(new Index(\"test\"), EMPTY_SETTINGS);\n+ TieredMergePolicyProvider mp = new TieredMergePolicyProvider(createStore(EMPTY_SETTINGS), service);\n+ assertThat(mp.getMergePolicy().getNoCFSRatio(), equalTo(0.1));\n+\n+ assertEquals(mp.getMergePolicy().getForceMergeDeletesPctAllowed(), TieredMergePolicyProvider.DEFAULT_EXPUNGE_DELETES_ALLOWED, 0.0d);\n+ service.refreshSettings(ImmutableSettings.builder().put(TieredMergePolicyProvider.INDEX_MERGE_POLICY_EXPUNGE_DELETES_ALLOWED, TieredMergePolicyProvider.DEFAULT_EXPUNGE_DELETES_ALLOWED + 1.0d).build());\n+ assertEquals(mp.getMergePolicy().getForceMergeDeletesPctAllowed(), TieredMergePolicyProvider.DEFAULT_EXPUNGE_DELETES_ALLOWED + 1.0d, 0.0d);\n+\n+ assertEquals(mp.getMergePolicy().getFloorSegmentMB(), TieredMergePolicyProvider.DEFAULT_FLOOR_SEGMENT.mbFrac(), 0);\n+ service.refreshSettings(ImmutableSettings.builder().put(TieredMergePolicyProvider.INDEX_MERGE_POLICY_FLOOR_SEGMENT, new ByteSizeValue(TieredMergePolicyProvider.DEFAULT_FLOOR_SEGMENT.mb() + 1, ByteSizeUnit.MB)).build());\n+ 
assertEquals(mp.getMergePolicy().getFloorSegmentMB(), new ByteSizeValue(TieredMergePolicyProvider.DEFAULT_FLOOR_SEGMENT.mb() + 1, ByteSizeUnit.MB).mbFrac(), 0.001);\n+\n+ assertEquals(mp.getMergePolicy().getMaxMergeAtOnce(), TieredMergePolicyProvider.DEFAULT_MAX_MERGE_AT_ONCE);\n+ service.refreshSettings(ImmutableSettings.builder().put(TieredMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE, TieredMergePolicyProvider.DEFAULT_MAX_MERGE_AT_ONCE -1 ).build());\n+ assertEquals(mp.getMergePolicy().getMaxMergeAtOnce(), TieredMergePolicyProvider.DEFAULT_MAX_MERGE_AT_ONCE-1);\n+\n+ assertEquals(mp.getMergePolicy().getMaxMergeAtOnceExplicit(), TieredMergePolicyProvider.DEFAULT_MAX_MERGE_AT_ONCE_EXPLICIT);\n+ service.refreshSettings(ImmutableSettings.builder().put(TieredMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGE_AT_ONCE_EXPLICIT, TieredMergePolicyProvider.DEFAULT_MAX_MERGE_AT_ONCE_EXPLICIT -1 ).build());\n+ assertEquals(mp.getMergePolicy().getMaxMergeAtOnceExplicit(), TieredMergePolicyProvider.DEFAULT_MAX_MERGE_AT_ONCE_EXPLICIT-1);\n+\n+ assertEquals(mp.getMergePolicy().getMaxMergedSegmentMB(), TieredMergePolicyProvider.DEFAULT_MAX_MERGED_SEGMENT.mbFrac(), 0.0001);\n+ service.refreshSettings(ImmutableSettings.builder().put(TieredMergePolicyProvider.INDEX_MERGE_POLICY_MAX_MERGED_SEGMENT, new ByteSizeValue(TieredMergePolicyProvider.DEFAULT_MAX_MERGED_SEGMENT.bytes() + 1)).build());\n+ assertEquals(mp.getMergePolicy().getMaxMergedSegmentMB(), new ByteSizeValue(TieredMergePolicyProvider.DEFAULT_MAX_MERGED_SEGMENT.bytes() + 1).mbFrac(), 0.0001);\n+\n+ assertEquals(mp.getMergePolicy().getReclaimDeletesWeight(), TieredMergePolicyProvider.DEFAULT_RECLAIM_DELETES_WEIGHT, 0);\n+ service.refreshSettings(ImmutableSettings.builder().put(TieredMergePolicyProvider.INDEX_MERGE_POLICY_RECLAIM_DELETES_WEIGHT, TieredMergePolicyProvider.DEFAULT_RECLAIM_DELETES_WEIGHT + 1 ).build());\n+ assertEquals(mp.getMergePolicy().getReclaimDeletesWeight(), TieredMergePolicyProvider.DEFAULT_RECLAIM_DELETES_WEIGHT + 1, 0);\n+\n+ assertEquals(mp.getMergePolicy().getSegmentsPerTier(), TieredMergePolicyProvider.DEFAULT_SEGMENTS_PER_TIER, 0);\n+ service.refreshSettings(ImmutableSettings.builder().put(TieredMergePolicyProvider.INDEX_MERGE_POLICY_SEGMENTS_PER_TIER, TieredMergePolicyProvider.DEFAULT_SEGMENTS_PER_TIER + 1 ).build());\n+ assertEquals(mp.getMergePolicy().getSegmentsPerTier(), TieredMergePolicyProvider.DEFAULT_SEGMENTS_PER_TIER + 1, 0);\n+\n+ service.refreshSettings(EMPTY_SETTINGS); // update without the settings and see if we stick to the values\n+\n+ assertEquals(mp.getMergePolicy().getForceMergeDeletesPctAllowed(), TieredMergePolicyProvider.DEFAULT_EXPUNGE_DELETES_ALLOWED + 1.0d, 0.0d);\n+ assertEquals(mp.getMergePolicy().getFloorSegmentMB(), new ByteSizeValue(TieredMergePolicyProvider.DEFAULT_FLOOR_SEGMENT.mb() + 1, ByteSizeUnit.MB).mbFrac(), 0.001);\n+ assertEquals(mp.getMergePolicy().getMaxMergeAtOnce(), TieredMergePolicyProvider.DEFAULT_MAX_MERGE_AT_ONCE-1);\n+ assertEquals(mp.getMergePolicy().getMaxMergeAtOnceExplicit(), TieredMergePolicyProvider.DEFAULT_MAX_MERGE_AT_ONCE_EXPLICIT-1);\n+ assertEquals(mp.getMergePolicy().getMaxMergedSegmentMB(), new ByteSizeValue(TieredMergePolicyProvider.DEFAULT_MAX_MERGED_SEGMENT.bytes() + 1).mbFrac(), 0.0001);\n+ assertEquals(mp.getMergePolicy().getReclaimDeletesWeight(), TieredMergePolicyProvider.DEFAULT_RECLAIM_DELETES_WEIGHT + 1, 0);\n+ assertEquals(mp.getMergePolicy().getSegmentsPerTier(), TieredMergePolicyProvider.DEFAULT_SEGMENTS_PER_TIER + 1, 0);\n+ }\n+\n public 
Settings build(String value) {\n return ImmutableSettings.builder().put(AbstractMergePolicyProvider.INDEX_COMPOUND_FORMAT, value).build();\n }", "filename": "src/test/java/org/elasticsearch/index/merge/policy/MergePolicySettingsTest.java", "status": "modified" } ] }
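For context on the test above: the merge-policy values it refreshes correspond to dynamic index settings, so the same changes can be made on a live index through the settings API. A minimal sketch, assuming the 1.x REST names behind the `TieredMergePolicyProvider` constants (e.g. `index.merge.policy.segments_per_tier`) and a hypothetical index `my_index`:

```
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index.merge.policy.segments_per_tier": 11,
  "index.merge.policy.max_merged_segment": "5gb"
}'
```

As the test asserts, values applied this way stick until they are explicitly overridden; a later settings update that omits them does not reset them to the defaults.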
{ "body": "I decided to reindex my data to take advantage of `doc_values`, but one of 30 indices (~120m docs in each) got less documents after reindexing. I reindexed again and docs disappeared again.\n\nThen I bisected the problem to specific docs and found that some docs in source index has duplicate ids.\n\n```\ncurl -s \"http://web245:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w\"\n```\n\n``` json\n{\n \"took\" : 1156,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"}\n }, {\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"}\n } ]\n }\n}\n```\n\nHere are two indices, source and destination:\n\n```\nhealth status index pri rep docs.count docs.deleted store.size pri.store.size\ngreen open statistics-20141110 5 0 116217042 0 12.3gb 12.3gb\ngreen open statistics-20141110-dv 5 1 116216507 0 32.3gb 16.1gb\n```\n\nSegments of problematic index:\n\n```\nindex shard prirep ip segment generation docs.count docs.deleted size size.memory committed searchable version compound\nstatistics-20141110 0 p 192.168.0.190 _gga 21322 14939669 0 1.6gb 4943008 true true 4.9.0 false\nstatistics-20141110 0 p 192.168.0.190 _isc 24348 10913518 0 1.1gb 4101712 true true 4.9.0 false\nstatistics-20141110 1 p 192.168.0.245 _7i7 9727 7023269 0 766mb 2264472 true true 4.9.0 false\nstatistics-20141110 1 p 192.168.0.245 _i01 23329 14689581 0 1.5gb 4788872 true true 4.9.0 false\nstatistics-20141110 2 p 192.168.1.212 _9wx 12849 8995444 0 987.7mb 3326288 true true 4.9.0 false\nstatistics-20141110 2 p 192.168.1.212 _il1 24085 13205585 0 1.4gb 4343736 true true 4.9.0 false\nstatistics-20141110 3 p 192.168.1.212 _8pc 11280 10046395 0 1gb 4003824 true true 4.9.0 false\nstatistics-20141110 3 p 192.168.1.212 _hwt 23213 13226096 0 1.3gb 4287544 true true 4.9.0 false\nstatistics-20141110 4 p 192.168.2.88 _91i 11718 8328558 0 909.2mb 2822712 true true 4.9.0 false\nstatistics-20141110 4 p 192.168.2.88 _hms 22852 14848927 0 1.5gb 4777472 true true 4.9.0 false\n```\n\nThe only thing that happened with index besides indexing is optimizing to 2 segments per shard.\n", "comments": [ { "body": "Hi @bobrik \n\nIs there any chance this index was written with Elasticsearch 1.2.0?\n\nPlease could you provide the output of this request:\n\n```\ncurl -s \"http://web245:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing\"\n```\n", "created_at": "2014-12-09T12:14:51Z" }, { "body": "Routing is automatically inferred from `@key`\n\n``` json\n{\n \"took\" : 1744,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n 
\"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n }, {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n } ]\n }\n}\n```\n\nIndex was created on 1.3.4, we upgraded from 1.0.1 to 1.3.2 on 2014-09-22\n", "created_at": "2014-12-09T12:33:09Z" }, { "body": "Hi @bobrik \n\nHmm, these two docs are on the same shard! Do you ever run updates on these docs? Could you send the output of this command please?\n\n```\ncurl -s \"http://web245:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing,_version\"\n```\n", "created_at": "2014-12-09T13:44:59Z" }, { "body": "Of course they are, that's how routing works :)\n\nI didn't run any updates, because my code only does indexing. 
It doesn't even know ids that are assigned by elasticsearch.\n\n``` json\n{\n \"took\" : 51,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n }, {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n } ]\n }\n}\n```\n", "created_at": "2014-12-09T13:48:41Z" }, { "body": "Sorry @bobrik - I gave you the wrong request, it should be:\n\n```\ncurl -s \"http://web245:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing,version\"\n```\n\nAnd so you're using auto-assigned IDs? 
Did any of your shards migrate to other nodes, or did a primary fail during optimization?\n", "created_at": "2014-12-09T16:56:22Z" }, { "body": "I think this is caused by https://github.com/elasticsearch/elasticsearch/pull/7729 @bobrik are you coming from < 1.3.3 with this index and are you using bulk?\n", "created_at": "2014-12-09T17:05:25Z" }, { "body": "```\ncurl -s 'http://web605:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing,version'\n```\n\n``` json\n{\n \"took\" : 46,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n }, {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n } ]\n }\n}\n```\n\nI bet you wanted this:\n\n```\ncurl -s 'http://web605:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing' -d '{\"version\":true}'\n```\n\n``` json\n{\n \"took\" : 1,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_version\" : 1,\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n }, {\n \"_shard\" 
: 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_version\" : 1,\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n } ]\n }\n}\n```\n\nThere were many migrations, but not during optimization, unless es moves shards after new index is created. Basically at 00:00 new index is created and at 00:45 optimization for old indices starts.\n", "created_at": "2014-12-09T17:06:19Z" }, { "body": "do you have client nodes that are pre 1.3.3?\n", "created_at": "2014-12-09T17:09:22Z" }, { "body": "@s1monw index is created on 1.3.4:\n\n```\n[2014-09-30 12:03:49,991][INFO ][node ] [statistics04] version[1.3.3], pid[17937], build[ddf796d/2014-09-29T13:39:00Z]\n```\n\n```\n[2014-09-30 14:03:19,205][INFO ][node ] [statistics04] version[1.3.4], pid[89485], build[a70f3cc/2014-09-30T09:07:17Z]\n```\n\nNov 11 is definitely after Sep 30. Shouldn't be #7729 then.\n\nWe don't have client nodes, everything is over http. But yeah, we use bulk indexing and automatically assigned ids.\n", "created_at": "2014-12-09T17:12:53Z" }, { "body": "Hi @bobrik \n\n(you guessed right about `version=true` :) )\n\nOK - we're going to need more info. 
Please could you send:\n\n```\ncurl -s 'http://web605:9200/statistics-20141110/_settings?pretty'\ncurl -s 'http://web605:9200/statistics-20141110/_segments?pretty'\n```\n", "created_at": "2014-12-09T17:25:10Z" }, { "body": "``` json\n{\n \"statistics-20141110\" : {\n \"settings\" : {\n \"index\" : {\n \"codec\" : {\n \"bloom\" : {\n \"load\" : \"false\"\n }\n },\n \"uuid\" : \"JZXC-8C3TFC71EnMGMHSWw\",\n \"number_of_replicas\" : \"0\",\n \"number_of_shards\" : \"5\",\n \"version\" : {\n \"created\" : \"1030499\"\n }\n }\n }\n }\n}\n```\n\n``` json\n{\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"indices\" : {\n \"statistics-20141110\" : {\n \"shards\" : {\n \"0\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"hBg3FpLGQw6B9l-Hil2c8Q\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_gga\" : {\n \"generation\" : 21322,\n \"num_docs\" : 14939669,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1729206228,\n \"memory_in_bytes\" : 4943008,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_isc\" : {\n \"generation\" : 24348,\n \"num_docs\" : 10913518,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1254410507,\n \"memory_in_bytes\" : 4101712,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ],\n \"1\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"ajMe-w2lSIO0Tz5WEUs4qQ\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_7i7\" : {\n \"generation\" : 9727,\n \"num_docs\" : 7023269,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 803299557,\n \"memory_in_bytes\" : 2264472,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_i01\" : {\n \"generation\" : 23329,\n \"num_docs\" : 14689581,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1659303375,\n \"memory_in_bytes\" : 4788872,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ],\n \"2\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"hyUu93q7SRehHBVZfSmvOg\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_9wx\" : {\n \"generation\" : 12849,\n \"num_docs\" : 8995444,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1035711205,\n \"memory_in_bytes\" : 3326288,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_il1\" : {\n \"generation\" : 24085,\n \"num_docs\" : 13205585,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1510021893,\n \"memory_in_bytes\" : 4343736,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ],\n \"3\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"hyUu93q7SRehHBVZfSmvOg\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_8pc\" : {\n \"generation\" : 11280,\n \"num_docs\" : 10046395,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1143637974,\n \"memory_in_bytes\" : 4003824,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_hwt\" : {\n \"generation\" : 23213,\n \"num_docs\" : 13226096,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1485110397,\n \"memory_in_bytes\" : 4287544,\n \"committed\" 
: true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ],\n \"4\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"hyUu93q7SRehHBVZfSmvOg\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_91i\" : {\n \"generation\" : 11718,\n \"num_docs\" : 8328558,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 953452801,\n \"memory_in_bytes\" : 2822712,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_hms\" : {\n \"generation\" : 22852,\n \"num_docs\" : 14848927,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1673336536,\n \"memory_in_bytes\" : 4777472,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ]\n }\n }\n }\n}\n```\n", "created_at": "2014-12-09T18:18:44Z" }, { "body": "Reopen because the test added with #9125 just failed and the failure is reproducible (about 1/10 runs with same seed and added stress), see http://build-us-00.elasticsearch.org/job/es_core_master_window-2012/725/ \n", "created_at": "2015-01-02T18:23:22Z" }, { "body": "We've just seen this issue for the second time. The first time produced only a single duplicate; this time produced over 16000, across a comparatively tiny index (< 300k docs). We're using 1.3.4, doing bulk indexing with the Java client API's `BulkProcessor` and `TransportClient`. \n\nHowever, we're **not** using autogenerated ids, so from my reading of the fix for this issue it's unlikely to help us. Should I open a separate issue, or should this one be reopened?\n\nMiscellanous other info:\n- The index has not been migrated from an earlier version.\n- Around the time the duplicates appeared, we saw problems in other (non-Elastic) parts of the system. I can't see any way that they could directly cause the duplication, but it's possible that network issues were the common cause of both.\n- We still have the index containing duplicates for now, though it may not last long; this is on an alpha cluster that gets reset fairly often.\n- I'm very much a newbie to Elastic, so may be missing something obvious.\n", "created_at": "2015-04-08T14:59:55Z" }, { "body": "@mrec It would be great if you could open a new issue. Please also add a query that finds duplicates together with `?explain=true` option set and the output of that query like above. \nSomething like: \n\n```\ncurl -s 'http://HOST:PORT/YOURINDEX/_search?pretty&q=_id:A_DUPLICATE_ID&explain&fields=_source,_routing' -d '{\"version\":true}'\n```\n\nIs there a way that you can make available the elasticsearch logs from the time where you had the network issues?\nAlso, the output of\ncurl -s 'http://web605:9200/statistics-20141110/_segments?pretty' might be helpful.\n", "created_at": "2015-04-08T16:26:55Z" } ], "number": 8788, "title": "Duplicate id in index" }
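The queries in this thread only confirm a duplicate once its `_id` is already known. To scan a whole index for duplicated ids, one option (a sketch, not taken from the thread) is a `terms` aggregation on `_uid` with `min_doc_count: 2`; this builds fielddata for `_uid`, so it can be memory-hungry on an index of this size, and the bucket list is only approximate:

```
curl -s 'http://web245:9200/statistics-20141110/_search?pretty&search_type=count' -d '{
  "aggs": {
    "duplicate_ids": {
      "terms": {
        "field": "_uid",
        "min_doc_count": 2,
        "size": 100
      }
    }
  }
}'
```

Any bucket returned with a doc_count of 2 or more points at a type#id pair that occurs in more than one document.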
{ "body": "This pr removes the optimization for auto generated ids.\nPreviously, when ids were auto generated by elasticsearch then there was no \ncheck to see if a document with same id already existed and instead the new\ndocument was only appended. However, due to lucene improvements this\noptimization does not add much value. In addition, under rare circumstances it might\ncause duplicate documents:\n\nWhen an indexing request is retried (due to connect lost, node closed etc),\nthen a flag 'canHaveDuplicates' is set to true for the indexing request\nthat is send a second time. This was to make sure that even\nwhen an indexing request for a document with autogenerated id comes in\nwe do not have to update unless this flag is set and instead only append.\n\nHowever, it might happen that for a retry or for the replication the\nindexing request that has the canHaveDuplicates set to true (the retried request) arrives\nat the destination before the original request that does have it set false.\nIn this case both request add a document and we have a duplicated a document.\nThis commit adds a workaround: remove the optimization for auto\ngenerated ids and always update the document.\nThe asumtion is that this will not slow down indexing more than 10 percent,\nsee: http://benchmarks.elasticsearch.org/\n\ncloses #8788\n", "number": 9468, "review_comments": [ { "body": "do we need this change here? I mean removing the optimization should be enough?\n", "created_at": "2015-01-29T08:40:49Z" }, { "body": "I'd want to remove this entirely but this change is supposed to go into 1.4 as well afaik so I guess that is fine.\n", "created_at": "2015-01-29T08:41:23Z" }, { "body": "I think this is wrong? I this is an indexing request on a replica and we're here, that means that there is already a doc with this id. In this case we want to just ignore the request. +1 on what Simon said regarding not changing this code.\n", "created_at": "2015-01-29T09:26:07Z" }, { "body": "This shouldn't have any effect any more, right? do we want to randomize it with a comment it's being removed?\n", "created_at": "2015-01-29T09:31:36Z" }, { "body": "same comment here. Randomize?\n", "created_at": "2015-01-29T09:34:52Z" }, { "body": "to simulate replication we need to update the version type . see https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java#L216\n", "created_at": "2015-01-29T09:36:50Z" }, { "body": "Oh, I see. you have two tests. We can maybe fold it into one and have the canHaveDuplicates set randomly and then set it at the reverse?\n", "created_at": "2015-01-29T09:40:00Z" }, { "body": "Without the change there are two scenarios:\n1. First and second create request arrive in order on replica: on the replica 'doUpdate' is set but the version is not set to 1 -> versions on primary and replica are out of sync. \n2. First and second create request arrive out of order on primary: we get a 'DocumentAlreadyExistsException' because none of the other criteria match.\n\nThe tests I added in InternalEngineTests fail that way.\nIt seems to me we have to change something to make this work but I might be missing something.\n\n@bleskes I agree we do not have to update the doc at all if we find it already exists and create.autoGeneratedId() is true. We can just return. 
I added a commit, tests pass.\n", "created_at": "2015-01-29T09:40:46Z" }, { "body": "I tried to make it as minimally invasive as possible for 1.4.\n", "created_at": "2015-01-29T09:40:55Z" }, { "body": "ok, ignore the comment for now, might just have gotten the versioning wrong...\n", "created_at": "2015-01-29T09:43:12Z" }, { "body": "It looked like the two tests do exactly the same thing, but now that I fixed the test it is not so anymore. The behavior is different depending on the order. Please take a look and let me know if you still want to fold.\n", "created_at": "2015-01-29T10:20:11Z" }, { "body": "yes, that was wrong. I removed the change and fixed the test instead\n", "created_at": "2015-01-29T10:20:43Z" }, { "body": "indeed, I removed the change and fixed the test instead\n", "created_at": "2015-01-29T10:21:07Z" }, { "body": "we can use ElasticsearchAssertions.assertThrows. Slightly cleaner code.\n", "created_at": "2015-01-29T10:56:17Z" }, { "body": "same here. ElasticsearchAssertions.assertThrows will help\n", "created_at": "2015-01-29T10:56:37Z" }, { "body": "Yeah, I see now. We'll clean it up later.\n", "created_at": "2015-01-29T10:59:20Z" }, { "body": "assertThrows currently only accepts ActionFuture. We can change that, but I think that should probably be a different PR.\n", "created_at": "2015-01-29T13:10:48Z" } ], "title": "Disable auto gen id optimization" }
{ "commits": [ { "message": "core: fix duplicate docs with autogenerated ids\n\nWhen an indexing request is retried (due to connect lost, node closed etc),\nthen a flag 'canHaveDuplicates' is set to true for the indexing request\nthat is send a second time. This was to make sure that even\nwhen an indexing request for a document with autogenerated id comes in\nwe do not have to update unless this flag is set and instead only append.\n\nHowever, it might happen that for a retry or for the replication the\nindexing request that has the canHaveDuplicates set to true (the retried request) arrives\nat the destination before the original request that does have it set false.\nIn this case both request add a document and we have a duplicated a document.\nThis commit adds a workaround: remove the optimization for auto\ngenerated ids and always update the document.\nThe asumtion is that this will not slow down indexing more than 10 percent,\nsee: http://benchmarks.elasticsearch.org/\n\ncloses #8788" }, { "message": "just return if autoGeneratedId and doc exists already" }, { "message": "undo changes to InternalEngine and fix tests instead" } ], "files": [ { "diff": "@@ -212,7 +212,7 @@ public IndexShard(ShardId shardId, @IndexSettings Settings indexSettings, IndexS\n /* create engine config */\n \n this.config = new EngineConfig(shardId,\n- indexSettings.getAsBoolean(EngineConfig.INDEX_OPTIMIZE_AUTOGENERATED_ID_SETTING, true),\n+ indexSettings.getAsBoolean(EngineConfig.INDEX_OPTIMIZE_AUTOGENERATED_ID_SETTING, false),\n threadPool,indexingService,indexSettingsService, warmer, store, deletionPolicy, translog, mergePolicyProvider, mergeScheduler,\n analysisService.defaultIndexAnalyzer(), similarityService.similarity(), codecService, failedEngineListener);\n ", "filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -30,7 +30,9 @@\n import org.apache.lucene.document.NumericDocValuesField;\n import org.apache.lucene.document.TextField;\n import org.apache.lucene.index.*;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n import org.apache.lucene.search.TermQuery;\n+import org.apache.lucene.search.TopDocs;\n import org.apache.lucene.store.Directory;\n import org.apache.lucene.store.MockDirectoryWrapper;\n import org.elasticsearch.ExceptionsHelper;\n@@ -52,6 +54,7 @@\n import org.elasticsearch.index.engine.*;\n import org.elasticsearch.index.indexing.ShardIndexingService;\n import org.elasticsearch.index.indexing.slowlog.ShardSlowLogIndexingService;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.ParseContext.Document;\n import org.elasticsearch.index.mapper.ParsedDocument;\n import org.elasticsearch.index.mapper.internal.SourceFieldMapper;\n@@ -231,7 +234,7 @@ protected Engine createEngine(IndexSettingsService indexSettingsService, Store s\n \n public EngineConfig config(IndexSettingsService indexSettingsService, Store store, Translog translog, MergeSchedulerProvider mergeSchedulerProvider) {\n IndexWriterConfig iwc = newIndexWriterConfig();\n- EngineConfig config = new EngineConfig(shardId, true, threadPool, new ShardIndexingService(shardId, EMPTY_SETTINGS, new ShardSlowLogIndexingService(shardId, EMPTY_SETTINGS, indexSettingsService)), indexSettingsService\n+ EngineConfig config = new EngineConfig(shardId, false/*per default optimization for auto generated ids is disabled*/, threadPool, new ShardIndexingService(shardId, EMPTY_SETTINGS, new ShardSlowLogIndexingService(shardId, EMPTY_SETTINGS, 
indexSettingsService)), indexSettingsService\n , null, store, createSnapshotDeletionPolicy(), translog, createMergePolicy(), mergeSchedulerProvider,\n iwc.getAnalyzer(), iwc.getSimilarity() , new CodecService(shardId.index()), new Engine.FailedEngineListener() {\n @Override\n@@ -1559,4 +1562,88 @@ public void testSettings() {\n \n }\n }\n+\n+ @Test\n+ public void testRetryWithAutogeneratedIdWorksAndNoDuplicateDocs() throws IOException {\n+\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ boolean canHaveDuplicates = false;\n+ boolean autoGeneratedId = true;\n+\n+ Engine.Create index = new Engine.Create(null, analyzer, newUid(\"1\"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n+ engine.create(index);\n+ assertThat(index.version(), equalTo(1l));\n+\n+ index = new Engine.Create(null, analyzer, newUid(\"1\"), doc, index.version(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n+ replicaEngine.create(index);\n+ assertThat(index.version(), equalTo(1l));\n+\n+ canHaveDuplicates = true;\n+ index = new Engine.Create(null, analyzer, newUid(\"1\"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n+ engine.create(index);\n+ assertThat(index.version(), equalTo(1l));\n+ engine.refresh(\"test\", true);\n+ Engine.Searcher searcher = engine.acquireSearcher(\"test\");\n+ TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), 10);\n+ assertThat(topDocs.totalHits, equalTo(1));\n+\n+ index = new Engine.Create(null, analyzer, newUid(\"1\"), doc, index.version(), index.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n+ try {\n+ replicaEngine.create(index);\n+ fail();\n+ } catch (VersionConflictEngineException e) {\n+ // we ignore version conflicts on replicas, see TransportShardReplicationOperationAction.ignoreReplicaException\n+ }\n+ replicaEngine.refresh(\"test\", true);\n+ Engine.Searcher replicaSearcher = replicaEngine.acquireSearcher(\"test\");\n+ topDocs = replicaSearcher.searcher().search(new MatchAllDocsQuery(), 10);\n+ assertThat(topDocs.totalHits, equalTo(1));\n+ searcher.close();\n+ replicaSearcher.close();\n+ }\n+\n+ @Test\n+ public void testRetryWithAutogeneratedIdsAndWrongOrderWorksAndNoDuplicateDocs() throws IOException {\n+\n+ ParsedDocument doc = testParsedDocument(\"1\", \"1\", \"test\", null, -1, -1, testDocument(), B_1, false);\n+ boolean canHaveDuplicates = true;\n+ boolean autoGeneratedId = true;\n+\n+ Engine.Create firstIndexRequest = new Engine.Create(null, analyzer, newUid(\"1\"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n+ engine.create(firstIndexRequest);\n+ assertThat(firstIndexRequest.version(), equalTo(1l));\n+\n+ Engine.Create firstIndexRequestReplica = new Engine.Create(null, analyzer, newUid(\"1\"), doc, firstIndexRequest.version(), firstIndexRequest.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n+ replicaEngine.create(firstIndexRequestReplica);\n+ assertThat(firstIndexRequestReplica.version(), equalTo(1l));\n+\n+ canHaveDuplicates = false;\n+ Engine.Create secondIndexRequest = new Engine.Create(null, analyzer, newUid(\"1\"), doc, Versions.MATCH_ANY, VersionType.INTERNAL, PRIMARY, 
System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n+ try {\n+ engine.create(secondIndexRequest);\n+ fail();\n+ } catch (DocumentAlreadyExistsException e) {\n+ // we can ignore the exception. In case this happens because the retry request arrived first then this error will not be sent back anyway.\n+ // in any other case this is an actual error\n+ }\n+ engine.refresh(\"test\", true);\n+ Engine.Searcher searcher = engine.acquireSearcher(\"test\");\n+ TopDocs topDocs = searcher.searcher().search(new MatchAllDocsQuery(), 10);\n+ assertThat(topDocs.totalHits, equalTo(1));\n+\n+ Engine.Create secondIndexRequestReplica = new Engine.Create(null, analyzer, newUid(\"1\"), doc, firstIndexRequest.version(), firstIndexRequest.versionType().versionTypeForReplicationAndRecovery(), REPLICA, System.nanoTime(), canHaveDuplicates, autoGeneratedId);\n+ try {\n+ replicaEngine.create(secondIndexRequestReplica);\n+ fail();\n+ } catch (VersionConflictEngineException e) {\n+ // we ignore version conflicts on replicas, see TransportShardReplicationOperationAction.ignoreReplicaException.\n+ }\n+ replicaEngine.refresh(\"test\", true);\n+ Engine.Searcher replicaSearcher = replicaEngine.acquireSearcher(\"test\");\n+ topDocs = replicaSearcher.searcher().search(new MatchAllDocsQuery(), 10);\n+ assertThat(topDocs.totalHits, equalTo(1));\n+ searcher.close();\n+ replicaSearcher.close();\n+ }\n+\n }", "filename": "src/test/java/org/elasticsearch/index/engine/internal/InternalEngineTests.java", "status": "modified" }, { "diff": "@@ -67,7 +67,6 @@ protected Settings nodeSettings(int nodeOrdinal) {\n * see https://github.com/elasticsearch/elasticsearch/issues/8788\n */\n @Test\n- @LuceneTestCase.AwaitsFix(bugUrl = \"https://github.com/elasticsearch/elasticsearch/issues/8788\")\n public void testRetryDueToExceptionOnNetworkLayer() throws ExecutionException, InterruptedException, IOException {\n final AtomicBoolean exceptionThrown = new AtomicBoolean(false);\n int numDocs = scaledRandomIntBetween(100, 1000);", "filename": "src/test/java/org/elasticsearch/index/store/ExceptionRetryTests.java", "status": "modified" } ] }
{ "body": "jts dependency is currently optional in our maven pom, but `GeoPoint` depends on it at the moment (after https://github.com/elasticsearch/elasticsearch/commit/06667c6aa898895acd624b8a71a6e00ff7ae32b8). We should change this and restore `GeoPoint` to not depend on jts, so that the dependency is effectively optional.\n", "comments": [ { "body": "+1\n", "created_at": "2015-01-28T13:50:39Z" }, { "body": "++1 \n\nFor bookkeeping I'm linking this issue with PR #9339 \n\nHistory: `GeoPolygonFilter` for `geo_point` types do not handle pole and dateline wrapping correctly. I corrected this issue in #8672 for `geo_shape` type but it relies on JTS (which is fine for `geo_shape` since the ShapeModule is optional). Many users rely on `geo_point` type's `GeoPolygonFilter` which suffers from the same ambiguous polygon problem. This was reported in #5968. PR #9339 corrects the issue but introduces this JTS dependency. Will submit a new PR linked to this issue that eliminates the JTS dependency.\n", "created_at": "2015-01-28T15:05:40Z" }, { "body": "@nknize I think this was solved right? Can we close or is there still work in progress to fix the issue?\n", "created_at": "2015-02-26T10:17:46Z" }, { "body": "This issue was solved. Thanks for the reminder.\n", "created_at": "2015-02-26T22:50:17Z" } ], "number": 9462, "title": "Make sure jts dependency is optional for geo_point types" }
{ "body": "This reverts PR #9339 which introduces a dependency on JTS. This reopens issues #5968 and #9304 which will be closed with #9462\n", "number": 9463, "review_comments": [], "title": "Revert \"[GEO] Update GeoPolygonFilter to handle ambiguous polygons\"" }
{ "commits": [ { "message": "Revert \"[GEO] Update GeoPolygonFilter to handle ambiguous polygons\"\n\nThis reverts commit 06667c6aa898895acd624b8a71a6e00ff7ae32b8 which introduces an undesireable dependency on JTS." } ], "files": [ { "diff": "@@ -20,12 +20,13 @@\n package org.elasticsearch.common.geo;\n \n \n-import com.vividsolutions.jts.geom.Coordinate;\n-\n /**\n *\n */\n-public final class GeoPoint extends Coordinate {\n+public final class GeoPoint {\n+\n+ private double lat;\n+ private double lon;\n \n public GeoPoint() {\n }\n@@ -40,36 +41,32 @@ public GeoPoint(String value) {\n this.resetFromString(value);\n }\n \n- public GeoPoint(GeoPoint other) {\n- super(other);\n- }\n-\n public GeoPoint(double lat, double lon) {\n- this.y = lat;\n- this.x = lon;\n+ this.lat = lat;\n+ this.lon = lon;\n }\n \n public GeoPoint reset(double lat, double lon) {\n- this.y = lat;\n- this.x = lon;\n+ this.lat = lat;\n+ this.lon = lon;\n return this;\n }\n \n public GeoPoint resetLat(double lat) {\n- this.y = lat;\n+ this.lat = lat;\n return this;\n }\n \n public GeoPoint resetLon(double lon) {\n- this.x = lon;\n+ this.lon = lon;\n return this;\n }\n \n public GeoPoint resetFromString(String value) {\n int comma = value.indexOf(',');\n if (comma != -1) {\n- this.y = Double.parseDouble(value.substring(0, comma).trim());\n- this.x = Double.parseDouble(value.substring(comma + 1).trim());\n+ lat = Double.parseDouble(value.substring(0, comma).trim());\n+ lon = Double.parseDouble(value.substring(comma + 1).trim());\n } else {\n resetFromGeoHash(value);\n }\n@@ -82,40 +79,38 @@ public GeoPoint resetFromGeoHash(String hash) {\n }\n \n public final double lat() {\n- return this.y;\n+ return this.lat;\n }\n \n public final double getLat() {\n- return this.y;\n+ return this.lat;\n }\n \n public final double lon() {\n- return this.x;\n+ return this.lon;\n }\n \n public final double getLon() {\n- return this.x;\n+ return this.lon;\n }\n \n public final String geohash() {\n- return GeoHashUtils.encode(y, x);\n+ return GeoHashUtils.encode(lat, lon);\n }\n \n public final String getGeohash() {\n- return GeoHashUtils.encode(y, x);\n+ return GeoHashUtils.encode(lat, lon);\n }\n \n @Override\n public boolean equals(Object o) {\n if (this == o) return true;\n- if (o == null) return false;\n- if (o instanceof Coordinate) {\n- Coordinate c = (Coordinate)o;\n- return Double.compare(c.x, this.x) == 0\n- && Double.compare(c.y, this.y) == 0\n- && Double.compare(c.z, this.z) == 0;\n- }\n- if (getClass() != o.getClass()) return false;\n+ if (o == null || getClass() != o.getClass()) return false;\n+\n+ GeoPoint geoPoint = (GeoPoint) o;\n+\n+ if (Double.compare(geoPoint.lat, lat) != 0) return false;\n+ if (Double.compare(geoPoint.lon, lon) != 0) return false;\n \n return true;\n }\n@@ -124,15 +119,15 @@ public boolean equals(Object o) {\n public int hashCode() {\n int result;\n long temp;\n- temp = y != +0.0d ? Double.doubleToLongBits(y) : 0L;\n+ temp = lat != +0.0d ? Double.doubleToLongBits(lat) : 0L;\n result = (int) (temp ^ (temp >>> 32));\n- temp = x != +0.0d ? Double.doubleToLongBits(x) : 0L;\n+ temp = lon != +0.0d ? 
Double.doubleToLongBits(lon) : 0L;\n result = 31 * result + (int) (temp ^ (temp >>> 32));\n return result;\n }\n \n public String toString() {\n- return \"[\" + y + \", \" + x + \"]\";\n+ return \"[\" + lat + \", \" + lon + \"]\";\n }\n \n public static GeoPoint parseFromLatLon(String latLon) {", "filename": "src/main/java/org/elasticsearch/common/geo/GeoPoint.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.common.geo;\n \n-import org.apache.commons.lang3.tuple.Pair;\n import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.util.SloppyMath;\n@@ -38,9 +37,7 @@ public class GeoUtils {\n public static final String LATITUDE = GeoPointFieldMapper.Names.LAT;\n public static final String LONGITUDE = GeoPointFieldMapper.Names.LON;\n public static final String GEOHASH = GeoPointFieldMapper.Names.GEOHASH;\n-\n- public static final double DATELINE = 180.0D;\n-\n+ \n /** Earth ellipsoid major axis defined by WGS 84 in meters */\n public static final double EARTH_SEMI_MAJOR_AXIS = 6378137.0; // meters (WGS 84)\n \n@@ -425,113 +422,6 @@ public static GeoPoint parseGeoPoint(XContentParser parser, GeoPoint point) thro\n }\n }\n \n- public static boolean correctPolyAmbiguity(GeoPoint[] points, boolean handedness) {\n- return correctPolyAmbiguity(points, handedness, computePolyOrientation(points), 0, points.length, false);\n- }\n-\n- public static boolean correctPolyAmbiguity(GeoPoint[] points, boolean handedness, boolean orientation, int component, int length,\n- boolean shellCorrected) {\n- // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness)\n- // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n- // thus if orientation is computed as cw, the logic will translate points across dateline\n- // and convert to a right handed system\n-\n- // compute the bounding box and calculate range\n- Pair<Pair, Pair> range = GeoUtils.computeBBox(points, length);\n- final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n- // translate the points if the following is true\n- // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres\n- // (translation would result in a collapsed poly)\n- // 2. the shell of the candidate hole has been translated (to preserve the coordinate system)\n- boolean incorrectOrientation = component == 0 && handedness != orientation;\n- boolean translated = ((incorrectOrientation && (rng > DATELINE && rng != 360.0)) || (shellCorrected && component != 0));\n- if (translated) {\n- for (GeoPoint c : points) {\n- if (c.x < 0.0) {\n- c.x += 360.0;\n- }\n- }\n- }\n- return translated;\n- }\n-\n- public static boolean computePolyOrientation(GeoPoint[] points) {\n- return computePolyOrientation(points, points.length);\n- }\n-\n- public static boolean computePolyOrientation(GeoPoint[] points, int length) {\n- // calculate the direction of the points:\n- // find the point at the top of the set and check its\n- // neighbors orientation. 
So direction is equivalent\n- // to clockwise/counterclockwise\n- final int top = computePolyOrigin(points, length);\n- final int prev = ((top + length - 1) % length);\n- final int next = ((top + 1) % length);\n- return (points[prev].x > points[next].x);\n- }\n-\n- private static final int computePolyOrigin(GeoPoint[] points, int length) {\n- int top = 0;\n- // we start at 1 here since top points to 0\n- for (int i = 1; i < length; i++) {\n- if (points[i].y < points[top].y) {\n- top = i;\n- } else if (points[i].y == points[top].y) {\n- if (points[i].x < points[top].x) {\n- top = i;\n- }\n- }\n- }\n- return top;\n- }\n-\n- public static final Pair computeBBox(GeoPoint[] points) {\n- return computeBBox(points, 0);\n- }\n-\n- public static final Pair computeBBox(GeoPoint[] points, int length) {\n- double minX = points[0].x;\n- double maxX = points[0].x;\n- double minY = points[0].y;\n- double maxY = points[0].y;\n- // compute the bounding coordinates (@todo: cleanup brute force)\n- for (int i = 1; i < length; ++i) {\n- if (points[i].x < minX) {\n- minX = points[i].x;\n- }\n- if (points[i].x > maxX) {\n- maxX = points[i].x;\n- }\n- if (points[i].y < minY) {\n- minY = points[i].y;\n- }\n- if (points[i].y > maxY) {\n- maxY = points[i].y;\n- }\n- }\n- // return a pair of ranges on the X and Y axis, respectively\n- return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n- }\n-\n- public static GeoPoint convertToGreatCircle(GeoPoint point) {\n- return convertToGreatCircle(point.y, point.x);\n- }\n-\n- public static GeoPoint convertToGreatCircle(double lat, double lon) {\n- GeoPoint p = new GeoPoint(lat, lon);\n- // convert the point to standard lat/lon bounds\n- normalizePoint(p);\n-\n- if (p.x < 0.0D) {\n- p.x += 360.0D;\n- }\n-\n- if (p.y < 0.0D) {\n- p.y +=180.0D;\n- }\n- return p;\n- }\n-\n private GeoUtils() {\n }\n }", "filename": "src/main/java/org/elasticsearch/common/geo/GeoUtils.java", "status": "modified" }, { "diff": "@@ -23,22 +23,22 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.geo.GeoUtils;\n+import com.spatial4j.core.shape.ShapeCollection;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n import com.vividsolutions.jts.geom.LineString;\n \n public abstract class BaseLineStringBuilder<E extends BaseLineStringBuilder<E>> extends PointCollection<E> {\n \n protected BaseLineStringBuilder() {\n- this(new ArrayList<GeoPoint>());\n+ this(new ArrayList<Coordinate>());\n }\n \n- protected BaseLineStringBuilder(ArrayList<GeoPoint> points) {\n+ protected BaseLineStringBuilder(ArrayList<Coordinate> points) {\n super(points);\n }\n \n@@ -49,7 +49,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n @Override\n public Shape build() {\n- GeoPoint[] coordinates = points.toArray(new GeoPoint[points.size()]);\n+ Coordinate[] coordinates = points.toArray(new Coordinate[points.size()]);\n Geometry geometry;\n if(wrapdateline) {\n ArrayList<LineString> strings = decompose(FACTORY, coordinates, new ArrayList<LineString>());\n@@ -67,9 +67,9 @@ public Shape build() {\n return jtsGeometry(geometry);\n }\n \n- protected static ArrayList<LineString> decompose(GeometryFactory factory, GeoPoint[] coordinates, ArrayList<LineString> strings) {\n- for(GeoPoint[] part : 
decompose(+DATELINE, coordinates)) {\n- for(GeoPoint[] line : decompose(-DATELINE, part)) {\n+ protected static ArrayList<LineString> decompose(GeometryFactory factory, Coordinate[] coordinates, ArrayList<LineString> strings) {\n+ for(Coordinate[] part : decompose(+DATELINE, coordinates)) {\n+ for(Coordinate[] line : decompose(-DATELINE, part)) {\n strings.add(factory.createLineString(line));\n }\n }\n@@ -83,16 +83,16 @@ protected static ArrayList<LineString> decompose(GeometryFactory factory, GeoPoi\n * @param coordinates coordinates forming the linestring\n * @return array of linestrings given as coordinate arrays \n */\n- protected static GeoPoint[][] decompose(double dateline, GeoPoint[] coordinates) {\n+ protected static Coordinate[][] decompose(double dateline, Coordinate[] coordinates) {\n int offset = 0;\n- ArrayList<GeoPoint[]> parts = new ArrayList<>();\n+ ArrayList<Coordinate[]> parts = new ArrayList<>();\n \n double shift = coordinates[0].x > DATELINE ? DATELINE : (coordinates[0].x < -DATELINE ? -DATELINE : 0);\n \n for (int i = 1; i < coordinates.length; i++) {\n double t = intersection(coordinates[i-1], coordinates[i], dateline);\n if(!Double.isNaN(t)) {\n- GeoPoint[] part;\n+ Coordinate[] part;\n if(t<1) {\n part = Arrays.copyOfRange(coordinates, offset, i+1);\n part[part.length-1] = Edge.position(coordinates[i-1], coordinates[i], t);\n@@ -111,16 +111,16 @@ protected static GeoPoint[][] decompose(double dateline, GeoPoint[] coordinates)\n if(offset == 0) {\n parts.add(shift(shift, coordinates));\n } else if(offset < coordinates.length-1) {\n- GeoPoint[] part = Arrays.copyOfRange(coordinates, offset, coordinates.length);\n+ Coordinate[] part = Arrays.copyOfRange(coordinates, offset, coordinates.length);\n parts.add(shift(shift, part));\n }\n- return parts.toArray(new GeoPoint[parts.size()][]);\n+ return parts.toArray(new Coordinate[parts.size()][]);\n }\n \n- private static GeoPoint[] shift(double shift, GeoPoint...coordinates) {\n+ private static Coordinate[] shift(double shift, Coordinate...coordinates) {\n if(shift != 0) {\n for (int j = 0; j < coordinates.length; j++) {\n- coordinates[j] = new GeoPoint(coordinates[j].y, coordinates[j].x - 2 * shift);\n+ coordinates[j] = new Coordinate(coordinates[j].x - 2 * shift, coordinates[j].y);\n }\n }\n return coordinates;", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BaseLineStringBuilder.java", "status": "modified" }, { "diff": "@@ -20,14 +20,8 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.Geometry;\n-import com.vividsolutions.jts.geom.GeometryFactory;\n-import com.vividsolutions.jts.geom.LinearRing;\n-import com.vividsolutions.jts.geom.MultiPolygon;\n-import com.vividsolutions.jts.geom.Polygon;\n+import com.vividsolutions.jts.geom.*;\n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -73,7 +67,7 @@ public E point(double longitude, double latitude) {\n * @param coordinate coordinate of the new point\n * @return this\n */\n- public E point(GeoPoint coordinate) {\n+ public E point(Coordinate coordinate) {\n shell.point(coordinate);\n return thisRef();\n }\n@@ -83,7 +77,7 @@ public E point(GeoPoint coordinate) {\n * @param coordinates coordinates of the new points to add\n * @return this\n */\n- public E 
points(GeoPoint...coordinates) {\n+ public E points(Coordinate...coordinates) {\n shell.points(coordinates);\n return thisRef();\n }\n@@ -127,7 +121,7 @@ public ShapeBuilder close() {\n * \n * @return coordinates of the polygon\n */\n- public GeoPoint[][][] coordinates() {\n+ public Coordinate[][][] coordinates() {\n int numEdges = shell.points.size()-1; // Last point is repeated \n for (int i = 0; i < holes.size(); i++) {\n numEdges += holes.get(i).points.size()-1;\n@@ -176,7 +170,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n public Geometry buildGeometry(GeometryFactory factory, boolean fixDateline) {\n if(fixDateline) {\n- GeoPoint[][][] polygons = coordinates();\n+ Coordinate[][][] polygons = coordinates();\n return polygons.length == 1\n ? polygon(factory, polygons[0])\n : multipolygon(factory, polygons);\n@@ -199,16 +193,16 @@ protected Polygon toPolygon(GeometryFactory factory) {\n return factory.createPolygon(shell, holes);\n }\n \n- protected static LinearRing linearRing(GeometryFactory factory, ArrayList<GeoPoint> coordinates) {\n- return factory.createLinearRing(coordinates.toArray(new GeoPoint[coordinates.size()]));\n+ protected static LinearRing linearRing(GeometryFactory factory, ArrayList<Coordinate> coordinates) {\n+ return factory.createLinearRing(coordinates.toArray(new Coordinate[coordinates.size()]));\n }\n \n @Override\n public GeoShapeType type() {\n return TYPE;\n }\n \n- protected static Polygon polygon(GeometryFactory factory, GeoPoint[][] polygon) {\n+ protected static Polygon polygon(GeometryFactory factory, Coordinate[][] polygon) {\n LinearRing shell = factory.createLinearRing(polygon[0]);\n LinearRing[] holes;\n \n@@ -233,7 +227,7 @@ protected static Polygon polygon(GeometryFactory factory, GeoPoint[][] polygon)\n * @param polygons definition of polygons\n * @return a new Multipolygon\n */\n- protected static MultiPolygon multipolygon(GeometryFactory factory, GeoPoint[][][] polygons) {\n+ protected static MultiPolygon multipolygon(GeometryFactory factory, Coordinate[][][] polygons) {\n Polygon[] polygonSet = new Polygon[polygons.length];\n for (int i = 0; i < polygonSet.length; i++) {\n polygonSet[i] = polygon(factory, polygons[i]);\n@@ -289,18 +283,18 @@ private static int component(final Edge edge, final int id, final ArrayList<Edge\n * @param coordinates Array of coordinates to write the result to\n * @return the coordinates parameter\n */\n- private static GeoPoint[] coordinates(Edge component, GeoPoint[] coordinates) {\n+ private static Coordinate[] coordinates(Edge component, Coordinate[] coordinates) {\n for (int i = 0; i < coordinates.length; i++) {\n coordinates[i] = (component = component.next).coordinate;\n }\n return coordinates;\n }\n \n- private static GeoPoint[][][] buildCoordinates(ArrayList<ArrayList<GeoPoint[]>> components) {\n- GeoPoint[][][] result = new GeoPoint[components.size()][][];\n+ private static Coordinate[][][] buildCoordinates(ArrayList<ArrayList<Coordinate[]>> components) {\n+ Coordinate[][][] result = new Coordinate[components.size()][][];\n for (int i = 0; i < result.length; i++) {\n- ArrayList<GeoPoint[]> component = components.get(i);\n- result[i] = component.toArray(new GeoPoint[component.size()][]);\n+ ArrayList<Coordinate[]> component = components.get(i);\n+ result[i] = component.toArray(new Coordinate[component.size()][]);\n }\n \n if(debugEnabled()) {\n@@ -315,45 +309,44 @@ private static GeoPoint[][][] buildCoordinates(ArrayList<ArrayList<GeoPoint[]>>\n return result;\n } \n 
\n- private static final GeoPoint[][] EMPTY = new GeoPoint[0][];\n+ private static final Coordinate[][] EMPTY = new Coordinate[0][];\n \n- private static GeoPoint[][] holes(Edge[] holes, int numHoles) {\n+ private static Coordinate[][] holes(Edge[] holes, int numHoles) {\n if (numHoles == 0) {\n return EMPTY;\n }\n- final GeoPoint[][] points = new GeoPoint[numHoles][];\n+ final Coordinate[][] points = new Coordinate[numHoles][];\n \n for (int i = 0; i < numHoles; i++) {\n int length = component(holes[i], -(i+1), null); // mark as visited by inverting the sign\n- points[i] = coordinates(holes[i], new GeoPoint[length+1]);\n+ points[i] = coordinates(holes[i], new Coordinate[length+1]);\n }\n \n return points;\n } \n \n- private static Edge[] edges(Edge[] edges, int numHoles, ArrayList<ArrayList<GeoPoint[]>> components) {\n+ private static Edge[] edges(Edge[] edges, int numHoles, ArrayList<ArrayList<Coordinate[]>> components) {\n ArrayList<Edge> mainEdges = new ArrayList<>(edges.length);\n \n for (int i = 0; i < edges.length; i++) {\n if (edges[i].component >= 0) {\n int length = component(edges[i], -(components.size()+numHoles+1), mainEdges);\n- ArrayList<GeoPoint[]> component = new ArrayList<>();\n- component.add(coordinates(edges[i], new GeoPoint[length+1]));\n+ ArrayList<Coordinate[]> component = new ArrayList<>();\n+ component.add(coordinates(edges[i], new Coordinate[length+1]));\n components.add(component);\n }\n }\n \n return mainEdges.toArray(new Edge[mainEdges.size()]);\n }\n \n- private static GeoPoint[][][] compose(Edge[] edges, Edge[] holes, int numHoles) {\n- final ArrayList<ArrayList<GeoPoint[]>> components = new ArrayList<>();\n+ private static Coordinate[][][] compose(Edge[] edges, Edge[] holes, int numHoles) {\n+ final ArrayList<ArrayList<Coordinate[]>> components = new ArrayList<>();\n assign(holes, holes(holes, numHoles), numHoles, edges(edges, numHoles, components), components);\n return buildCoordinates(components);\n }\n \n- private static void assign(Edge[] holes, GeoPoint[][] points, int numHoles, Edge[] edges, ArrayList<ArrayList<GeoPoint[]>>\n- components) {\n+ private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Edge[] edges, ArrayList<ArrayList<Coordinate[]>> components) {\n // Assign Hole to related components\n // To find the new component the hole belongs to all intersections of the\n // polygon edges with a vertical line are calculated. This vertical line\n@@ -468,13 +461,14 @@ private static void connect(Edge in, Edge out) {\n }\n \n private static int createEdges(int component, Orientation orientation, BaseLineStringBuilder<?> shell,\n- BaseLineStringBuilder<?> hole, Edge[] edges, int edgeOffset) {\n+ BaseLineStringBuilder<?> hole,\n+ Edge[] edges, int offset) {\n // inner rings (holes) have an opposite direction than the outer rings\n // XOR will invert the orientation for outer ring cases (Truth Table:, T/T = F, T/F = T, F/T = T, F/F = F)\n boolean direction = (component != 0 ^ orientation == Orientation.RIGHT);\n // set the points array accordingly (shell or hole)\n- GeoPoint[] points = (hole != null) ? hole.coordinates(false) : shell.coordinates(false);\n- Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, edges, edgeOffset, points.length-1);\n+ Coordinate[] points = (hole != null) ? 
hole.coordinates(false) : shell.coordinates(false);\n+ Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, 0, edges, offset, points.length-1);\n return points.length-1;\n }\n \n@@ -483,17 +477,17 @@ public static class Ring<P extends ShapeBuilder> extends BaseLineStringBuilder<R\n private final P parent;\n \n protected Ring(P parent) {\n- this(parent, new ArrayList<GeoPoint>());\n+ this(parent, new ArrayList<Coordinate>());\n }\n \n- protected Ring(P parent, ArrayList<GeoPoint> points) {\n+ protected Ring(P parent, ArrayList<Coordinate> points) {\n super(points);\n this.parent = parent;\n }\n \n public P close() {\n- GeoPoint start = points.get(0);\n- GeoPoint end = points.get(points.size()-1);\n+ Coordinate start = points.get(0);\n+ Coordinate end = points.get(points.size()-1);\n if(start.x != end.x || start.y != end.y) {\n points.add(start);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Circle;\n-import org.elasticsearch.common.geo.GeoPoint;\n+import com.vividsolutions.jts.geom.Coordinate;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -34,15 +34,15 @@ public class CircleBuilder extends ShapeBuilder {\n \n private DistanceUnit unit;\n private double radius;\n- private GeoPoint center;\n+ private Coordinate center;\n \n /**\n * Set the center of the circle\n * \n * @param center coordinate of the circles center\n * @return this\n */\n- public CircleBuilder center(GeoPoint center) {\n+ public CircleBuilder center(Coordinate center) {\n this.center = center;\n return this;\n }\n@@ -54,7 +54,7 @@ public CircleBuilder center(GeoPoint center) {\n * @return this\n */\n public CircleBuilder center(double lon, double lat) {\n- return center(new GeoPoint(lat, lon));\n+ return center(new Coordinate(lon, lat));\n }\n \n /**", "filename": "src/main/java/org/elasticsearch/common/geo/builders/CircleBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Rectangle;\n-import org.elasticsearch.common.geo.GeoPoint;\n+import com.vividsolutions.jts.geom.Coordinate;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -29,8 +29,8 @@ public class EnvelopeBuilder extends ShapeBuilder {\n \n public static final GeoShapeType TYPE = GeoShapeType.ENVELOPE; \n \n- protected GeoPoint topLeft;\n- protected GeoPoint bottomRight;\n+ protected Coordinate topLeft;\n+ protected Coordinate bottomRight;\n \n public EnvelopeBuilder() {\n this(Orientation.RIGHT);\n@@ -40,7 +40,7 @@ public EnvelopeBuilder(Orientation orientation) {\n super(orientation);\n }\n \n- public EnvelopeBuilder topLeft(GeoPoint topLeft) {\n+ public EnvelopeBuilder topLeft(Coordinate topLeft) {\n this.topLeft = topLeft;\n return this;\n }\n@@ -49,7 +49,7 @@ public EnvelopeBuilder topLeft(double longitude, double latitude) {\n return topLeft(coordinate(longitude, latitude));\n }\n \n- public EnvelopeBuilder bottomRight(GeoPoint bottomRight) {\n+ public EnvelopeBuilder bottomRight(Coordinate bottomRight) {\n this.bottomRight = bottomRight;\n return this;\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/EnvelopeBuilder.java", "status": "modified" }, { 
"diff": "@@ -19,10 +19,11 @@\n \n package org.elasticsearch.common.geo.builders;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n+import com.spatial4j.core.shape.jts.JtsGeometry;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.LineString;\n \n@@ -47,8 +48,8 @@ public MultiLineStringBuilder linestring(BaseLineStringBuilder<?> line) {\n return this;\n }\n \n- public GeoPoint[][] coordinates() {\n- GeoPoint[][] result = new GeoPoint[lines.size()][];\n+ public Coordinate[][] coordinates() {\n+ Coordinate[][] result = new Coordinate[lines.size()][];\n for (int i = 0; i < result.length; i++) {\n result[i] = lines.get(i).coordinates(false);\n }\n@@ -112,7 +113,7 @@ public MultiLineStringBuilder end() {\n return collection;\n }\n \n- public GeoPoint[] coordinates() {\n+ public Coordinate[] coordinates() {\n return super.coordinates(false);\n }\n ", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiLineStringBuilder.java", "status": "modified" }, { "diff": "@@ -22,7 +22,7 @@\n import com.spatial4j.core.shape.Point;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.ShapeCollection;\n-import org.elasticsearch.common.geo.GeoPoint;\n+import com.vividsolutions.jts.geom.Coordinate;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -48,7 +48,7 @@ public Shape build() {\n //Could wrap JtsGeometry but probably slower due to conversions to/from JTS in relate()\n //MultiPoint geometry = FACTORY.createMultiPoint(points.toArray(new Coordinate[points.size()]));\n List<Point> shapes = new ArrayList<>(points.size());\n- for (GeoPoint coord : points) {\n+ for (Coordinate coord : points) {\n shapes.add(SPATIAL_CONTEXT.makePoint(coord.x, coord.y));\n }\n return new ShapeCollection<>(shapes, SPATIAL_CONTEXT);", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPointBuilder.java", "status": "modified" }, { "diff": "@@ -24,10 +24,10 @@\n import java.util.List;\n \n import com.spatial4j.core.shape.ShapeCollection;\n-import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n+import com.vividsolutions.jts.geom.Coordinate;\n \n public class MultiPolygonBuilder extends ShapeBuilder {\n \n@@ -84,7 +84,7 @@ public Shape build() {\n \n if(wrapdateline) {\n for (BasePolygonBuilder<?> polygon : this.polygons) {\n- for(GeoPoint[][] part : polygon.coordinates()) {\n+ for(Coordinate[][] part : polygon.coordinates()) {\n shapes.add(jtsGeometry(PolygonBuilder.polygon(FACTORY, part)));\n }\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPolygonBuilder.java", "status": "modified" }, { "diff": "@@ -21,18 +21,18 @@\n \n import java.io.IOException;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Point;\n+import com.vividsolutions.jts.geom.Coordinate;\n \n public class PointBuilder extends ShapeBuilder {\n \n public static final GeoShapeType TYPE = GeoShapeType.POINT;\n \n- private GeoPoint coordinate;\n+ private Coordinate coordinate;\n \n- public PointBuilder coordinate(GeoPoint coordinate) {\n+ public PointBuilder coordinate(Coordinate coordinate) {\n this.coordinate = coordinate;\n return this;\n }", "filename": 
"src/main/java/org/elasticsearch/common/geo/builders/PointBuilder.java", "status": "modified" }, { "diff": "@@ -24,22 +24,23 @@\n import java.util.Arrays;\n import java.util.Collection;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n+import com.vividsolutions.jts.geom.Coordinate;\n+\n /**\n * The {@link PointCollection} is an abstract base implementation for all GeoShapes. It simply handles a set of points. \n */\n public abstract class PointCollection<E extends PointCollection<E>> extends ShapeBuilder {\n \n- protected final ArrayList<GeoPoint> points;\n+ protected final ArrayList<Coordinate> points;\n protected boolean translated = false;\n \n protected PointCollection() {\n- this(new ArrayList<GeoPoint>());\n+ this(new ArrayList<Coordinate>());\n }\n \n- protected PointCollection(ArrayList<GeoPoint> points) {\n+ protected PointCollection(ArrayList<Coordinate> points) {\n this.points = points;\n }\n \n@@ -63,28 +64,28 @@ public E point(double longitude, double latitude) {\n * @param coordinate coordinate of the point\n * @return this\n */\n- public E point(GeoPoint coordinate) {\n+ public E point(Coordinate coordinate) {\n this.points.add(coordinate);\n return thisRef();\n }\n \n /**\n * Add a array of points to the collection\n * \n- * @param coordinates array of {@link GeoPoint}s to add\n+ * @param coordinates array of {@link Coordinate}s to add\n * @return this\n */\n- public E points(GeoPoint...coordinates) {\n+ public E points(Coordinate...coordinates) {\n return this.points(Arrays.asList(coordinates));\n }\n \n /**\n * Add a collection of points to the collection\n * \n- * @param coordinates array of {@link GeoPoint}s to add\n+ * @param coordinates array of {@link Coordinate}s to add\n * @return this\n */\n- public E points(Collection<? extends GeoPoint> coordinates) {\n+ public E points(Collection<? extends Coordinate> coordinates) {\n this.points.addAll(coordinates);\n return thisRef();\n }\n@@ -95,8 +96,8 @@ public E points(Collection<? 
extends GeoPoint> coordinates) {\n * @param closed if set to true the first point of the array is repeated as last element\n * @return Array of coordinates\n */\n- protected GeoPoint[] coordinates(boolean closed) {\n- GeoPoint[] result = points.toArray(new GeoPoint[points.size() + (closed?1:0)]);\n+ protected Coordinate[] coordinates(boolean closed) {\n+ Coordinate[] result = points.toArray(new Coordinate[points.size() + (closed?1:0)]);\n if(closed) {\n result[result.length-1] = result[0];\n }\n@@ -113,12 +114,12 @@ protected GeoPoint[] coordinates(boolean closed) {\n */\n protected XContentBuilder coordinatesToXcontent(XContentBuilder builder, boolean closed) throws IOException {\n builder.startArray();\n- for(GeoPoint point : points) {\n+ for(Coordinate point : points) {\n toXContent(builder, point);\n }\n if(closed) {\n- GeoPoint start = points.get(0);\n- GeoPoint end = points.get(points.size()-1);\n+ Coordinate start = points.get(0);\n+ Coordinate end = points.get(points.size()-1);\n if(start.x != end.x || start.y != end.y) {\n toXContent(builder, points.get(0));\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PointCollection.java", "status": "modified" }, { "diff": "@@ -21,19 +21,19 @@\n \n import java.util.ArrayList;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n+import com.vividsolutions.jts.geom.Coordinate;\n \n public class PolygonBuilder extends BasePolygonBuilder<PolygonBuilder> {\n \n public PolygonBuilder() {\n- this(new ArrayList<GeoPoint>(), Orientation.RIGHT);\n+ this(new ArrayList<Coordinate>(), Orientation.RIGHT);\n }\n \n public PolygonBuilder(Orientation orientation) {\n- this(new ArrayList<GeoPoint>(), orientation);\n+ this(new ArrayList<Coordinate>(), orientation);\n }\n \n- protected PolygonBuilder(ArrayList<GeoPoint> points, Orientation orientation) {\n+ protected PolygonBuilder(ArrayList<Coordinate> points, Orientation orientation) {\n super(orientation);\n this.shell = new Ring<>(this, points);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PolygonBuilder.java", "status": "modified" }, { "diff": "@@ -22,12 +22,12 @@\n import com.spatial4j.core.context.jts.JtsSpatialContext;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n+import org.apache.commons.lang3.tuple.Pair;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n@@ -57,7 +57,7 @@ public abstract class ShapeBuilder implements ToXContent {\n DEBUG = debug;\n }\n \n- public static final double DATELINE = GeoUtils.DATELINE;\n+ public static final double DATELINE = 180;\n // TODO how might we use JtsSpatialContextFactory to configure the context (esp. 
for non-geo)?\n public static final JtsSpatialContext SPATIAL_CONTEXT = JtsSpatialContext.GEO;\n public static final GeometryFactory FACTORY = SPATIAL_CONTEXT.getGeometryFactory();\n@@ -84,8 +84,8 @@ protected ShapeBuilder(Orientation orientation) {\n this.orientation = orientation;\n }\n \n- protected static GeoPoint coordinate(double longitude, double latitude) {\n- return new GeoPoint(latitude, longitude);\n+ protected static Coordinate coordinate(double longitude, double latitude) {\n+ return new Coordinate(longitude, latitude);\n }\n \n protected JtsGeometry jtsGeometry(Geometry geom) {\n@@ -106,15 +106,15 @@ protected JtsGeometry jtsGeometry(Geometry geom) {\n * @return a new {@link PointBuilder}\n */\n public static PointBuilder newPoint(double longitude, double latitude) {\n- return newPoint(new GeoPoint(latitude, longitude));\n+ return newPoint(new Coordinate(longitude, latitude));\n }\n \n /**\n- * Create a new {@link PointBuilder} from a {@link GeoPoint}\n+ * Create a new {@link PointBuilder} from a {@link Coordinate}\n * @param coordinate coordinate defining the position of the point\n * @return a new {@link PointBuilder}\n */\n- public static PointBuilder newPoint(GeoPoint coordinate) {\n+ public static PointBuilder newPoint(Coordinate coordinate) {\n return new PointBuilder().coordinate(coordinate);\n }\n \n@@ -250,7 +250,7 @@ private static CoordinateNode parseCoordinates(XContentParser parser) throws IOE\n token = parser.nextToken();\n double lat = parser.doubleValue();\n token = parser.nextToken();\n- return new CoordinateNode(new GeoPoint(lat, lon));\n+ return new CoordinateNode(new Coordinate(lon, lat));\n } else if (token == XContentParser.Token.VALUE_NULL) {\n throw new ElasticsearchIllegalArgumentException(\"coordinates cannot contain NULL values)\");\n }\n@@ -289,7 +289,7 @@ public static ShapeBuilder parse(XContentParser parser, GeoShapeFieldMapper geoD\n return GeoShapeType.parse(parser, geoDocMapper);\n }\n \n- protected static XContentBuilder toXContent(XContentBuilder builder, GeoPoint coordinate) throws IOException {\n+ protected static XContentBuilder toXContent(XContentBuilder builder, Coordinate coordinate) throws IOException {\n return builder.startArray().value(coordinate.x).value(coordinate.y).endArray();\n }\n \n@@ -309,11 +309,11 @@ public static Orientation orientationFromString(String orientation) {\n }\n }\n \n- protected static GeoPoint shift(GeoPoint coordinate, double dateline) {\n+ protected static Coordinate shift(Coordinate coordinate, double dateline) {\n if (dateline == 0) {\n return coordinate;\n } else {\n- return new GeoPoint(coordinate.y, -2 * dateline + coordinate.x);\n+ return new Coordinate(-2 * dateline + coordinate.x, coordinate.y);\n }\n }\n \n@@ -325,7 +325,7 @@ protected static GeoPoint shift(GeoPoint coordinate, double dateline) {\n \n /**\n * Calculate the intersection of a line segment and a vertical dateline.\n- *\n+ * \n * @param p1\n * start-point of the line segment\n * @param p2\n@@ -336,7 +336,7 @@ protected static GeoPoint shift(GeoPoint coordinate, double dateline) {\n * segment intersects with the line segment. 
Otherwise this method\n * returns {@link Double#NaN}\n */\n- protected static final double intersection(GeoPoint p1, GeoPoint p2, double dateline) {\n+ protected static final double intersection(Coordinate p1, Coordinate p2, double dateline) {\n if (p1.x == p2.x && p1.x != dateline) {\n return Double.NaN;\n } else if (p1.x == p2.x && p1.x == dateline) {\n@@ -366,8 +366,8 @@ protected static int intersections(double dateline, Edge[] edges) {\n int numIntersections = 0;\n assert !Double.isNaN(dateline);\n for (int i = 0; i < edges.length; i++) {\n- GeoPoint p1 = edges[i].coordinate;\n- GeoPoint p2 = edges[i].next.coordinate;\n+ Coordinate p1 = edges[i].coordinate;\n+ Coordinate p2 = edges[i].next.coordinate;\n assert !Double.isNaN(p2.x) && !Double.isNaN(p1.x); \n edges[i].intersect = Edge.MAX_COORDINATE;\n \n@@ -384,21 +384,21 @@ protected static int intersections(double dateline, Edge[] edges) {\n /**\n * Node used to represent a tree of coordinates.\n * <p/>\n- * Can either be a leaf node consisting of a GeoPoint, or a parent with\n+ * Can either be a leaf node consisting of a Coordinate, or a parent with\n * children\n */\n protected static class CoordinateNode implements ToXContent {\n \n- protected final GeoPoint coordinate;\n+ protected final Coordinate coordinate;\n protected final List<CoordinateNode> children;\n \n /**\n * Creates a new leaf CoordinateNode\n * \n * @param coordinate\n- * GeoPoint for the Node\n+ * Coordinate for the Node\n */\n- protected CoordinateNode(GeoPoint coordinate) {\n+ protected CoordinateNode(Coordinate coordinate) {\n this.coordinate = coordinate;\n this.children = null;\n }\n@@ -434,17 +434,17 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n \n /**\n- * This helper class implements a linked list for {@link GeoPoint}. It contains\n+ * This helper class implements a linked list for {@link Coordinate}. 
It contains\n * fields for a dateline intersection and component id \n */\n protected static final class Edge {\n- GeoPoint coordinate; // coordinate of the start point\n+ Coordinate coordinate; // coordinate of the start point\n Edge next; // next segment\n- GeoPoint intersect; // potential intersection with dateline\n+ Coordinate intersect; // potential intersection with dateline\n int component = -1; // id of the component this edge belongs to\n- public static final GeoPoint MAX_COORDINATE = new GeoPoint(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);\n+ public static final Coordinate MAX_COORDINATE = new Coordinate(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);\n \n- protected Edge(GeoPoint coordinate, Edge next, GeoPoint intersection) {\n+ protected Edge(Coordinate coordinate, Edge next, Coordinate intersection) {\n this.coordinate = coordinate;\n this.next = next;\n this.intersect = intersection;\n@@ -453,11 +453,11 @@ protected Edge(GeoPoint coordinate, Edge next, GeoPoint intersection) {\n }\n }\n \n- protected Edge(GeoPoint coordinate, Edge next) {\n+ protected Edge(Coordinate coordinate, Edge next) {\n this(coordinate, next, Edge.MAX_COORDINATE);\n }\n \n- private static final int top(GeoPoint[] points, int offset, int length) {\n+ private static final int top(Coordinate[] points, int offset, int length) {\n int top = 0; // we start at 1 here since top points to 0\n for (int i = 1; i < length; i++) {\n if (points[offset + i].y < points[offset + top].y) {\n@@ -471,6 +471,29 @@ private static final int top(GeoPoint[] points, int offset, int length) {\n return top;\n }\n \n+ private static final Pair range(Coordinate[] points, int offset, int length) {\n+ double minX = points[0].x;\n+ double maxX = points[0].x;\n+ double minY = points[0].y;\n+ double maxY = points[0].y;\n+ // compute the bounding coordinates (@todo: cleanup brute force)\n+ for (int i = 1; i < length; ++i) {\n+ if (points[offset + i].x < minX) {\n+ minX = points[offset + i].x;\n+ }\n+ if (points[offset + i].x > maxX) {\n+ maxX = points[offset + i].x;\n+ }\n+ if (points[offset + i].y < minY) {\n+ minY = points[offset + i].y;\n+ }\n+ if (points[offset + i].y > maxY) {\n+ maxY = points[offset + i].y;\n+ }\n+ }\n+ return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n+ }\n+\n /**\n * Concatenate a set of points to a polygon\n * \n@@ -480,6 +503,8 @@ private static final int top(GeoPoint[] points, int offset, int length) {\n * direction of the ring\n * @param points\n * list of points to concatenate\n+ * @param pointOffset\n+ * index of the first point\n * @param edges\n * Array of edges to write the result to\n * @param edgeOffset\n@@ -488,29 +513,27 @@ private static final int top(GeoPoint[] points, int offset, int length) {\n * number of points to use\n * @return the edges creates\n */\n- private static Edge[] concat(int component, boolean direction, GeoPoint[] points, Edge[] edges, final int edgeOffset,\n- int length) {\n+ private static Edge[] concat(int component, boolean direction, Coordinate[] points, final int pointOffset, Edge[] edges, final int edgeOffset,\n+ int length) {\n assert edges.length >= length+edgeOffset;\n- assert points.length >= length;\n- edges[edgeOffset] = new Edge(points[0], null);\n- int edgeEnd = edgeOffset + length;\n-\n- for (int i = edgeOffset+1, p = 1; i < edgeEnd; ++i, ++p) {\n+ assert points.length >= length+pointOffset;\n+ edges[edgeOffset] = new Edge(points[pointOffset], null);\n+ for (int i = 1; i < length; i++) {\n if (direction) {\n- edges[i] = new Edge(points[p], 
edges[i - 1]);\n- edges[i].component = component;\n+ edges[edgeOffset + i] = new Edge(points[pointOffset + i], edges[edgeOffset + i - 1]);\n+ edges[edgeOffset + i].component = component;\n } else {\n- edges[i - 1].next = edges[i] = new Edge(points[p], null);\n- edges[i - 1].component = component;\n+ edges[edgeOffset + i - 1].next = edges[edgeOffset + i] = new Edge(points[pointOffset + i], null);\n+ edges[edgeOffset + i - 1].component = component;\n }\n }\n \n if (direction) {\n- edges[edgeOffset].next = edges[edgeEnd - 1];\n+ edges[edgeOffset].next = edges[edgeOffset + length - 1];\n edges[edgeOffset].component = component;\n } else {\n- edges[edgeEnd - 1].next = edges[edgeOffset];\n- edges[edgeEnd - 1].component = component;\n+ edges[edgeOffset + length - 1].next = edges[edgeOffset];\n+ edges[edgeOffset + length - 1].component = component;\n }\n \n return edges;\n@@ -521,47 +544,82 @@ private static Edge[] concat(int component, boolean direction, GeoPoint[] points\n * \n * @param points\n * array of point\n+ * @param offset\n+ * index of the first point\n * @param length\n * number of points\n * @return Array of edges\n */\n protected static Edge[] ring(int component, boolean direction, boolean handedness, BaseLineStringBuilder<?> shell,\n- GeoPoint[] points, Edge[] edges, int edgeOffset, int length) {\n+ Coordinate[] points, int offset, Edge[] edges, int toffset, int length) {\n // calculate the direction of the points:\n- boolean orientation = GeoUtils.computePolyOrientation(points, length);\n- boolean corrected = GeoUtils.correctPolyAmbiguity(points, handedness, orientation, component, length,\n- shell.translated);\n-\n- // correct the orientation post translation (ccw for shell, cw for holes)\n- if (corrected && (component == 0 || (component != 0 && handedness == orientation))) {\n+ // find the point a the top of the set and check its\n+ // neighbors orientation. So direction is equivalent\n+ // to clockwise/counterclockwise\n+ final int top = top(points, offset, length);\n+ final int prev = (offset + ((top + length - 1) % length));\n+ final int next = (offset + ((top + 1) % length));\n+ boolean orientation = points[offset + prev].x > points[offset + next].x;\n+\n+ // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness) \n+ // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n+ // thus if orientation is computed as cw, the logic will translate points across dateline\n+ // and convert to a right handed system\n+\n+ // compute the bounding box and calculate range\n+ Pair<Pair, Pair> range = range(points, offset, length);\n+ final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n+ // translate the points if the following is true\n+ // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres \n+ // (translation would result in a collapsed poly)\n+ // 2. 
the shell of the candidate hole has been translated (to preserve the coordinate system)\n+ boolean incorrectOrientation = component == 0 && handedness != orientation;\n+ if ( (incorrectOrientation && (rng > DATELINE && rng != 2*DATELINE)) || (shell.translated && component != 0)) {\n+ translate(points);\n+ // flip the translation bit if the shell is being translated\n if (component == 0) {\n- shell.translated = corrected;\n+ shell.translated = true;\n+ }\n+ // correct the orientation post translation (ccw for shell, cw for holes)\n+ if (component == 0 || (component != 0 && handedness == orientation)) {\n+ orientation = !orientation;\n+ }\n+ }\n+ return concat(component, direction ^ orientation, points, offset, edges, toffset, length);\n+ }\n+\n+ /**\n+ * Transforms coordinates in the eastern hemisphere (-180:0) to a (180:360) range \n+ * @param points\n+ */\n+ protected static void translate(Coordinate[] points) {\n+ for (Coordinate c : points) {\n+ if (c.x < 0) {\n+ c.x += 2*DATELINE;\n }\n- orientation = !orientation;\n }\n- return concat(component, direction ^ orientation, points, edges, edgeOffset, length);\n }\n \n /**\n * Set the intersection of this line segment to the given position\n * \n * @param position\n * position of the intersection [0..1]\n- * @return the {@link GeoPoint} of the intersection\n+ * @return the {@link Coordinate} of the intersection\n */\n- protected GeoPoint intersection(double position) {\n+ protected Coordinate intersection(double position) {\n return intersect = position(coordinate, next.coordinate, position);\n }\n \n- public static GeoPoint position(GeoPoint p1, GeoPoint p2, double position) {\n+ public static Coordinate position(Coordinate p1, Coordinate p2, double position) {\n if (position == 0) {\n return p1;\n } else if (position == 1) {\n return p2;\n } else {\n final double x = p1.x + position * (p2.x - p1.x);\n final double y = p1.y + position * (p2.y - p1.y);\n- return new GeoPoint(y, x);\n+ return new Coordinate(x, y);\n }\n }\n \n@@ -735,12 +793,12 @@ protected static EnvelopeBuilder parseEnvelope(CoordinateNode coordinates, Orien\n \"geo_shape ('envelope') when expecting an array of 2 coordinates\");\n }\n // verify coordinate bounds, correct if necessary\n- GeoPoint uL = coordinates.children.get(0).coordinate;\n- GeoPoint lR = coordinates.children.get(1).coordinate;\n+ Coordinate uL = coordinates.children.get(0).coordinate;\n+ Coordinate lR = coordinates.children.get(1).coordinate;\n if (((lR.x < uL.x) || (uL.y < lR.y))) {\n- GeoPoint uLtmp = uL;\n- uL = new GeoPoint(Math.max(uL.y, lR.y), Math.min(uL.x, lR.x));\n- lR = new GeoPoint(Math.min(uLtmp.y, lR.y), Math.max(uLtmp.x, lR.x));\n+ Coordinate uLtmp = uL;\n+ uL = new Coordinate(Math.min(uL.x, lR.x), Math.max(uL.y, lR.y));\n+ lR = new Coordinate(Math.max(uLtmp.x, lR.x), Math.min(uLtmp.y, lR.y));\n }\n return newEnvelope(orientation).topLeft(uL).bottomRight(lR);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -294,10 +294,6 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n if (includeDefaults || defaultStrategy.getDistErrPct() != Defaults.DISTANCE_ERROR_PCT) {\n builder.field(Names.DISTANCE_ERROR_PCT, defaultStrategy.getDistErrPct());\n }\n-\n- if (includeDefaults || shapeOrientation != Defaults.ORIENTATION ) {\n- builder.field(Names.ORIENTATION, shapeOrientation);\n- }\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java", 
"status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;\n import org.elasticsearch.index.fielddata.MultiGeoPointValues;\n \n@@ -94,27 +93,15 @@ protected boolean matchDoc(int doc) {\n \n private static boolean pointInPolygon(GeoPoint[] points, double lat, double lon) {\n boolean inPoly = false;\n- // @TODO handedness will be an option provided by the parser\n- boolean corrected = GeoUtils.correctPolyAmbiguity(points, false);\n- GeoPoint p = (corrected) ?\n- GeoUtils.convertToGreatCircle(lat, lon) :\n- new GeoPoint(lat, lon);\n-\n- GeoPoint pp0 = (corrected) ? GeoUtils.convertToGreatCircle(points[0]) : points[0] ;\n- GeoPoint pp1;\n- // simple even-odd PIP computation\n- // 1. Determine if point is contained in the longitudinal range\n- // 2. Determine whether point crosses the edge by computing the latitudinal delta\n- // between the end-point of a parallel vector (originating at the point) and the\n- // y-component of the edge sink\n+\n for (int i = 1; i < points.length; i++) {\n- pp1 = points[i];\n- if (pp1.x < p.x && pp0.x >= p.x || pp0.x < p.x && pp1.x >= p.x) {\n- if (pp1.y + (p.x - pp1.x) / (pp0.x - pp1.x) * (pp0.y - pp1.y) < p.y) {\n+ if (points[i].lon() < lon && points[i-1].lon() >= lon\n+ || points[i-1].lon() < lon && points[i].lon() >= lon) {\n+ if (points[i].lat() + (lon - points[i].lon()) /\n+ (points[i-1].lon() - points[i].lon()) * (points[i-1].lat() - points[i].lat()) < lat) {\n inPoly = !inPoly;\n }\n }\n- pp0 = pp1;\n }\n return inPoly;\n }", "filename": "src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java", "status": "modified" }, { "diff": "@@ -26,13 +26,7 @@\n import com.spatial4j.core.shape.ShapeCollection;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n import com.spatial4j.core.shape.jts.JtsPoint;\n-import com.vividsolutions.jts.geom.Geometry;\n-import com.vividsolutions.jts.geom.GeometryFactory;\n-import com.vividsolutions.jts.geom.LineString;\n-import com.vividsolutions.jts.geom.LinearRing;\n-import com.vividsolutions.jts.geom.MultiLineString;\n-import com.vividsolutions.jts.geom.Point;\n-import com.vividsolutions.jts.geom.Polygon;\n+import com.vividsolutions.jts.geom.*;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n@@ -63,7 +57,7 @@ public void testParse_simplePoint() throws IOException {\n .startArray(\"coordinates\").value(100.0).value(0.0).endArray()\n .endObject().string();\n \n- Point expected = GEOMETRY_FACTORY.createPoint(new GeoPoint(0.0, 100.0));\n+ Point expected = GEOMETRY_FACTORY.createPoint(new Coordinate(100.0, 0.0));\n assertGeometryEquals(new JtsPoint(expected, SPATIAL_CONTEXT), pointGeoJson);\n }\n \n@@ -75,12 +69,12 @@ public void testParse_lineString() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<GeoPoint> lineCoordinates = new ArrayList<>();\n- lineCoordinates.add(new GeoPoint(0, 100));\n- lineCoordinates.add(new GeoPoint(1, 101));\n+ List<Coordinate> lineCoordinates = new ArrayList<>();\n+ lineCoordinates.add(new Coordinate(100, 0));\n+ lineCoordinates.add(new Coordinate(101, 1));\n \n LineString expected = GEOMETRY_FACTORY.createLineString(\n- lineCoordinates.toArray(new GeoPoint[lineCoordinates.size()]));\n+ 
lineCoordinates.toArray(new Coordinate[lineCoordinates.size()]));\n assertGeometryEquals(jtsGeom(expected), lineGeoJson);\n }\n \n@@ -99,13 +93,13 @@ public void testParse_multiLineString() throws IOException {\n .endObject().string();\n \n MultiLineString expected = GEOMETRY_FACTORY.createMultiLineString(new LineString[]{\n- GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n- new GeoPoint(0, 100),\n- new GeoPoint(1, 101),\n+ GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n+ new Coordinate(100, 0),\n+ new Coordinate(101, 1),\n }),\n- GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n- new GeoPoint(2, 102),\n- new GeoPoint(3, 103),\n+ GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n+ new Coordinate(102, 2),\n+ new Coordinate(103, 3),\n }),\n });\n assertGeometryEquals(jtsGeom(expected), multilinesGeoJson);\n@@ -179,14 +173,14 @@ public void testParse_polygonNoHoles() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<GeoPoint> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(0, 100));\n- shellCoordinates.add(new GeoPoint(0, 101));\n- shellCoordinates.add(new GeoPoint(1, 101));\n- shellCoordinates.add(new GeoPoint(1, 100));\n- shellCoordinates.add(new GeoPoint(0, 100));\n+ List<Coordinate> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new Coordinate(100, 0));\n \n- LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, null);\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n@@ -573,25 +567,25 @@ public void testParse_polygonWithHole() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<GeoPoint> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(0, 100));\n- shellCoordinates.add(new GeoPoint(0, 101));\n- shellCoordinates.add(new GeoPoint(1, 101));\n- shellCoordinates.add(new GeoPoint(1, 100));\n- shellCoordinates.add(new GeoPoint(0, 100));\n+ List<Coordinate> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new Coordinate(100, 0));\n \n- List<GeoPoint> holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ List<Coordinate> holeCoordinates = new ArrayList<>();\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n \n LinearRing shell = GEOMETRY_FACTORY.createLinearRing(\n- shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n LinearRing[] holes = new LinearRing[1];\n holes[0] = 
GEOMETRY_FACTORY.createLinearRing(\n- holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n+ holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, holes);\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n@@ -663,34 +657,34 @@ public void testParse_multiPolygon() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<GeoPoint> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(0, 100));\n- shellCoordinates.add(new GeoPoint(0, 101));\n- shellCoordinates.add(new GeoPoint(1, 101));\n- shellCoordinates.add(new GeoPoint(1, 100));\n- shellCoordinates.add(new GeoPoint(0, 100));\n+ List<Coordinate> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new Coordinate(100, 0));\n \n- List<GeoPoint> holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ List<Coordinate> holeCoordinates = new ArrayList<>();\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n \n- LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n LinearRing[] holes = new LinearRing[1];\n- holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n+ holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n Polygon withHoles = GEOMETRY_FACTORY.createPolygon(shell, holes);\n \n shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(3, 102));\n- shellCoordinates.add(new GeoPoint(3, 103));\n- shellCoordinates.add(new GeoPoint(2, 103));\n- shellCoordinates.add(new GeoPoint(2, 102));\n- shellCoordinates.add(new GeoPoint(3, 102));\n+ shellCoordinates.add(new Coordinate(102, 3));\n+ shellCoordinates.add(new Coordinate(103, 3));\n+ shellCoordinates.add(new Coordinate(103, 2));\n+ shellCoordinates.add(new Coordinate(102, 2));\n+ shellCoordinates.add(new Coordinate(102, 3));\n \n \n- shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n Polygon withoutHoles = GEOMETRY_FACTORY.createPolygon(shell, null);\n \n Shape expected = shapeCollection(withoutHoles, withHoles);\n@@ -722,22 +716,22 @@ public void testParse_multiPolygon() throws IOException {\n .endObject().string();\n \n shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(1, 100));\n- shellCoordinates.add(new GeoPoint(1, 101));\n- shellCoordinates.add(new GeoPoint(0, 101));\n- shellCoordinates.add(new GeoPoint(0, 100));\n- shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ 
shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(100, 1));\n \n holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new GeoPoint(0.8, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.2));\n+ holeCoordinates.add(new Coordinate(100.2, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.8));\n \n- shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n holes = new LinearRing[1];\n- holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n+ holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n withHoles = GEOMETRY_FACTORY.createPolygon(shell, holes);\n \n assertGeometryEquals(jtsGeom(withHoles), multiPolygonGeoJson);\n@@ -763,12 +757,12 @@ public void testParse_geometryCollection() throws IOException {\n .string();\n \n Shape[] expected = new Shape[2];\n- LineString expectedLineString = GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n- new GeoPoint(0, 100),\n- new GeoPoint(1, 101),\n+ LineString expectedLineString = GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n+ new Coordinate(100, 0),\n+ new Coordinate(101, 1),\n });\n expected[0] = jtsGeom(expectedLineString);\n- Point expectedPoint = GEOMETRY_FACTORY.createPoint(new GeoPoint(2.0, 102.0));\n+ Point expectedPoint = GEOMETRY_FACTORY.createPoint(new Coordinate(102.0, 2.0));\n expected[1] = new JtsPoint(expectedPoint, SPATIAL_CONTEXT);\n \n //equals returns true only if geometries are in the same order\n@@ -791,7 +785,7 @@ public void testThatParserExtractsCorrectTypeAndCoordinatesFromArbitraryJson() t\n .startObject(\"lala\").field(\"type\", \"NotAPoint\").endObject()\n .endObject().string();\n \n- Point expected = GEOMETRY_FACTORY.createPoint(new GeoPoint(0.0, 100.0));\n+ Point expected = GEOMETRY_FACTORY.createPoint(new Coordinate(100.0, 0.0));\n assertGeometryEquals(new JtsPoint(expected, SPATIAL_CONTEXT), pointGeoJson);\n }\n ", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import com.spatial4j.core.shape.Rectangle;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.impl.PointImpl;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.LineString;\n import com.vividsolutions.jts.geom.Polygon;\n import org.elasticsearch.common.geo.builders.PolygonBuilder;\n@@ -63,39 +64,38 @@ public void testNewPolygon() {\n .point(-45, 30).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n- assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n- assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n- assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n+ assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n+ 
assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n+ assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n+ assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n \n @Test\n public void testNewPolygon_coordinate() {\n Polygon polygon = ShapeBuilder.newPolygon()\n- .point(new GeoPoint(30, -45))\n- .point(new GeoPoint(30, 45))\n- .point(new GeoPoint(-30, 45))\n- .point(new GeoPoint(-30, -45))\n- .point(new GeoPoint(30, -45)).toPolygon();\n+ .point(new Coordinate(-45, 30))\n+ .point(new Coordinate(45, 30))\n+ .point(new Coordinate(45, -30))\n+ .point(new Coordinate(-45, -30))\n+ .point(new Coordinate(-45, 30)).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n- assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n- assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n- assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n+ assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n+ assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n+ assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n+ assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n \n @Test\n public void testNewPolygon_coordinates() {\n Polygon polygon = ShapeBuilder.newPolygon()\n- .points(new GeoPoint(30, -45), new GeoPoint(30, 45), new GeoPoint(-30, 45), new GeoPoint(-30, -45),\n- new GeoPoint(30, -45)).toPolygon();\n+ .points(new Coordinate(-45, 30), new Coordinate(45, 30), new Coordinate(45, -30), new Coordinate(-45, -30), new Coordinate(-45, 30)).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n- assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n- assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n- assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n+ assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n+ assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n+ assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n+ assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java", "status": "modified" } ] }
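The dateline handling restored in the ShapeBuilder diff above reduces to two steps: detect that a ring's longitudinal range spans more than a hemisphere (but not the full 360 degrees), and shift its western-hemisphere points into the 180..360 range so the ring becomes contiguous again. The sketch below is a minimal, self-contained illustration of that idea under simplified assumptions. The class name, the `Pt` type, and the bare range check are hypothetical stand-ins rather than the actual Elasticsearch `ShapeBuilder` or JTS `Coordinate` code, and the real logic also consults the ring's winding order and the shell's translation state before deciding to translate.

```java
import java.util.Arrays;

// Minimal, self-contained sketch of the dateline translation idea shown in the
// diff above. All names here are illustrative stand-ins, not Elasticsearch code.
final class DatelineTranslationSketch {

    static final double DATELINE = 180.0;

    // Bare x/y pair standing in for a JTS Coordinate (x = lon, y = lat).
    static final class Pt {
        double x, y;
        Pt(double x, double y) { this.x = x; this.y = y; }
        @Override public String toString() { return "(" + x + ", " + y + ")"; }
    }

    // Longitudinal extent of a ring; a value above 180 (but below 360) suggests
    // the ring was meant to cross the dateline rather than wrap the long way round.
    static double lonRange(Pt[] ring) {
        double min = ring[0].x, max = ring[0].x;
        for (Pt p : ring) {
            min = Math.min(min, p.x);
            max = Math.max(max, p.x);
        }
        return max - min;
    }

    // Shift western-hemisphere points into the 180..360 range so the ring is contiguous.
    static void translate(Pt[] ring) {
        for (Pt p : ring) {
            if (p.x < 0) {
                p.x += 2 * DATELINE;
            }
        }
    }

    public static void main(String[] args) {
        // A small box straddling lon = 180; read literally it spans ~357 degrees.
        Pt[] ring = {
            new Pt(178, 42), new Pt(178, 39), new Pt(-179, 39), new Pt(-179, 42), new Pt(178, 42)
        };
        double rng = lonRange(ring);
        if (rng > DATELINE && rng != 2 * DATELINE) {
            translate(ring); // -179 becomes 181; the ring now spans only ~3 degrees
        }
        System.out.println(Arrays.toString(ring));
    }
}
```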
{ "body": "Using Elasticsearch 1.1.1. I'm seeing geo_polygon behave oddly when the input polygon cross the date line. From a quick look at GeoPolygonFilter.GeoPolygonDocSet.pointInPolygon it doesn't seem that this is explicitly handled. \n\nThe reproduce this create an index/mapping as:\n\nPOST /geo\n\n``` json\n{ \"mappings\": { \"docs\": { \"properties\": { \"p\": { \"type\": \"geo_point\" } } } } }\n```\n\nUpload a document:\n\nPUT /geo/docs/1\n\n``` json\n{ \"p\": { \"lat\": 40, \"lon\": 179 } }\n```\n\nSearch with a polygon that's a box around the uploaded point and that crosses the date line:\n\nPOST /geo/docs/_search\n\n``` json\n{\n \"filter\": { \"geo_polygon\": { \"p\": { \"points\": [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -179 },\n { \"lat\": 42, \"lon\": 178 }\n ] } } }\n}\n```\n\nES returns 0 results. If I use a polygon that stays to the west of the date line I do get results:\n\n``` json\n{\n \"filter\": { \"geo_polygon\": { \"p\": { \"points\": [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 179.5 },\n { \"lat\": 42, \"lon\": 179.5 },\n { \"lat\": 42, \"lon\": 178 }\n ] } } }\n}\n```\n\nAlso, if I use a bounding box query with the same coordinates as the initial polygon, it does work:\n\n``` json\n{\n \"filter\": { \"geo_bounding_box\": { \"p\": \n { \"top_left\": { \"lat\": 42, \"lon\": 178 },\n \"bottom_right\": { \"lat\": 39, \"lon\": -179 }\n }\n } }\n}\n```\n\nIt seems that this code needs to either split the check into east and west checks or normalize the input values. Am I missing something?\n", "comments": [ { "body": "This is actually working as expected. Your first query will resolve as shown in the following GeoJSON gist which will not contain your document:\nhttps://gist.github.com/anonymous/82b50b74a7b6d170bfc6\n\nTo create the desired results you specified you would need to split the polygon in to two polygons, one to the left of the date line and the other to the right. This can be done with the following query:\n\n```\ncurl -XPOST 'localhost:9200/geo/_search?pretty' -d '{\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"filter\" : {\n \"or\" : [\n {\n \"geo_polygon\" : {\n \"p\" : {\n \"points\" : [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 180 },\n { \"lat\": 42, \"lon\": 180 },\n { \"lat\": 42, \"lon\": 178 }\n ]\n }\n }\n },\n {\n \"geo_polygon\" : {\n \"p\" : {\n \"points\" : [\n { \"lat\": 42, \"lon\": -180 },\n { \"lat\": 39, \"lon\": -180 },\n { \"lat\": 39, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -180 }\n ]\n }\n }\n }\n ]\n }\n }\n }\n}'\n```\n\nThe bounding box is a little different since by specifying which coordinate is top_left and which is top_right you are fixing the box to overlap the date line.\n", "created_at": "2014-05-27T15:58:46Z" }, { "body": "For me, I'd expect the line segment between two points to lie in the same direction as the great circle arc. Splitting an arbitrary query into sub-polygons isn't entirely straightforward, because it could intersect longitude 180 multiple times.\n\nI have a rough draft of a commit that fixes this issue by shifting the polygon in GeoPolygonFilter (and all points passed in to pointInPolygon) so that it lies completely on one side of longitude 180. It is low-overhead and has basically no effect on the normal case. 
The only constraint is that the polygon can't span more than 360 degrees in longitude.\n\nDoes this sound reasonable, and is it worth submitting a PR?\n", "created_at": "2014-09-17T21:44:43Z" }, { "body": "@colings86 could this be solved with a `left/right` parameter or something similar?\n", "created_at": "2014-09-25T18:21:19Z" }, { "body": "@nknize is this fixed by https://github.com/elasticsearch/elasticsearch/pull/8521 ?\n", "created_at": "2014-11-28T09:49:36Z" }, { "body": "It does not. This is the infamous \"ambiguous polygon\" problem that occurs when treating a spherical coordinate system as a cartesian plane. I opened a discussion and feature branch to address this in #8672 \n\ntldr: GeoJSON doesn't specify order, but OGC does.\n\nFeature fix: Default behavior = For GeoJSON poly's specified in OGC order (shell: ccw, holes: cw) ES Ring logic will correctly transform and split polys across the dateline (e.g., see https://gist.github.com/nknize/d122b243dc63dcba8474). For GeoJSON poly's provided in the opposite order original behavior will occur (e.g., @colings86 example https://gist.github.com/anonymous/82b50b74a7b6d170bfc6). \n\nAdditionally, I like @clintongormley suggestion of adding an optional left/right parameter. Its an easy fix letting user's clearly specify intent.\n", "created_at": "2014-12-01T14:32:21Z" }, { "body": "This is now addressed in PR #8762\n", "created_at": "2014-12-03T14:00:29Z" }, { "body": "Optional left/right parameter added in PR #8978 \n", "created_at": "2014-12-16T18:41:49Z" }, { "body": "Merged in edd33c0\n", "created_at": "2014-12-29T22:11:30Z" }, { "body": "I tried @pablocastro's example on trunk, and unfortunately the issue is still there. There might've been some confusion -- the original example refers to the geo_polygon filter, whereas @nknize's fix is for the polygon geo_shape type.\n\nShould I create a new ticket for the geo_polygon filter, or should we re-open this one?\n", "created_at": "2015-01-03T22:17:20Z" }, { "body": "Good catch @jtibshirani! We'll go ahead and reopen this ticket since its a separate issue.\n", "created_at": "2015-01-04T04:41:55Z" }, { "body": "Reopening due to #9462 \n", "created_at": "2015-01-28T15:06:48Z" }, { "body": "The search hits of really huge polygon (elasticsearch 1.4.3)\n\n``` javascript\n{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [\n -70.4873046875,\n 79.9818262344106\n ],\n [\n -70.4873046875,\n -28.07230647927298\n ],\n [\n -103.3583984375,\n -28.07230647927298\n ],\n [\n -103.3583984375,\n 79.9818262344106\n ],\n [\n -70.4873046875,\n 79.9818262344106\n ]\n ]\n ],\n \"orientation\": \"ccw\"\n }\n }\n }\n }\n }\n }\n}\n```\n\ndoesn't include any points inside that polygon even with orientation option.\nIs it related with this issue?\n", "created_at": "2015-02-12T17:36:56Z" }, { "body": "Its related. For now, if you have an ambiguous poly that crosses the pole you'll need to manually split it into 2 separate explicit polys and put inside a `MultiPolygon` Depending on the complexity of the poly computing the pole intersections can be non-trivial. The in-work patch will do this for you.\n\nA separate issue is related to the distance_error_pct parameter. If not specified, larger filters will have reduced accuracy. 
Though this seems unrelated to your GeoJSON\n", "created_at": "2015-02-12T22:49:53Z" }, { "body": "relates to #26286", "created_at": "2018-03-26T16:59:07Z" }, { "body": "closing in favor of #26286 since its an old issue", "created_at": "2018-03-26T17:00:11Z" } ], "number": 5968, "title": "geo_polygon not handling polygons that cross the date line properly" }
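The direction sketched in the comments above (shift every longitude onto one side of the dateline, then run an ordinary even-odd crossing test) can be illustrated with a small standalone example. This is a hypothetical sketch of that approach, not the `GeoPolygonFilter` implementation; the class and method names are made up, and the shift-by-360 normalization assumes the polygon spans less than 360 degrees of longitude.

```java
// Hypothetical sketch of the "shift across the dateline, then even-odd test"
// approach discussed above. Not the Elasticsearch GeoPolygonFilter code.
final class DatelinePolygonSketch {

    // Normalize a longitude into 0..360 so a dateline-crossing ring becomes contiguous.
    static double shiftLon(double lon) {
        return lon < 0 ? lon + 360.0 : lon;
    }

    // Standard even-odd (ray casting) point-in-polygon test on shifted longitudes.
    static boolean pointInPolygon(double[] lons, double[] lats, double lon, double lat) {
        boolean inside = false;
        double px = shiftLon(lon);
        for (int i = 0, j = lons.length - 1; i < lons.length; j = i++) {
            double xi = shiftLon(lons[i]);
            double xj = shiftLon(lons[j]);
            // Does the edge (i, j) cross the vertical line at px, above the query point?
            if ((xi > px) != (xj > px)
                    && lat < (lats[j] - lats[i]) * (px - xi) / (xj - xi) + lats[i]) {
                inside = !inside;
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        // The box from the issue above: lon 178 .. -179, lat 39 .. 42 (ring need not be closed here).
        double[] lons = {178, 178, -179, -179};
        double[] lats = {42, 39, 39, 42};
        // The indexed point (lat 40, lon 179) is reported as inside once longitudes are shifted.
        System.out.println(pointInPolygon(lons, lats, 179, 40));
    }
}
```

With the box and point from the issue above it prints `true`, while the same even-odd test on the raw, unshifted longitudes would report the point as outside.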
{ "body": "This reverts PR #9339 which introduces a dependency on JTS. This reopens issues #5968 and #9304 which will be closed with #9462\n", "number": 9463, "review_comments": [], "title": "Revert \"[GEO] Update GeoPolygonFilter to handle ambiguous polygons\"" }
{ "commits": [ { "message": "Revert \"[GEO] Update GeoPolygonFilter to handle ambiguous polygons\"\n\nThis reverts commit 06667c6aa898895acd624b8a71a6e00ff7ae32b8 which introduces an undesireable dependency on JTS." } ], "files": [ { "diff": "@@ -20,12 +20,13 @@\n package org.elasticsearch.common.geo;\n \n \n-import com.vividsolutions.jts.geom.Coordinate;\n-\n /**\n *\n */\n-public final class GeoPoint extends Coordinate {\n+public final class GeoPoint {\n+\n+ private double lat;\n+ private double lon;\n \n public GeoPoint() {\n }\n@@ -40,36 +41,32 @@ public GeoPoint(String value) {\n this.resetFromString(value);\n }\n \n- public GeoPoint(GeoPoint other) {\n- super(other);\n- }\n-\n public GeoPoint(double lat, double lon) {\n- this.y = lat;\n- this.x = lon;\n+ this.lat = lat;\n+ this.lon = lon;\n }\n \n public GeoPoint reset(double lat, double lon) {\n- this.y = lat;\n- this.x = lon;\n+ this.lat = lat;\n+ this.lon = lon;\n return this;\n }\n \n public GeoPoint resetLat(double lat) {\n- this.y = lat;\n+ this.lat = lat;\n return this;\n }\n \n public GeoPoint resetLon(double lon) {\n- this.x = lon;\n+ this.lon = lon;\n return this;\n }\n \n public GeoPoint resetFromString(String value) {\n int comma = value.indexOf(',');\n if (comma != -1) {\n- this.y = Double.parseDouble(value.substring(0, comma).trim());\n- this.x = Double.parseDouble(value.substring(comma + 1).trim());\n+ lat = Double.parseDouble(value.substring(0, comma).trim());\n+ lon = Double.parseDouble(value.substring(comma + 1).trim());\n } else {\n resetFromGeoHash(value);\n }\n@@ -82,40 +79,38 @@ public GeoPoint resetFromGeoHash(String hash) {\n }\n \n public final double lat() {\n- return this.y;\n+ return this.lat;\n }\n \n public final double getLat() {\n- return this.y;\n+ return this.lat;\n }\n \n public final double lon() {\n- return this.x;\n+ return this.lon;\n }\n \n public final double getLon() {\n- return this.x;\n+ return this.lon;\n }\n \n public final String geohash() {\n- return GeoHashUtils.encode(y, x);\n+ return GeoHashUtils.encode(lat, lon);\n }\n \n public final String getGeohash() {\n- return GeoHashUtils.encode(y, x);\n+ return GeoHashUtils.encode(lat, lon);\n }\n \n @Override\n public boolean equals(Object o) {\n if (this == o) return true;\n- if (o == null) return false;\n- if (o instanceof Coordinate) {\n- Coordinate c = (Coordinate)o;\n- return Double.compare(c.x, this.x) == 0\n- && Double.compare(c.y, this.y) == 0\n- && Double.compare(c.z, this.z) == 0;\n- }\n- if (getClass() != o.getClass()) return false;\n+ if (o == null || getClass() != o.getClass()) return false;\n+\n+ GeoPoint geoPoint = (GeoPoint) o;\n+\n+ if (Double.compare(geoPoint.lat, lat) != 0) return false;\n+ if (Double.compare(geoPoint.lon, lon) != 0) return false;\n \n return true;\n }\n@@ -124,15 +119,15 @@ public boolean equals(Object o) {\n public int hashCode() {\n int result;\n long temp;\n- temp = y != +0.0d ? Double.doubleToLongBits(y) : 0L;\n+ temp = lat != +0.0d ? Double.doubleToLongBits(lat) : 0L;\n result = (int) (temp ^ (temp >>> 32));\n- temp = x != +0.0d ? Double.doubleToLongBits(x) : 0L;\n+ temp = lon != +0.0d ? 
Double.doubleToLongBits(lon) : 0L;\n result = 31 * result + (int) (temp ^ (temp >>> 32));\n return result;\n }\n \n public String toString() {\n- return \"[\" + y + \", \" + x + \"]\";\n+ return \"[\" + lat + \", \" + lon + \"]\";\n }\n \n public static GeoPoint parseFromLatLon(String latLon) {", "filename": "src/main/java/org/elasticsearch/common/geo/GeoPoint.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.common.geo;\n \n-import org.apache.commons.lang3.tuple.Pair;\n import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.util.SloppyMath;\n@@ -38,9 +37,7 @@ public class GeoUtils {\n public static final String LATITUDE = GeoPointFieldMapper.Names.LAT;\n public static final String LONGITUDE = GeoPointFieldMapper.Names.LON;\n public static final String GEOHASH = GeoPointFieldMapper.Names.GEOHASH;\n-\n- public static final double DATELINE = 180.0D;\n-\n+ \n /** Earth ellipsoid major axis defined by WGS 84 in meters */\n public static final double EARTH_SEMI_MAJOR_AXIS = 6378137.0; // meters (WGS 84)\n \n@@ -425,113 +422,6 @@ public static GeoPoint parseGeoPoint(XContentParser parser, GeoPoint point) thro\n }\n }\n \n- public static boolean correctPolyAmbiguity(GeoPoint[] points, boolean handedness) {\n- return correctPolyAmbiguity(points, handedness, computePolyOrientation(points), 0, points.length, false);\n- }\n-\n- public static boolean correctPolyAmbiguity(GeoPoint[] points, boolean handedness, boolean orientation, int component, int length,\n- boolean shellCorrected) {\n- // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness)\n- // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n- // thus if orientation is computed as cw, the logic will translate points across dateline\n- // and convert to a right handed system\n-\n- // compute the bounding box and calculate range\n- Pair<Pair, Pair> range = GeoUtils.computeBBox(points, length);\n- final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n- // translate the points if the following is true\n- // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres\n- // (translation would result in a collapsed poly)\n- // 2. the shell of the candidate hole has been translated (to preserve the coordinate system)\n- boolean incorrectOrientation = component == 0 && handedness != orientation;\n- boolean translated = ((incorrectOrientation && (rng > DATELINE && rng != 360.0)) || (shellCorrected && component != 0));\n- if (translated) {\n- for (GeoPoint c : points) {\n- if (c.x < 0.0) {\n- c.x += 360.0;\n- }\n- }\n- }\n- return translated;\n- }\n-\n- public static boolean computePolyOrientation(GeoPoint[] points) {\n- return computePolyOrientation(points, points.length);\n- }\n-\n- public static boolean computePolyOrientation(GeoPoint[] points, int length) {\n- // calculate the direction of the points:\n- // find the point at the top of the set and check its\n- // neighbors orientation. 
So direction is equivalent\n- // to clockwise/counterclockwise\n- final int top = computePolyOrigin(points, length);\n- final int prev = ((top + length - 1) % length);\n- final int next = ((top + 1) % length);\n- return (points[prev].x > points[next].x);\n- }\n-\n- private static final int computePolyOrigin(GeoPoint[] points, int length) {\n- int top = 0;\n- // we start at 1 here since top points to 0\n- for (int i = 1; i < length; i++) {\n- if (points[i].y < points[top].y) {\n- top = i;\n- } else if (points[i].y == points[top].y) {\n- if (points[i].x < points[top].x) {\n- top = i;\n- }\n- }\n- }\n- return top;\n- }\n-\n- public static final Pair computeBBox(GeoPoint[] points) {\n- return computeBBox(points, 0);\n- }\n-\n- public static final Pair computeBBox(GeoPoint[] points, int length) {\n- double minX = points[0].x;\n- double maxX = points[0].x;\n- double minY = points[0].y;\n- double maxY = points[0].y;\n- // compute the bounding coordinates (@todo: cleanup brute force)\n- for (int i = 1; i < length; ++i) {\n- if (points[i].x < minX) {\n- minX = points[i].x;\n- }\n- if (points[i].x > maxX) {\n- maxX = points[i].x;\n- }\n- if (points[i].y < minY) {\n- minY = points[i].y;\n- }\n- if (points[i].y > maxY) {\n- maxY = points[i].y;\n- }\n- }\n- // return a pair of ranges on the X and Y axis, respectively\n- return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n- }\n-\n- public static GeoPoint convertToGreatCircle(GeoPoint point) {\n- return convertToGreatCircle(point.y, point.x);\n- }\n-\n- public static GeoPoint convertToGreatCircle(double lat, double lon) {\n- GeoPoint p = new GeoPoint(lat, lon);\n- // convert the point to standard lat/lon bounds\n- normalizePoint(p);\n-\n- if (p.x < 0.0D) {\n- p.x += 360.0D;\n- }\n-\n- if (p.y < 0.0D) {\n- p.y +=180.0D;\n- }\n- return p;\n- }\n-\n private GeoUtils() {\n }\n }", "filename": "src/main/java/org/elasticsearch/common/geo/GeoUtils.java", "status": "modified" }, { "diff": "@@ -23,22 +23,22 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.geo.GeoUtils;\n+import com.spatial4j.core.shape.ShapeCollection;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n import com.vividsolutions.jts.geom.LineString;\n \n public abstract class BaseLineStringBuilder<E extends BaseLineStringBuilder<E>> extends PointCollection<E> {\n \n protected BaseLineStringBuilder() {\n- this(new ArrayList<GeoPoint>());\n+ this(new ArrayList<Coordinate>());\n }\n \n- protected BaseLineStringBuilder(ArrayList<GeoPoint> points) {\n+ protected BaseLineStringBuilder(ArrayList<Coordinate> points) {\n super(points);\n }\n \n@@ -49,7 +49,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n @Override\n public Shape build() {\n- GeoPoint[] coordinates = points.toArray(new GeoPoint[points.size()]);\n+ Coordinate[] coordinates = points.toArray(new Coordinate[points.size()]);\n Geometry geometry;\n if(wrapdateline) {\n ArrayList<LineString> strings = decompose(FACTORY, coordinates, new ArrayList<LineString>());\n@@ -67,9 +67,9 @@ public Shape build() {\n return jtsGeometry(geometry);\n }\n \n- protected static ArrayList<LineString> decompose(GeometryFactory factory, GeoPoint[] coordinates, ArrayList<LineString> strings) {\n- for(GeoPoint[] part : 
decompose(+DATELINE, coordinates)) {\n- for(GeoPoint[] line : decompose(-DATELINE, part)) {\n+ protected static ArrayList<LineString> decompose(GeometryFactory factory, Coordinate[] coordinates, ArrayList<LineString> strings) {\n+ for(Coordinate[] part : decompose(+DATELINE, coordinates)) {\n+ for(Coordinate[] line : decompose(-DATELINE, part)) {\n strings.add(factory.createLineString(line));\n }\n }\n@@ -83,16 +83,16 @@ protected static ArrayList<LineString> decompose(GeometryFactory factory, GeoPoi\n * @param coordinates coordinates forming the linestring\n * @return array of linestrings given as coordinate arrays \n */\n- protected static GeoPoint[][] decompose(double dateline, GeoPoint[] coordinates) {\n+ protected static Coordinate[][] decompose(double dateline, Coordinate[] coordinates) {\n int offset = 0;\n- ArrayList<GeoPoint[]> parts = new ArrayList<>();\n+ ArrayList<Coordinate[]> parts = new ArrayList<>();\n \n double shift = coordinates[0].x > DATELINE ? DATELINE : (coordinates[0].x < -DATELINE ? -DATELINE : 0);\n \n for (int i = 1; i < coordinates.length; i++) {\n double t = intersection(coordinates[i-1], coordinates[i], dateline);\n if(!Double.isNaN(t)) {\n- GeoPoint[] part;\n+ Coordinate[] part;\n if(t<1) {\n part = Arrays.copyOfRange(coordinates, offset, i+1);\n part[part.length-1] = Edge.position(coordinates[i-1], coordinates[i], t);\n@@ -111,16 +111,16 @@ protected static GeoPoint[][] decompose(double dateline, GeoPoint[] coordinates)\n if(offset == 0) {\n parts.add(shift(shift, coordinates));\n } else if(offset < coordinates.length-1) {\n- GeoPoint[] part = Arrays.copyOfRange(coordinates, offset, coordinates.length);\n+ Coordinate[] part = Arrays.copyOfRange(coordinates, offset, coordinates.length);\n parts.add(shift(shift, part));\n }\n- return parts.toArray(new GeoPoint[parts.size()][]);\n+ return parts.toArray(new Coordinate[parts.size()][]);\n }\n \n- private static GeoPoint[] shift(double shift, GeoPoint...coordinates) {\n+ private static Coordinate[] shift(double shift, Coordinate...coordinates) {\n if(shift != 0) {\n for (int j = 0; j < coordinates.length; j++) {\n- coordinates[j] = new GeoPoint(coordinates[j].y, coordinates[j].x - 2 * shift);\n+ coordinates[j] = new Coordinate(coordinates[j].x - 2 * shift, coordinates[j].y);\n }\n }\n return coordinates;", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BaseLineStringBuilder.java", "status": "modified" }, { "diff": "@@ -20,14 +20,8 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.Geometry;\n-import com.vividsolutions.jts.geom.GeometryFactory;\n-import com.vividsolutions.jts.geom.LinearRing;\n-import com.vividsolutions.jts.geom.MultiPolygon;\n-import com.vividsolutions.jts.geom.Polygon;\n+import com.vividsolutions.jts.geom.*;\n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -73,7 +67,7 @@ public E point(double longitude, double latitude) {\n * @param coordinate coordinate of the new point\n * @return this\n */\n- public E point(GeoPoint coordinate) {\n+ public E point(Coordinate coordinate) {\n shell.point(coordinate);\n return thisRef();\n }\n@@ -83,7 +77,7 @@ public E point(GeoPoint coordinate) {\n * @param coordinates coordinates of the new points to add\n * @return this\n */\n- public E 
points(GeoPoint...coordinates) {\n+ public E points(Coordinate...coordinates) {\n shell.points(coordinates);\n return thisRef();\n }\n@@ -127,7 +121,7 @@ public ShapeBuilder close() {\n * \n * @return coordinates of the polygon\n */\n- public GeoPoint[][][] coordinates() {\n+ public Coordinate[][][] coordinates() {\n int numEdges = shell.points.size()-1; // Last point is repeated \n for (int i = 0; i < holes.size(); i++) {\n numEdges += holes.get(i).points.size()-1;\n@@ -176,7 +170,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n public Geometry buildGeometry(GeometryFactory factory, boolean fixDateline) {\n if(fixDateline) {\n- GeoPoint[][][] polygons = coordinates();\n+ Coordinate[][][] polygons = coordinates();\n return polygons.length == 1\n ? polygon(factory, polygons[0])\n : multipolygon(factory, polygons);\n@@ -199,16 +193,16 @@ protected Polygon toPolygon(GeometryFactory factory) {\n return factory.createPolygon(shell, holes);\n }\n \n- protected static LinearRing linearRing(GeometryFactory factory, ArrayList<GeoPoint> coordinates) {\n- return factory.createLinearRing(coordinates.toArray(new GeoPoint[coordinates.size()]));\n+ protected static LinearRing linearRing(GeometryFactory factory, ArrayList<Coordinate> coordinates) {\n+ return factory.createLinearRing(coordinates.toArray(new Coordinate[coordinates.size()]));\n }\n \n @Override\n public GeoShapeType type() {\n return TYPE;\n }\n \n- protected static Polygon polygon(GeometryFactory factory, GeoPoint[][] polygon) {\n+ protected static Polygon polygon(GeometryFactory factory, Coordinate[][] polygon) {\n LinearRing shell = factory.createLinearRing(polygon[0]);\n LinearRing[] holes;\n \n@@ -233,7 +227,7 @@ protected static Polygon polygon(GeometryFactory factory, GeoPoint[][] polygon)\n * @param polygons definition of polygons\n * @return a new Multipolygon\n */\n- protected static MultiPolygon multipolygon(GeometryFactory factory, GeoPoint[][][] polygons) {\n+ protected static MultiPolygon multipolygon(GeometryFactory factory, Coordinate[][][] polygons) {\n Polygon[] polygonSet = new Polygon[polygons.length];\n for (int i = 0; i < polygonSet.length; i++) {\n polygonSet[i] = polygon(factory, polygons[i]);\n@@ -289,18 +283,18 @@ private static int component(final Edge edge, final int id, final ArrayList<Edge\n * @param coordinates Array of coordinates to write the result to\n * @return the coordinates parameter\n */\n- private static GeoPoint[] coordinates(Edge component, GeoPoint[] coordinates) {\n+ private static Coordinate[] coordinates(Edge component, Coordinate[] coordinates) {\n for (int i = 0; i < coordinates.length; i++) {\n coordinates[i] = (component = component.next).coordinate;\n }\n return coordinates;\n }\n \n- private static GeoPoint[][][] buildCoordinates(ArrayList<ArrayList<GeoPoint[]>> components) {\n- GeoPoint[][][] result = new GeoPoint[components.size()][][];\n+ private static Coordinate[][][] buildCoordinates(ArrayList<ArrayList<Coordinate[]>> components) {\n+ Coordinate[][][] result = new Coordinate[components.size()][][];\n for (int i = 0; i < result.length; i++) {\n- ArrayList<GeoPoint[]> component = components.get(i);\n- result[i] = component.toArray(new GeoPoint[component.size()][]);\n+ ArrayList<Coordinate[]> component = components.get(i);\n+ result[i] = component.toArray(new Coordinate[component.size()][]);\n }\n \n if(debugEnabled()) {\n@@ -315,45 +309,44 @@ private static GeoPoint[][][] buildCoordinates(ArrayList<ArrayList<GeoPoint[]>>\n return result;\n } \n 
\n- private static final GeoPoint[][] EMPTY = new GeoPoint[0][];\n+ private static final Coordinate[][] EMPTY = new Coordinate[0][];\n \n- private static GeoPoint[][] holes(Edge[] holes, int numHoles) {\n+ private static Coordinate[][] holes(Edge[] holes, int numHoles) {\n if (numHoles == 0) {\n return EMPTY;\n }\n- final GeoPoint[][] points = new GeoPoint[numHoles][];\n+ final Coordinate[][] points = new Coordinate[numHoles][];\n \n for (int i = 0; i < numHoles; i++) {\n int length = component(holes[i], -(i+1), null); // mark as visited by inverting the sign\n- points[i] = coordinates(holes[i], new GeoPoint[length+1]);\n+ points[i] = coordinates(holes[i], new Coordinate[length+1]);\n }\n \n return points;\n } \n \n- private static Edge[] edges(Edge[] edges, int numHoles, ArrayList<ArrayList<GeoPoint[]>> components) {\n+ private static Edge[] edges(Edge[] edges, int numHoles, ArrayList<ArrayList<Coordinate[]>> components) {\n ArrayList<Edge> mainEdges = new ArrayList<>(edges.length);\n \n for (int i = 0; i < edges.length; i++) {\n if (edges[i].component >= 0) {\n int length = component(edges[i], -(components.size()+numHoles+1), mainEdges);\n- ArrayList<GeoPoint[]> component = new ArrayList<>();\n- component.add(coordinates(edges[i], new GeoPoint[length+1]));\n+ ArrayList<Coordinate[]> component = new ArrayList<>();\n+ component.add(coordinates(edges[i], new Coordinate[length+1]));\n components.add(component);\n }\n }\n \n return mainEdges.toArray(new Edge[mainEdges.size()]);\n }\n \n- private static GeoPoint[][][] compose(Edge[] edges, Edge[] holes, int numHoles) {\n- final ArrayList<ArrayList<GeoPoint[]>> components = new ArrayList<>();\n+ private static Coordinate[][][] compose(Edge[] edges, Edge[] holes, int numHoles) {\n+ final ArrayList<ArrayList<Coordinate[]>> components = new ArrayList<>();\n assign(holes, holes(holes, numHoles), numHoles, edges(edges, numHoles, components), components);\n return buildCoordinates(components);\n }\n \n- private static void assign(Edge[] holes, GeoPoint[][] points, int numHoles, Edge[] edges, ArrayList<ArrayList<GeoPoint[]>>\n- components) {\n+ private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Edge[] edges, ArrayList<ArrayList<Coordinate[]>> components) {\n // Assign Hole to related components\n // To find the new component the hole belongs to all intersections of the\n // polygon edges with a vertical line are calculated. This vertical line\n@@ -468,13 +461,14 @@ private static void connect(Edge in, Edge out) {\n }\n \n private static int createEdges(int component, Orientation orientation, BaseLineStringBuilder<?> shell,\n- BaseLineStringBuilder<?> hole, Edge[] edges, int edgeOffset) {\n+ BaseLineStringBuilder<?> hole,\n+ Edge[] edges, int offset) {\n // inner rings (holes) have an opposite direction than the outer rings\n // XOR will invert the orientation for outer ring cases (Truth Table:, T/T = F, T/F = T, F/T = T, F/F = F)\n boolean direction = (component != 0 ^ orientation == Orientation.RIGHT);\n // set the points array accordingly (shell or hole)\n- GeoPoint[] points = (hole != null) ? hole.coordinates(false) : shell.coordinates(false);\n- Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, edges, edgeOffset, points.length-1);\n+ Coordinate[] points = (hole != null) ? 
hole.coordinates(false) : shell.coordinates(false);\n+ Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, 0, edges, offset, points.length-1);\n return points.length-1;\n }\n \n@@ -483,17 +477,17 @@ public static class Ring<P extends ShapeBuilder> extends BaseLineStringBuilder<R\n private final P parent;\n \n protected Ring(P parent) {\n- this(parent, new ArrayList<GeoPoint>());\n+ this(parent, new ArrayList<Coordinate>());\n }\n \n- protected Ring(P parent, ArrayList<GeoPoint> points) {\n+ protected Ring(P parent, ArrayList<Coordinate> points) {\n super(points);\n this.parent = parent;\n }\n \n public P close() {\n- GeoPoint start = points.get(0);\n- GeoPoint end = points.get(points.size()-1);\n+ Coordinate start = points.get(0);\n+ Coordinate end = points.get(points.size()-1);\n if(start.x != end.x || start.y != end.y) {\n points.add(start);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Circle;\n-import org.elasticsearch.common.geo.GeoPoint;\n+import com.vividsolutions.jts.geom.Coordinate;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -34,15 +34,15 @@ public class CircleBuilder extends ShapeBuilder {\n \n private DistanceUnit unit;\n private double radius;\n- private GeoPoint center;\n+ private Coordinate center;\n \n /**\n * Set the center of the circle\n * \n * @param center coordinate of the circles center\n * @return this\n */\n- public CircleBuilder center(GeoPoint center) {\n+ public CircleBuilder center(Coordinate center) {\n this.center = center;\n return this;\n }\n@@ -54,7 +54,7 @@ public CircleBuilder center(GeoPoint center) {\n * @return this\n */\n public CircleBuilder center(double lon, double lat) {\n- return center(new GeoPoint(lat, lon));\n+ return center(new Coordinate(lon, lat));\n }\n \n /**", "filename": "src/main/java/org/elasticsearch/common/geo/builders/CircleBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Rectangle;\n-import org.elasticsearch.common.geo.GeoPoint;\n+import com.vividsolutions.jts.geom.Coordinate;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -29,8 +29,8 @@ public class EnvelopeBuilder extends ShapeBuilder {\n \n public static final GeoShapeType TYPE = GeoShapeType.ENVELOPE; \n \n- protected GeoPoint topLeft;\n- protected GeoPoint bottomRight;\n+ protected Coordinate topLeft;\n+ protected Coordinate bottomRight;\n \n public EnvelopeBuilder() {\n this(Orientation.RIGHT);\n@@ -40,7 +40,7 @@ public EnvelopeBuilder(Orientation orientation) {\n super(orientation);\n }\n \n- public EnvelopeBuilder topLeft(GeoPoint topLeft) {\n+ public EnvelopeBuilder topLeft(Coordinate topLeft) {\n this.topLeft = topLeft;\n return this;\n }\n@@ -49,7 +49,7 @@ public EnvelopeBuilder topLeft(double longitude, double latitude) {\n return topLeft(coordinate(longitude, latitude));\n }\n \n- public EnvelopeBuilder bottomRight(GeoPoint bottomRight) {\n+ public EnvelopeBuilder bottomRight(Coordinate bottomRight) {\n this.bottomRight = bottomRight;\n return this;\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/EnvelopeBuilder.java", "status": "modified" }, { 
"diff": "@@ -19,10 +19,11 @@\n \n package org.elasticsearch.common.geo.builders;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n+import com.spatial4j.core.shape.jts.JtsGeometry;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.LineString;\n \n@@ -47,8 +48,8 @@ public MultiLineStringBuilder linestring(BaseLineStringBuilder<?> line) {\n return this;\n }\n \n- public GeoPoint[][] coordinates() {\n- GeoPoint[][] result = new GeoPoint[lines.size()][];\n+ public Coordinate[][] coordinates() {\n+ Coordinate[][] result = new Coordinate[lines.size()][];\n for (int i = 0; i < result.length; i++) {\n result[i] = lines.get(i).coordinates(false);\n }\n@@ -112,7 +113,7 @@ public MultiLineStringBuilder end() {\n return collection;\n }\n \n- public GeoPoint[] coordinates() {\n+ public Coordinate[] coordinates() {\n return super.coordinates(false);\n }\n ", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiLineStringBuilder.java", "status": "modified" }, { "diff": "@@ -22,7 +22,7 @@\n import com.spatial4j.core.shape.Point;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.ShapeCollection;\n-import org.elasticsearch.common.geo.GeoPoint;\n+import com.vividsolutions.jts.geom.Coordinate;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -48,7 +48,7 @@ public Shape build() {\n //Could wrap JtsGeometry but probably slower due to conversions to/from JTS in relate()\n //MultiPoint geometry = FACTORY.createMultiPoint(points.toArray(new Coordinate[points.size()]));\n List<Point> shapes = new ArrayList<>(points.size());\n- for (GeoPoint coord : points) {\n+ for (Coordinate coord : points) {\n shapes.add(SPATIAL_CONTEXT.makePoint(coord.x, coord.y));\n }\n return new ShapeCollection<>(shapes, SPATIAL_CONTEXT);", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPointBuilder.java", "status": "modified" }, { "diff": "@@ -24,10 +24,10 @@\n import java.util.List;\n \n import com.spatial4j.core.shape.ShapeCollection;\n-import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n+import com.vividsolutions.jts.geom.Coordinate;\n \n public class MultiPolygonBuilder extends ShapeBuilder {\n \n@@ -84,7 +84,7 @@ public Shape build() {\n \n if(wrapdateline) {\n for (BasePolygonBuilder<?> polygon : this.polygons) {\n- for(GeoPoint[][] part : polygon.coordinates()) {\n+ for(Coordinate[][] part : polygon.coordinates()) {\n shapes.add(jtsGeometry(PolygonBuilder.polygon(FACTORY, part)));\n }\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPolygonBuilder.java", "status": "modified" }, { "diff": "@@ -21,18 +21,18 @@\n \n import java.io.IOException;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Point;\n+import com.vividsolutions.jts.geom.Coordinate;\n \n public class PointBuilder extends ShapeBuilder {\n \n public static final GeoShapeType TYPE = GeoShapeType.POINT;\n \n- private GeoPoint coordinate;\n+ private Coordinate coordinate;\n \n- public PointBuilder coordinate(GeoPoint coordinate) {\n+ public PointBuilder coordinate(Coordinate coordinate) {\n this.coordinate = coordinate;\n return this;\n }", "filename": 
"src/main/java/org/elasticsearch/common/geo/builders/PointBuilder.java", "status": "modified" }, { "diff": "@@ -24,22 +24,23 @@\n import java.util.Arrays;\n import java.util.Collection;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n+import com.vividsolutions.jts.geom.Coordinate;\n+\n /**\n * The {@link PointCollection} is an abstract base implementation for all GeoShapes. It simply handles a set of points. \n */\n public abstract class PointCollection<E extends PointCollection<E>> extends ShapeBuilder {\n \n- protected final ArrayList<GeoPoint> points;\n+ protected final ArrayList<Coordinate> points;\n protected boolean translated = false;\n \n protected PointCollection() {\n- this(new ArrayList<GeoPoint>());\n+ this(new ArrayList<Coordinate>());\n }\n \n- protected PointCollection(ArrayList<GeoPoint> points) {\n+ protected PointCollection(ArrayList<Coordinate> points) {\n this.points = points;\n }\n \n@@ -63,28 +64,28 @@ public E point(double longitude, double latitude) {\n * @param coordinate coordinate of the point\n * @return this\n */\n- public E point(GeoPoint coordinate) {\n+ public E point(Coordinate coordinate) {\n this.points.add(coordinate);\n return thisRef();\n }\n \n /**\n * Add a array of points to the collection\n * \n- * @param coordinates array of {@link GeoPoint}s to add\n+ * @param coordinates array of {@link Coordinate}s to add\n * @return this\n */\n- public E points(GeoPoint...coordinates) {\n+ public E points(Coordinate...coordinates) {\n return this.points(Arrays.asList(coordinates));\n }\n \n /**\n * Add a collection of points to the collection\n * \n- * @param coordinates array of {@link GeoPoint}s to add\n+ * @param coordinates array of {@link Coordinate}s to add\n * @return this\n */\n- public E points(Collection<? extends GeoPoint> coordinates) {\n+ public E points(Collection<? extends Coordinate> coordinates) {\n this.points.addAll(coordinates);\n return thisRef();\n }\n@@ -95,8 +96,8 @@ public E points(Collection<? 
extends GeoPoint> coordinates) {\n * @param closed if set to true the first point of the array is repeated as last element\n * @return Array of coordinates\n */\n- protected GeoPoint[] coordinates(boolean closed) {\n- GeoPoint[] result = points.toArray(new GeoPoint[points.size() + (closed?1:0)]);\n+ protected Coordinate[] coordinates(boolean closed) {\n+ Coordinate[] result = points.toArray(new Coordinate[points.size() + (closed?1:0)]);\n if(closed) {\n result[result.length-1] = result[0];\n }\n@@ -113,12 +114,12 @@ protected GeoPoint[] coordinates(boolean closed) {\n */\n protected XContentBuilder coordinatesToXcontent(XContentBuilder builder, boolean closed) throws IOException {\n builder.startArray();\n- for(GeoPoint point : points) {\n+ for(Coordinate point : points) {\n toXContent(builder, point);\n }\n if(closed) {\n- GeoPoint start = points.get(0);\n- GeoPoint end = points.get(points.size()-1);\n+ Coordinate start = points.get(0);\n+ Coordinate end = points.get(points.size()-1);\n if(start.x != end.x || start.y != end.y) {\n toXContent(builder, points.get(0));\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PointCollection.java", "status": "modified" }, { "diff": "@@ -21,19 +21,19 @@\n \n import java.util.ArrayList;\n \n-import org.elasticsearch.common.geo.GeoPoint;\n+import com.vividsolutions.jts.geom.Coordinate;\n \n public class PolygonBuilder extends BasePolygonBuilder<PolygonBuilder> {\n \n public PolygonBuilder() {\n- this(new ArrayList<GeoPoint>(), Orientation.RIGHT);\n+ this(new ArrayList<Coordinate>(), Orientation.RIGHT);\n }\n \n public PolygonBuilder(Orientation orientation) {\n- this(new ArrayList<GeoPoint>(), orientation);\n+ this(new ArrayList<Coordinate>(), orientation);\n }\n \n- protected PolygonBuilder(ArrayList<GeoPoint> points, Orientation orientation) {\n+ protected PolygonBuilder(ArrayList<Coordinate> points, Orientation orientation) {\n super(orientation);\n this.shell = new Ring<>(this, points);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PolygonBuilder.java", "status": "modified" }, { "diff": "@@ -22,12 +22,12 @@\n import com.spatial4j.core.context.jts.JtsSpatialContext;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n+import org.apache.commons.lang3.tuple.Pair;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n-import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n@@ -57,7 +57,7 @@ public abstract class ShapeBuilder implements ToXContent {\n DEBUG = debug;\n }\n \n- public static final double DATELINE = GeoUtils.DATELINE;\n+ public static final double DATELINE = 180;\n // TODO how might we use JtsSpatialContextFactory to configure the context (esp. 
for non-geo)?\n public static final JtsSpatialContext SPATIAL_CONTEXT = JtsSpatialContext.GEO;\n public static final GeometryFactory FACTORY = SPATIAL_CONTEXT.getGeometryFactory();\n@@ -84,8 +84,8 @@ protected ShapeBuilder(Orientation orientation) {\n this.orientation = orientation;\n }\n \n- protected static GeoPoint coordinate(double longitude, double latitude) {\n- return new GeoPoint(latitude, longitude);\n+ protected static Coordinate coordinate(double longitude, double latitude) {\n+ return new Coordinate(longitude, latitude);\n }\n \n protected JtsGeometry jtsGeometry(Geometry geom) {\n@@ -106,15 +106,15 @@ protected JtsGeometry jtsGeometry(Geometry geom) {\n * @return a new {@link PointBuilder}\n */\n public static PointBuilder newPoint(double longitude, double latitude) {\n- return newPoint(new GeoPoint(latitude, longitude));\n+ return newPoint(new Coordinate(longitude, latitude));\n }\n \n /**\n- * Create a new {@link PointBuilder} from a {@link GeoPoint}\n+ * Create a new {@link PointBuilder} from a {@link Coordinate}\n * @param coordinate coordinate defining the position of the point\n * @return a new {@link PointBuilder}\n */\n- public static PointBuilder newPoint(GeoPoint coordinate) {\n+ public static PointBuilder newPoint(Coordinate coordinate) {\n return new PointBuilder().coordinate(coordinate);\n }\n \n@@ -250,7 +250,7 @@ private static CoordinateNode parseCoordinates(XContentParser parser) throws IOE\n token = parser.nextToken();\n double lat = parser.doubleValue();\n token = parser.nextToken();\n- return new CoordinateNode(new GeoPoint(lat, lon));\n+ return new CoordinateNode(new Coordinate(lon, lat));\n } else if (token == XContentParser.Token.VALUE_NULL) {\n throw new ElasticsearchIllegalArgumentException(\"coordinates cannot contain NULL values)\");\n }\n@@ -289,7 +289,7 @@ public static ShapeBuilder parse(XContentParser parser, GeoShapeFieldMapper geoD\n return GeoShapeType.parse(parser, geoDocMapper);\n }\n \n- protected static XContentBuilder toXContent(XContentBuilder builder, GeoPoint coordinate) throws IOException {\n+ protected static XContentBuilder toXContent(XContentBuilder builder, Coordinate coordinate) throws IOException {\n return builder.startArray().value(coordinate.x).value(coordinate.y).endArray();\n }\n \n@@ -309,11 +309,11 @@ public static Orientation orientationFromString(String orientation) {\n }\n }\n \n- protected static GeoPoint shift(GeoPoint coordinate, double dateline) {\n+ protected static Coordinate shift(Coordinate coordinate, double dateline) {\n if (dateline == 0) {\n return coordinate;\n } else {\n- return new GeoPoint(coordinate.y, -2 * dateline + coordinate.x);\n+ return new Coordinate(-2 * dateline + coordinate.x, coordinate.y);\n }\n }\n \n@@ -325,7 +325,7 @@ protected static GeoPoint shift(GeoPoint coordinate, double dateline) {\n \n /**\n * Calculate the intersection of a line segment and a vertical dateline.\n- *\n+ * \n * @param p1\n * start-point of the line segment\n * @param p2\n@@ -336,7 +336,7 @@ protected static GeoPoint shift(GeoPoint coordinate, double dateline) {\n * segment intersects with the line segment. 
Otherwise this method\n * returns {@link Double#NaN}\n */\n- protected static final double intersection(GeoPoint p1, GeoPoint p2, double dateline) {\n+ protected static final double intersection(Coordinate p1, Coordinate p2, double dateline) {\n if (p1.x == p2.x && p1.x != dateline) {\n return Double.NaN;\n } else if (p1.x == p2.x && p1.x == dateline) {\n@@ -366,8 +366,8 @@ protected static int intersections(double dateline, Edge[] edges) {\n int numIntersections = 0;\n assert !Double.isNaN(dateline);\n for (int i = 0; i < edges.length; i++) {\n- GeoPoint p1 = edges[i].coordinate;\n- GeoPoint p2 = edges[i].next.coordinate;\n+ Coordinate p1 = edges[i].coordinate;\n+ Coordinate p2 = edges[i].next.coordinate;\n assert !Double.isNaN(p2.x) && !Double.isNaN(p1.x); \n edges[i].intersect = Edge.MAX_COORDINATE;\n \n@@ -384,21 +384,21 @@ protected static int intersections(double dateline, Edge[] edges) {\n /**\n * Node used to represent a tree of coordinates.\n * <p/>\n- * Can either be a leaf node consisting of a GeoPoint, or a parent with\n+ * Can either be a leaf node consisting of a Coordinate, or a parent with\n * children\n */\n protected static class CoordinateNode implements ToXContent {\n \n- protected final GeoPoint coordinate;\n+ protected final Coordinate coordinate;\n protected final List<CoordinateNode> children;\n \n /**\n * Creates a new leaf CoordinateNode\n * \n * @param coordinate\n- * GeoPoint for the Node\n+ * Coordinate for the Node\n */\n- protected CoordinateNode(GeoPoint coordinate) {\n+ protected CoordinateNode(Coordinate coordinate) {\n this.coordinate = coordinate;\n this.children = null;\n }\n@@ -434,17 +434,17 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n \n /**\n- * This helper class implements a linked list for {@link GeoPoint}. It contains\n+ * This helper class implements a linked list for {@link Coordinate}. 
It contains\n * fields for a dateline intersection and component id \n */\n protected static final class Edge {\n- GeoPoint coordinate; // coordinate of the start point\n+ Coordinate coordinate; // coordinate of the start point\n Edge next; // next segment\n- GeoPoint intersect; // potential intersection with dateline\n+ Coordinate intersect; // potential intersection with dateline\n int component = -1; // id of the component this edge belongs to\n- public static final GeoPoint MAX_COORDINATE = new GeoPoint(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);\n+ public static final Coordinate MAX_COORDINATE = new Coordinate(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);\n \n- protected Edge(GeoPoint coordinate, Edge next, GeoPoint intersection) {\n+ protected Edge(Coordinate coordinate, Edge next, Coordinate intersection) {\n this.coordinate = coordinate;\n this.next = next;\n this.intersect = intersection;\n@@ -453,11 +453,11 @@ protected Edge(GeoPoint coordinate, Edge next, GeoPoint intersection) {\n }\n }\n \n- protected Edge(GeoPoint coordinate, Edge next) {\n+ protected Edge(Coordinate coordinate, Edge next) {\n this(coordinate, next, Edge.MAX_COORDINATE);\n }\n \n- private static final int top(GeoPoint[] points, int offset, int length) {\n+ private static final int top(Coordinate[] points, int offset, int length) {\n int top = 0; // we start at 1 here since top points to 0\n for (int i = 1; i < length; i++) {\n if (points[offset + i].y < points[offset + top].y) {\n@@ -471,6 +471,29 @@ private static final int top(GeoPoint[] points, int offset, int length) {\n return top;\n }\n \n+ private static final Pair range(Coordinate[] points, int offset, int length) {\n+ double minX = points[0].x;\n+ double maxX = points[0].x;\n+ double minY = points[0].y;\n+ double maxY = points[0].y;\n+ // compute the bounding coordinates (@todo: cleanup brute force)\n+ for (int i = 1; i < length; ++i) {\n+ if (points[offset + i].x < minX) {\n+ minX = points[offset + i].x;\n+ }\n+ if (points[offset + i].x > maxX) {\n+ maxX = points[offset + i].x;\n+ }\n+ if (points[offset + i].y < minY) {\n+ minY = points[offset + i].y;\n+ }\n+ if (points[offset + i].y > maxY) {\n+ maxY = points[offset + i].y;\n+ }\n+ }\n+ return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n+ }\n+\n /**\n * Concatenate a set of points to a polygon\n * \n@@ -480,6 +503,8 @@ private static final int top(GeoPoint[] points, int offset, int length) {\n * direction of the ring\n * @param points\n * list of points to concatenate\n+ * @param pointOffset\n+ * index of the first point\n * @param edges\n * Array of edges to write the result to\n * @param edgeOffset\n@@ -488,29 +513,27 @@ private static final int top(GeoPoint[] points, int offset, int length) {\n * number of points to use\n * @return the edges creates\n */\n- private static Edge[] concat(int component, boolean direction, GeoPoint[] points, Edge[] edges, final int edgeOffset,\n- int length) {\n+ private static Edge[] concat(int component, boolean direction, Coordinate[] points, final int pointOffset, Edge[] edges, final int edgeOffset,\n+ int length) {\n assert edges.length >= length+edgeOffset;\n- assert points.length >= length;\n- edges[edgeOffset] = new Edge(points[0], null);\n- int edgeEnd = edgeOffset + length;\n-\n- for (int i = edgeOffset+1, p = 1; i < edgeEnd; ++i, ++p) {\n+ assert points.length >= length+pointOffset;\n+ edges[edgeOffset] = new Edge(points[pointOffset], null);\n+ for (int i = 1; i < length; i++) {\n if (direction) {\n- edges[i] = new Edge(points[p], 
edges[i - 1]);\n- edges[i].component = component;\n+ edges[edgeOffset + i] = new Edge(points[pointOffset + i], edges[edgeOffset + i - 1]);\n+ edges[edgeOffset + i].component = component;\n } else {\n- edges[i - 1].next = edges[i] = new Edge(points[p], null);\n- edges[i - 1].component = component;\n+ edges[edgeOffset + i - 1].next = edges[edgeOffset + i] = new Edge(points[pointOffset + i], null);\n+ edges[edgeOffset + i - 1].component = component;\n }\n }\n \n if (direction) {\n- edges[edgeOffset].next = edges[edgeEnd - 1];\n+ edges[edgeOffset].next = edges[edgeOffset + length - 1];\n edges[edgeOffset].component = component;\n } else {\n- edges[edgeEnd - 1].next = edges[edgeOffset];\n- edges[edgeEnd - 1].component = component;\n+ edges[edgeOffset + length - 1].next = edges[edgeOffset];\n+ edges[edgeOffset + length - 1].component = component;\n }\n \n return edges;\n@@ -521,47 +544,82 @@ private static Edge[] concat(int component, boolean direction, GeoPoint[] points\n * \n * @param points\n * array of point\n+ * @param offset\n+ * index of the first point\n * @param length\n * number of points\n * @return Array of edges\n */\n protected static Edge[] ring(int component, boolean direction, boolean handedness, BaseLineStringBuilder<?> shell,\n- GeoPoint[] points, Edge[] edges, int edgeOffset, int length) {\n+ Coordinate[] points, int offset, Edge[] edges, int toffset, int length) {\n // calculate the direction of the points:\n- boolean orientation = GeoUtils.computePolyOrientation(points, length);\n- boolean corrected = GeoUtils.correctPolyAmbiguity(points, handedness, orientation, component, length,\n- shell.translated);\n-\n- // correct the orientation post translation (ccw for shell, cw for holes)\n- if (corrected && (component == 0 || (component != 0 && handedness == orientation))) {\n+ // find the point a the top of the set and check its\n+ // neighbors orientation. So direction is equivalent\n+ // to clockwise/counterclockwise\n+ final int top = top(points, offset, length);\n+ final int prev = (offset + ((top + length - 1) % length));\n+ final int next = (offset + ((top + 1) % length));\n+ boolean orientation = points[offset + prev].x > points[offset + next].x;\n+\n+ // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness) \n+ // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n+ // thus if orientation is computed as cw, the logic will translate points across dateline\n+ // and convert to a right handed system\n+\n+ // compute the bounding box and calculate range\n+ Pair<Pair, Pair> range = range(points, offset, length);\n+ final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n+ // translate the points if the following is true\n+ // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres \n+ // (translation would result in a collapsed poly)\n+ // 2. 
the shell of the candidate hole has been translated (to preserve the coordinate system)\n+ boolean incorrectOrientation = component == 0 && handedness != orientation;\n+ if ( (incorrectOrientation && (rng > DATELINE && rng != 2*DATELINE)) || (shell.translated && component != 0)) {\n+ translate(points);\n+ // flip the translation bit if the shell is being translated\n if (component == 0) {\n- shell.translated = corrected;\n+ shell.translated = true;\n+ }\n+ // correct the orientation post translation (ccw for shell, cw for holes)\n+ if (component == 0 || (component != 0 && handedness == orientation)) {\n+ orientation = !orientation;\n+ }\n+ }\n+ return concat(component, direction ^ orientation, points, offset, edges, toffset, length);\n+ }\n+\n+ /**\n+ * Transforms coordinates in the eastern hemisphere (-180:0) to a (180:360) range \n+ * @param points\n+ */\n+ protected static void translate(Coordinate[] points) {\n+ for (Coordinate c : points) {\n+ if (c.x < 0) {\n+ c.x += 2*DATELINE;\n }\n- orientation = !orientation;\n }\n- return concat(component, direction ^ orientation, points, edges, edgeOffset, length);\n }\n \n /**\n * Set the intersection of this line segment to the given position\n * \n * @param position\n * position of the intersection [0..1]\n- * @return the {@link GeoPoint} of the intersection\n+ * @return the {@link Coordinate} of the intersection\n */\n- protected GeoPoint intersection(double position) {\n+ protected Coordinate intersection(double position) {\n return intersect = position(coordinate, next.coordinate, position);\n }\n \n- public static GeoPoint position(GeoPoint p1, GeoPoint p2, double position) {\n+ public static Coordinate position(Coordinate p1, Coordinate p2, double position) {\n if (position == 0) {\n return p1;\n } else if (position == 1) {\n return p2;\n } else {\n final double x = p1.x + position * (p2.x - p1.x);\n final double y = p1.y + position * (p2.y - p1.y);\n- return new GeoPoint(y, x);\n+ return new Coordinate(x, y);\n }\n }\n \n@@ -735,12 +793,12 @@ protected static EnvelopeBuilder parseEnvelope(CoordinateNode coordinates, Orien\n \"geo_shape ('envelope') when expecting an array of 2 coordinates\");\n }\n // verify coordinate bounds, correct if necessary\n- GeoPoint uL = coordinates.children.get(0).coordinate;\n- GeoPoint lR = coordinates.children.get(1).coordinate;\n+ Coordinate uL = coordinates.children.get(0).coordinate;\n+ Coordinate lR = coordinates.children.get(1).coordinate;\n if (((lR.x < uL.x) || (uL.y < lR.y))) {\n- GeoPoint uLtmp = uL;\n- uL = new GeoPoint(Math.max(uL.y, lR.y), Math.min(uL.x, lR.x));\n- lR = new GeoPoint(Math.min(uLtmp.y, lR.y), Math.max(uLtmp.x, lR.x));\n+ Coordinate uLtmp = uL;\n+ uL = new Coordinate(Math.min(uL.x, lR.x), Math.max(uL.y, lR.y));\n+ lR = new Coordinate(Math.max(uLtmp.x, lR.x), Math.min(uLtmp.y, lR.y));\n }\n return newEnvelope(orientation).topLeft(uL).bottomRight(lR);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -294,10 +294,6 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n if (includeDefaults || defaultStrategy.getDistErrPct() != Defaults.DISTANCE_ERROR_PCT) {\n builder.field(Names.DISTANCE_ERROR_PCT, defaultStrategy.getDistErrPct());\n }\n-\n- if (includeDefaults || shapeOrientation != Defaults.ORIENTATION ) {\n- builder.field(Names.ORIENTATION, shapeOrientation);\n- }\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java", 
"status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.geo.GeoPoint;\n-import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;\n import org.elasticsearch.index.fielddata.MultiGeoPointValues;\n \n@@ -94,27 +93,15 @@ protected boolean matchDoc(int doc) {\n \n private static boolean pointInPolygon(GeoPoint[] points, double lat, double lon) {\n boolean inPoly = false;\n- // @TODO handedness will be an option provided by the parser\n- boolean corrected = GeoUtils.correctPolyAmbiguity(points, false);\n- GeoPoint p = (corrected) ?\n- GeoUtils.convertToGreatCircle(lat, lon) :\n- new GeoPoint(lat, lon);\n-\n- GeoPoint pp0 = (corrected) ? GeoUtils.convertToGreatCircle(points[0]) : points[0] ;\n- GeoPoint pp1;\n- // simple even-odd PIP computation\n- // 1. Determine if point is contained in the longitudinal range\n- // 2. Determine whether point crosses the edge by computing the latitudinal delta\n- // between the end-point of a parallel vector (originating at the point) and the\n- // y-component of the edge sink\n+\n for (int i = 1; i < points.length; i++) {\n- pp1 = points[i];\n- if (pp1.x < p.x && pp0.x >= p.x || pp0.x < p.x && pp1.x >= p.x) {\n- if (pp1.y + (p.x - pp1.x) / (pp0.x - pp1.x) * (pp0.y - pp1.y) < p.y) {\n+ if (points[i].lon() < lon && points[i-1].lon() >= lon\n+ || points[i-1].lon() < lon && points[i].lon() >= lon) {\n+ if (points[i].lat() + (lon - points[i].lon()) /\n+ (points[i-1].lon() - points[i].lon()) * (points[i-1].lat() - points[i].lat()) < lat) {\n inPoly = !inPoly;\n }\n }\n- pp0 = pp1;\n }\n return inPoly;\n }", "filename": "src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java", "status": "modified" }, { "diff": "@@ -26,13 +26,7 @@\n import com.spatial4j.core.shape.ShapeCollection;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n import com.spatial4j.core.shape.jts.JtsPoint;\n-import com.vividsolutions.jts.geom.Geometry;\n-import com.vividsolutions.jts.geom.GeometryFactory;\n-import com.vividsolutions.jts.geom.LineString;\n-import com.vividsolutions.jts.geom.LinearRing;\n-import com.vividsolutions.jts.geom.MultiLineString;\n-import com.vividsolutions.jts.geom.Point;\n-import com.vividsolutions.jts.geom.Polygon;\n+import com.vividsolutions.jts.geom.*;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n@@ -63,7 +57,7 @@ public void testParse_simplePoint() throws IOException {\n .startArray(\"coordinates\").value(100.0).value(0.0).endArray()\n .endObject().string();\n \n- Point expected = GEOMETRY_FACTORY.createPoint(new GeoPoint(0.0, 100.0));\n+ Point expected = GEOMETRY_FACTORY.createPoint(new Coordinate(100.0, 0.0));\n assertGeometryEquals(new JtsPoint(expected, SPATIAL_CONTEXT), pointGeoJson);\n }\n \n@@ -75,12 +69,12 @@ public void testParse_lineString() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<GeoPoint> lineCoordinates = new ArrayList<>();\n- lineCoordinates.add(new GeoPoint(0, 100));\n- lineCoordinates.add(new GeoPoint(1, 101));\n+ List<Coordinate> lineCoordinates = new ArrayList<>();\n+ lineCoordinates.add(new Coordinate(100, 0));\n+ lineCoordinates.add(new Coordinate(101, 1));\n \n LineString expected = GEOMETRY_FACTORY.createLineString(\n- lineCoordinates.toArray(new GeoPoint[lineCoordinates.size()]));\n+ 
lineCoordinates.toArray(new Coordinate[lineCoordinates.size()]));\n assertGeometryEquals(jtsGeom(expected), lineGeoJson);\n }\n \n@@ -99,13 +93,13 @@ public void testParse_multiLineString() throws IOException {\n .endObject().string();\n \n MultiLineString expected = GEOMETRY_FACTORY.createMultiLineString(new LineString[]{\n- GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n- new GeoPoint(0, 100),\n- new GeoPoint(1, 101),\n+ GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n+ new Coordinate(100, 0),\n+ new Coordinate(101, 1),\n }),\n- GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n- new GeoPoint(2, 102),\n- new GeoPoint(3, 103),\n+ GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n+ new Coordinate(102, 2),\n+ new Coordinate(103, 3),\n }),\n });\n assertGeometryEquals(jtsGeom(expected), multilinesGeoJson);\n@@ -179,14 +173,14 @@ public void testParse_polygonNoHoles() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<GeoPoint> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(0, 100));\n- shellCoordinates.add(new GeoPoint(0, 101));\n- shellCoordinates.add(new GeoPoint(1, 101));\n- shellCoordinates.add(new GeoPoint(1, 100));\n- shellCoordinates.add(new GeoPoint(0, 100));\n+ List<Coordinate> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new Coordinate(100, 0));\n \n- LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, null);\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n@@ -573,25 +567,25 @@ public void testParse_polygonWithHole() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<GeoPoint> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(0, 100));\n- shellCoordinates.add(new GeoPoint(0, 101));\n- shellCoordinates.add(new GeoPoint(1, 101));\n- shellCoordinates.add(new GeoPoint(1, 100));\n- shellCoordinates.add(new GeoPoint(0, 100));\n+ List<Coordinate> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new Coordinate(100, 0));\n \n- List<GeoPoint> holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ List<Coordinate> holeCoordinates = new ArrayList<>();\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n \n LinearRing shell = GEOMETRY_FACTORY.createLinearRing(\n- shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n LinearRing[] holes = new LinearRing[1];\n holes[0] = 
GEOMETRY_FACTORY.createLinearRing(\n- holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n+ holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, holes);\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n@@ -663,34 +657,34 @@ public void testParse_multiPolygon() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<GeoPoint> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(0, 100));\n- shellCoordinates.add(new GeoPoint(0, 101));\n- shellCoordinates.add(new GeoPoint(1, 101));\n- shellCoordinates.add(new GeoPoint(1, 100));\n- shellCoordinates.add(new GeoPoint(0, 100));\n+ List<Coordinate> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new Coordinate(100, 0));\n \n- List<GeoPoint> holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ List<Coordinate> holeCoordinates = new ArrayList<>();\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n \n- LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n LinearRing[] holes = new LinearRing[1];\n- holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n+ holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n Polygon withHoles = GEOMETRY_FACTORY.createPolygon(shell, holes);\n \n shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(3, 102));\n- shellCoordinates.add(new GeoPoint(3, 103));\n- shellCoordinates.add(new GeoPoint(2, 103));\n- shellCoordinates.add(new GeoPoint(2, 102));\n- shellCoordinates.add(new GeoPoint(3, 102));\n+ shellCoordinates.add(new Coordinate(102, 3));\n+ shellCoordinates.add(new Coordinate(103, 3));\n+ shellCoordinates.add(new Coordinate(103, 2));\n+ shellCoordinates.add(new Coordinate(102, 2));\n+ shellCoordinates.add(new Coordinate(102, 3));\n \n \n- shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n Polygon withoutHoles = GEOMETRY_FACTORY.createPolygon(shell, null);\n \n Shape expected = shapeCollection(withoutHoles, withHoles);\n@@ -722,22 +716,22 @@ public void testParse_multiPolygon() throws IOException {\n .endObject().string();\n \n shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new GeoPoint(1, 100));\n- shellCoordinates.add(new GeoPoint(1, 101));\n- shellCoordinates.add(new GeoPoint(0, 101));\n- shellCoordinates.add(new GeoPoint(0, 100));\n- shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new Coordinate(100, 1));\n+ 
shellCoordinates.add(new Coordinate(101, 1));\n+ shellCoordinates.add(new Coordinate(101, 0));\n+ shellCoordinates.add(new Coordinate(100, 0));\n+ shellCoordinates.add(new Coordinate(100, 1));\n \n holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new GeoPoint(0.8, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.2));\n- holeCoordinates.add(new GeoPoint(0.2, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.8));\n- holeCoordinates.add(new GeoPoint(0.8, 100.2));\n+ holeCoordinates.add(new Coordinate(100.2, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.2));\n+ holeCoordinates.add(new Coordinate(100.8, 0.8));\n+ holeCoordinates.add(new Coordinate(100.2, 0.8));\n \n- shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n+ shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n holes = new LinearRing[1];\n- holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n+ holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n withHoles = GEOMETRY_FACTORY.createPolygon(shell, holes);\n \n assertGeometryEquals(jtsGeom(withHoles), multiPolygonGeoJson);\n@@ -763,12 +757,12 @@ public void testParse_geometryCollection() throws IOException {\n .string();\n \n Shape[] expected = new Shape[2];\n- LineString expectedLineString = GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n- new GeoPoint(0, 100),\n- new GeoPoint(1, 101),\n+ LineString expectedLineString = GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n+ new Coordinate(100, 0),\n+ new Coordinate(101, 1),\n });\n expected[0] = jtsGeom(expectedLineString);\n- Point expectedPoint = GEOMETRY_FACTORY.createPoint(new GeoPoint(2.0, 102.0));\n+ Point expectedPoint = GEOMETRY_FACTORY.createPoint(new Coordinate(102.0, 2.0));\n expected[1] = new JtsPoint(expectedPoint, SPATIAL_CONTEXT);\n \n //equals returns true only if geometries are in the same order\n@@ -791,7 +785,7 @@ public void testThatParserExtractsCorrectTypeAndCoordinatesFromArbitraryJson() t\n .startObject(\"lala\").field(\"type\", \"NotAPoint\").endObject()\n .endObject().string();\n \n- Point expected = GEOMETRY_FACTORY.createPoint(new GeoPoint(0.0, 100.0));\n+ Point expected = GEOMETRY_FACTORY.createPoint(new Coordinate(100.0, 0.0));\n assertGeometryEquals(new JtsPoint(expected, SPATIAL_CONTEXT), pointGeoJson);\n }\n ", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" }, { "diff": "@@ -24,6 +24,7 @@\n import com.spatial4j.core.shape.Rectangle;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.impl.PointImpl;\n+import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.LineString;\n import com.vividsolutions.jts.geom.Polygon;\n import org.elasticsearch.common.geo.builders.PolygonBuilder;\n@@ -63,39 +64,38 @@ public void testNewPolygon() {\n .point(-45, 30).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n- assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n- assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n- assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n+ assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n+ 
assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n+ assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n+ assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n \n @Test\n public void testNewPolygon_coordinate() {\n Polygon polygon = ShapeBuilder.newPolygon()\n- .point(new GeoPoint(30, -45))\n- .point(new GeoPoint(30, 45))\n- .point(new GeoPoint(-30, 45))\n- .point(new GeoPoint(-30, -45))\n- .point(new GeoPoint(30, -45)).toPolygon();\n+ .point(new Coordinate(-45, 30))\n+ .point(new Coordinate(45, 30))\n+ .point(new Coordinate(45, -30))\n+ .point(new Coordinate(-45, -30))\n+ .point(new Coordinate(-45, 30)).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n- assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n- assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n- assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n+ assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n+ assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n+ assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n+ assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n \n @Test\n public void testNewPolygon_coordinates() {\n Polygon polygon = ShapeBuilder.newPolygon()\n- .points(new GeoPoint(30, -45), new GeoPoint(30, 45), new GeoPoint(-30, 45), new GeoPoint(-30, -45),\n- new GeoPoint(30, -45)).toPolygon();\n+ .points(new Coordinate(-45, 30), new Coordinate(45, 30), new Coordinate(45, -30), new Coordinate(-45, -30), new Coordinate(-45, 30)).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n- assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n- assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n- assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n+ assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n+ assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n+ assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n+ assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java", "status": "modified" } ] }
{ "body": "I keep getting the same NPE when performing snapshots in 1.4.2:\n\n```\njava.lang.NullPointerException\n at org.elasticsearch.snapshots.SnapshotsService.shards(SnapshotsService.java:1195)\n at org.elasticsearch.snapshots.SnapshotsService.access$800(SnapshotsService.java:88)\n at org.elasticsearch.snapshots.SnapshotsService$2.execute(SnapshotsService.java:300)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:329)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:744)\n```\n\nThe line in question is:\n\n`IndexRoutingTable indexRoutingTable = clusterState.getRoutingTable().index(index);`\n\nclusterState isn't null because it's successfully dererenced above. So that leaves getRoutingTable(). Any ideas as to why that would return null?\n", "comments": [], "number": 9024, "title": "NPE in SnapshotsService in 1.4.2" }
{ "body": "If an index is deleted during initial state of the snapshot operation, the entire snapshot can fail with NPE. This commit improves handling of this situation and allows snapshot to continue if partial snapshots are allowed.\n\nCloses #9024\n", "number": 9418, "review_comments": [ { "body": "How can you be sure the snapshot didn't complete prior to running these deletes without some type of latch?\n", "created_at": "2015-01-27T00:56:39Z" } ], "title": "Better handling of index deletion during snapshot" }
{ "commits": [ { "message": "Snapshot/Restore: better handling of index deletion during snapshot\n\nIf an index is deleted during initial state of the snapshot operation, the entire snapshot can fail with NPE. This commit improves handling of this situation and allows snapshot to continue if partial snapshots are allowed.\n\nCloses #9024" } ], "files": [ { "diff": "@@ -306,7 +306,7 @@ public ClusterState execute(ClusterState currentState) {\n for (SnapshotMetaData.Entry entry : snapshots.entries()) {\n if (entry.snapshotId().equals(snapshot.snapshotId())) {\n // Replace the snapshot that was just created\n- ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards = shards(entry.snapshotId(), currentState, entry.indices());\n+ ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards = shards(currentState, entry.indices());\n if (!partial) {\n Set<String> indicesWithMissingShards = indicesWithMissingShards(shards);\n if (indicesWithMissingShards != null) {\n@@ -1196,33 +1196,37 @@ public void run() {\n /**\n * Calculates the list of shards that should be included into the current snapshot\n *\n- * @param snapshotId snapshot id\n * @param clusterState cluster state\n * @param indices list of indices to be snapshotted\n * @return list of shard to be included into current snapshot\n */\n- private ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards(SnapshotId snapshotId, ClusterState clusterState, ImmutableList<String> indices) {\n+ private ImmutableMap<ShardId, SnapshotMetaData.ShardSnapshotStatus> shards(ClusterState clusterState, ImmutableList<String> indices) {\n ImmutableMap.Builder<ShardId, SnapshotMetaData.ShardSnapshotStatus> builder = ImmutableMap.builder();\n MetaData metaData = clusterState.metaData();\n for (String index : indices) {\n IndexMetaData indexMetaData = metaData.index(index);\n- IndexRoutingTable indexRoutingTable = clusterState.getRoutingTable().index(index);\n- for (int i = 0; i < indexMetaData.numberOfShards(); i++) {\n- ShardId shardId = new ShardId(index, i);\n- if (indexRoutingTable != null) {\n- ShardRouting primary = indexRoutingTable.shard(i).primaryShard();\n- if (primary == null || !primary.assignedToNode()) {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"primary shard is not allocated\"));\n- } else if (primary.relocating() || primary.initializing()) {\n- // The WAITING state was introduced in V1.2.0 - don't use it if there are nodes with older version in the cluster\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.WAITING));\n- } else if (!primary.started()) {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.MISSING, \"primary shard hasn't been started yet\"));\n+ if (indexMetaData == null) {\n+ // The index was deleted before we managed to start the snapshot - mark it as missing.\n+ builder.put(new ShardId(index, 0), new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"missing index\"));\n+ } else {\n+ IndexRoutingTable indexRoutingTable = clusterState.getRoutingTable().index(index);\n+ for (int i = 0; i < indexMetaData.numberOfShards(); i++) {\n+ ShardId shardId = new ShardId(index, i);\n+ if (indexRoutingTable != null) {\n+ ShardRouting primary = indexRoutingTable.shard(i).primaryShard();\n+ if (primary == null || !primary.assignedToNode()) {\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"primary shard is not allocated\"));\n+ } else if 
(primary.relocating() || primary.initializing()) {\n+ // The WAITING state was introduced in V1.2.0 - don't use it if there are nodes with older version in the cluster\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.WAITING));\n+ } else if (!primary.started()) {\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId(), State.MISSING, \"primary shard hasn't been started yet\"));\n+ } else {\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId()));\n+ }\n } else {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(primary.currentNodeId()));\n+ builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"missing routing table\"));\n }\n- } else {\n- builder.put(shardId, new SnapshotMetaData.ShardSnapshotStatus(null, State.MISSING, \"missing routing table\"));\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -1558,6 +1558,59 @@ public void changeSettingsOnRestoreTest() throws Exception {\n \n }\n \n+ @Test\n+ public void deleteIndexDuringSnapshotTest() throws Exception {\n+ Client client = client();\n+\n+ boolean allowPartial = randomBoolean();\n+\n+ logger.info(\"--> creating repository\");\n+ assertAcked(client.admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(MockRepositoryModule.class.getCanonicalName()).setSettings(ImmutableSettings.settingsBuilder()\n+ .put(\"location\", newTempDirPath())\n+ .put(\"compress\", randomBoolean())\n+ .put(\"chunk_size\", randomIntBetween(100, 1000))\n+ .put(\"block_on_init\", true)\n+ ));\n+\n+ createIndex(\"test-idx-1\", \"test-idx-2\", \"test-idx-3\");\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data\");\n+ for (int i = 0; i < 100; i++) {\n+ index(\"test-idx-1\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ index(\"test-idx-2\", \"doc\", Integer.toString(i), \"foo\", \"baz\" + i);\n+ index(\"test-idx-3\", \"doc\", Integer.toString(i), \"foo\", \"baz\" + i);\n+ }\n+ refresh();\n+ assertThat(client.prepareCount(\"test-idx-1\").get().getCount(), equalTo(100L));\n+ assertThat(client.prepareCount(\"test-idx-2\").get().getCount(), equalTo(100L));\n+ assertThat(client.prepareCount(\"test-idx-3\").get().getCount(), equalTo(100L));\n+\n+ logger.info(\"--> snapshot allow partial {}\", allowPartial);\n+ ListenableActionFuture<CreateSnapshotResponse> future = client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\")\n+ .setIndices(\"test-idx-*\").setWaitForCompletion(true).setPartial(allowPartial).execute();\n+ logger.info(\"--> wait for block to kick in\");\n+ waitForBlock(internalCluster().getMasterName(), \"test-repo\", TimeValue.timeValueMinutes(1));\n+ logger.info(\"--> delete some indices while snapshot is running\");\n+ client.admin().indices().prepareDelete(\"test-idx-1\", \"test-idx-2\").get();\n+ logger.info(\"--> unblock running master node\");\n+ unblockNode(internalCluster().getMasterName());\n+ logger.info(\"--> waiting for snapshot to finish\");\n+ CreateSnapshotResponse createSnapshotResponse = future.get();\n+\n+ if (allowPartial) {\n+ logger.info(\"Deleted index during snapshot, but allow partial\");\n+ assertThat(createSnapshotResponse.getSnapshotInfo().state(), equalTo((SnapshotState.PARTIAL)));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().failedShards(), 
greaterThan(0));\n+ assertThat(createSnapshotResponse.getSnapshotInfo().successfulShards(), lessThan(createSnapshotResponse.getSnapshotInfo().totalShards()));\n+ } else {\n+ logger.info(\"Deleted index during snapshot and doesn't allow partial\");\n+ assertThat(createSnapshotResponse.getSnapshotInfo().state(), equalTo((SnapshotState.FAILED)));\n+ }\n+ }\n+\n private boolean waitForIndex(final String index, TimeValue timeout) throws InterruptedException {\n return awaitBusy(new Predicate<Object>() {\n @Override", "filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java", "status": "modified" }, { "diff": "@@ -19,10 +19,13 @@\n \n package org.elasticsearch.snapshots.mockstore;\n \n+import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n \n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.metadata.SnapshotId;\n import org.elasticsearch.common.blobstore.BlobMetaData;\n import org.elasticsearch.common.blobstore.BlobPath;\n import org.elasticsearch.common.blobstore.BlobStore;\n@@ -46,6 +49,7 @@\n import java.util.concurrent.ConcurrentMap;\n import java.util.concurrent.atomic.AtomicLong;\n \n+import static com.carrotsearch.randomizedtesting.RandomizedTest.randomAsciiOfLength;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n \n /**\n@@ -68,6 +72,8 @@ public long getFailureCount() {\n \n private final String randomPrefix;\n \n+ private volatile boolean blockOnInitialization;\n+\n private volatile boolean blockOnControlFiles;\n \n private volatile boolean blockOnDataFiles;\n@@ -81,12 +87,21 @@ public MockRepository(RepositoryName name, RepositorySettings repositorySettings\n randomDataFileIOExceptionRate = repositorySettings.settings().getAsDouble(\"random_data_file_io_exception_rate\", 0.0);\n blockOnControlFiles = repositorySettings.settings().getAsBoolean(\"block_on_control\", false);\n blockOnDataFiles = repositorySettings.settings().getAsBoolean(\"block_on_data\", false);\n- randomPrefix = repositorySettings.settings().get(\"random\");\n+ blockOnInitialization = repositorySettings.settings().getAsBoolean(\"block_on_init\", false);\n+ randomPrefix = repositorySettings.settings().get(\"random\", \"default\");\n waitAfterUnblock = repositorySettings.settings().getAsLong(\"wait_after_unblock\", 0L);\n logger.info(\"starting mock repository with random prefix \" + randomPrefix);\n mockBlobStore = new MockBlobStore(super.blobStore());\n }\n \n+ @Override\n+ public void initializeSnapshot(SnapshotId snapshotId, ImmutableList<String> indices, MetaData metaData) {\n+ if (blockOnInitialization ) {\n+ blockExecution();\n+ }\n+ super.initializeSnapshot(snapshotId, indices, metaData);\n+ }\n+\n private static RepositorySettings overrideSettings(RepositorySettings repositorySettings, ClusterService clusterService) {\n if (repositorySettings.settings().getAsBoolean(\"localize_location\", false)) {\n return new RepositorySettings(\n@@ -118,12 +133,8 @@ protected BlobStore blobStore() {\n return mockBlobStore;\n }\n \n- public boolean blocked() {\n- return mockBlobStore.blocked();\n- }\n-\n public void unblock() {\n- mockBlobStore.unblockExecution();\n+ unblockExecution();\n }\n \n public void blockOnDataFiles(boolean blocked) {\n@@ -134,6 +145,37 @@ public void blockOnControlFiles(boolean blocked) {\n blockOnControlFiles = blocked;\n }\n \n+ public 
synchronized void unblockExecution() {\n+ if (blocked) {\n+ blocked = false;\n+ // Clean blocking flags, so we wouldn't try to block again\n+ blockOnDataFiles = false;\n+ blockOnControlFiles = false;\n+ blockOnInitialization = false;\n+ this.notifyAll();\n+ }\n+ }\n+\n+ public boolean blocked() {\n+ return blocked;\n+ }\n+\n+ private synchronized boolean blockExecution() {\n+ logger.debug(\"Blocking execution\");\n+ boolean wasBlocked = false;\n+ try {\n+ while (blockOnDataFiles || blockOnControlFiles || blockOnInitialization) {\n+ blocked = true;\n+ this.wait();\n+ wasBlocked = true;\n+ }\n+ } catch (InterruptedException ex) {\n+ Thread.currentThread().interrupt();\n+ }\n+ logger.debug(\"Unblocking execution\");\n+ return wasBlocked;\n+ }\n+\n public class MockBlobStore extends BlobStoreWrapper {\n ConcurrentMap<String, AtomicLong> accessCounts = new ConcurrentHashMap<>();\n \n@@ -157,34 +199,6 @@ public BlobContainer blobContainer(BlobPath path) {\n return new MockBlobContainer(super.blobContainer(path));\n }\n \n- public synchronized void unblockExecution() {\n- if (blocked) {\n- blocked = false;\n- // Clean blocking flags, so we wouldn't try to block again\n- blockOnDataFiles = false;\n- blockOnControlFiles = false;\n- this.notifyAll();\n- }\n- }\n-\n- public boolean blocked() {\n- return blocked;\n- }\n-\n- private synchronized boolean blockExecution() {\n- boolean wasBlocked = false;\n- try {\n- while (blockOnDataFiles || blockOnControlFiles) {\n- blocked = true;\n- this.wait();\n- wasBlocked = true;\n- }\n- } catch (InterruptedException ex) {\n- Thread.currentThread().interrupt();\n- }\n- return wasBlocked;\n- }\n-\n private class MockBlobContainer extends BlobContainerWrapper {\n private MessageDigest digest;\n ", "filename": "src/test/java/org/elasticsearch/snapshots/mockstore/MockRepository.java", "status": "modified" } ] }
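A minimal sketch of how a caller can opt into partial snapshots with the Java API, so that an index deleted while the snapshot is being initialized is marked MISSING instead of failing the whole operation. The repository name, snapshot name, and index pattern below are placeholders, and `client` is assumed to be an already-connected client; the builder calls mirror those used in the `deleteIndexDuringSnapshotTest` above.

```java
import org.elasticsearch.action.admin.cluster.snapshots.create.CreateSnapshotResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.snapshots.SnapshotState;

public class PartialSnapshotSketch {
    public static CreateSnapshotResponse snapshotAllowingPartial(Client client) {
        // With partial=true, shards of an index deleted during snapshot initialization
        // are marked MISSING and the snapshot completes as PARTIAL rather than FAILED.
        CreateSnapshotResponse response = client.admin().cluster()
                .prepareCreateSnapshot("my_backup", "snap-1")   // placeholder names
                .setIndices("test-idx-*")
                .setPartial(true)
                .setWaitForCompletion(true)
                .get();
        // The test above asserts PARTIAL (some successful shards) in this scenario.
        assert response.getSnapshotInfo().state() != SnapshotState.FAILED;
        return response;
    }
}
```

Without `setPartial(true)` the same scenario is expected to surface as `SnapshotState.FAILED`, which is exactly what the non-partial branch of the test asserts.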
{ "body": "Unfortunately, we don't have exact details/timings for how this occurred. From the information received (version 1.3.2), the S3 bucket tied to a S3 repository was inadvertently deleted at some point, and possibly while a snapshot was running. As a result, when attempting to use the PUT command to update the repository definition in the cluster, the following message was received:\n\n```\n{\"error\":\"RemoteTransportException[[elasticsearch-elasticsearch7.localdomain][inet[/IP:9300]][cluster/repository/put]]; nested: ElasticsearchIllegalStateException[trying to modify or unregister repository that is currently used ]; \",\"status\":500}\n```\n\nAn attempt to use _all to retrieve the snapshots to see their statuses from the repository resulted in a bucket not found error:\n\n```\ncurl -XGET \"http://host:9200/_snapshot/production/_all\" \n{\"error\":\"RemoteTransportException[[elasticsearch-elasticsearch7.localdomain][inet[/IP:9300]][cluster/snapshot/get]]; nested: AmazonS3Exception[The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucket; Request ID: 87CD758CA1776085)]; \",\"status\":500}\n```\n\nAn attempt to delete the problem repository to start over failed as well:\n\n```\ncurl -XDELETE http://host:9200/_snapshot/production?pretty \n{ \n\"error\" : \"RemoteTransportException[[elasticsearch-elasticsearch7.localdomain][inet[/IP:9300]][cluster/repository/delete]]; nested: ElasticsearchIllegalStateException[trying to modify or unregister repository that is currently used ]; \", \n\"status\" : 500 \n} \n```\n\nAn attempt to create a different/new repository to take a snapshot also did not work:\n\n```\n'{\"error\":\"RemoteTransportException[[elasticsearch-elasticsearch7.localdomain][inet[/IP:9300]][cluster/snapshot/create]]; nested: ConcurrentSnapshotExecutionException[[production-v2:1209201414] a snapshot is already running]; \",\"status\":503}' \n```\n\nSince _all does not retrieve the outstanding snapshot information, we looked at the cluster state itself and saw that there is indeed a snapshot that appears to be stuck in INIT state:\n\n```\n \"snapshots\" : {\n \"snapshots\" : [ {\n \"repository\" : \"production\",\n \"snapshot\" : \"1207201400\",\n \"include_global_state\" : true,\n \"state\" : \"INIT\",\n \"indices\" : [\n```\n\nWe subsequently was able to delete the snapshot using the snapshot API to recover from this.\n\nSo it sounds like there may be an issue with the _all API not able to retrieve a snapshot that is still in the cluster state.\n", "comments": [], "number": 8887, "title": "Old snapshot in cluster state but cannot be retrieved via _all" }
{ "body": "Together with #8782 it should help in the situations simliar to #8887 by adding an ability to get information about currently running snapshot without accessing the repository itself.\n\nCloses #8887 and #7859\n", "number": 9400, "review_comments": [ { "body": "Can you make `\"_all\"` and `\"_current\"` public static strings somewhere appropriate so they can be used by the Java API?\n", "created_at": "2015-02-23T16:20:07Z" } ], "title": "Add ability to retrieve currently running snapshots" }
{ "commits": [ { "message": "Snapshot/Restore: add ability to retrieve currently running snapshots\n\nTogether with #8782 it should help in the situations simliar to #8887 by adding an ability to get information about currently running snapshot without accessing the repository itself.\n\nCloses #8887" } ], "files": [ { "diff": "@@ -176,6 +176,13 @@ All snapshots currently stored in the repository can be listed using the followi\n $ curl -XGET \"localhost:9200/_snapshot/my_backup/_all\"\n -----------------------------------\n \n+coming[2.0] A currently running snapshot can be retrieved using the following command:\n+\n+[source,shell]\n+-----------------------------------\n+$ curl -XGET \"localhost:9200/_snapshot/my_backup/_current\"\n+-----------------------------------\n+\n A snapshot can be deleted from the repository using the following command:\n \n [source,shell]", "filename": "docs/reference/modules/snapshots.asciidoc", "status": "modified" }, { "diff": "@@ -34,6 +34,9 @@\n */\n public class GetSnapshotsRequest extends MasterNodeOperationRequest<GetSnapshotsRequest> {\n \n+ public static final String ALL_SNAPSHOTS = \"_all\";\n+ public static final String CURRENT_SNAPSHOT = \"_current\";\n+\n private String repository;\n \n private String[] snapshots = Strings.EMPTY_ARRAY;", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java", "status": "modified" }, { "diff": "@@ -70,6 +70,16 @@ public GetSnapshotsRequestBuilder setSnapshots(String... snapshots) {\n return this;\n }\n \n+ /**\n+ * Makes the request to return the current snapshot\n+ *\n+ * @return this builder\n+ */\n+ public GetSnapshotsRequestBuilder setCurrentSnapshot() {\n+ request.snapshots(new String[] {GetSnapshotsRequest.CURRENT_SNAPSHOT});\n+ return this;\n+ }\n+\n /**\n * Adds additional snapshots to the list of snapshots to return\n *", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ public TransportGetSnapshotsAction(Settings settings, TransportService transport\n \n @Override\n protected String executor() {\n- return ThreadPool.Names.SNAPSHOT;\n+ return ThreadPool.Names.GENERIC;\n }\n \n @Override\n@@ -72,26 +72,35 @@ protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterS\n \n @Override\n protected void masterOperation(final GetSnapshotsRequest request, ClusterState state, final ActionListener<GetSnapshotsResponse> listener) throws ElasticsearchException {\n- SnapshotId[] snapshotIds = new SnapshotId[request.snapshots().length];\n- for (int i = 0; i < snapshotIds.length; i++) {\n- snapshotIds[i] = new SnapshotId(request.repository(), request.snapshots()[i]);\n- }\n-\n try {\n ImmutableList.Builder<SnapshotInfo> snapshotInfoBuilder = ImmutableList.builder();\n- if (snapshotIds.length > 0) {\n- for (SnapshotId snapshotId : snapshotIds) {\n- snapshotInfoBuilder.add(new SnapshotInfo(snapshotsService.snapshot(snapshotId)));\n- }\n- } else {\n+ if (isAllSnapshots(request.snapshots())) {\n ImmutableList<Snapshot> snapshots = snapshotsService.snapshots(request.repository());\n for (Snapshot snapshot : snapshots) {\n snapshotInfoBuilder.add(new SnapshotInfo(snapshot));\n }\n+ } else if (isCurrentSnapshots(request.snapshots())) {\n+ ImmutableList<Snapshot> snapshots = snapshotsService.currentSnapshots(request.repository());\n+ for (Snapshot snapshot : snapshots) {\n+ snapshotInfoBuilder.add(new SnapshotInfo(snapshot));\n+ }\n+ } else 
{\n+ for (int i = 0; i < request.snapshots().length; i++) {\n+ SnapshotId snapshotId = new SnapshotId(request.repository(), request.snapshots()[i]);\n+ snapshotInfoBuilder.add(new SnapshotInfo(snapshotsService.snapshot(snapshotId)));\n+ }\n }\n listener.onResponse(new GetSnapshotsResponse(snapshotInfoBuilder.build()));\n } catch (Throwable t) {\n listener.onFailure(t);\n }\n }\n+\n+ private boolean isAllSnapshots(String[] snapshots) {\n+ return (snapshots.length == 0) || (snapshots.length == 1 && GetSnapshotsRequest.ALL_SNAPSHOTS.equalsIgnoreCase(snapshots[0]));\n+ }\n+\n+ private boolean isCurrentSnapshots(String[] snapshots) {\n+ return (snapshots.length == 1 && GetSnapshotsRequest.CURRENT_SNAPSHOT.equalsIgnoreCase(snapshots[0]));\n+ }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -47,9 +47,6 @@ public RestGetSnapshotsAction(Settings settings, RestController controller, Clie\n public void handleRequest(final RestRequest request, final RestChannel channel, final Client client) {\n String repository = request.param(\"repository\");\n String[] snapshots = request.paramAsStringArray(\"snapshot\", Strings.EMPTY_ARRAY);\n- if (snapshots.length == 1 && \"_all\".equalsIgnoreCase(snapshots[0])) {\n- snapshots = Strings.EMPTY_ARRAY;\n- }\n GetSnapshotsRequest getSnapshotsRequest = getSnapshotsRequest(repository).snapshots(snapshots);\n getSnapshotsRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", getSnapshotsRequest.masterNodeTimeout()));\n client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestToXContentListener<GetSnapshotsResponse>(channel));", "filename": "src/main/java/org/elasticsearch/rest/action/admin/cluster/snapshots/get/RestGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -161,6 +161,22 @@ public ImmutableList<Snapshot> snapshots(String repositoryName) {\n return ImmutableList.copyOf(snapshotList);\n }\n \n+ /**\n+ * Returns a list of currently running snapshots from repository sorted by snapshot creation date\n+ *\n+ * @param repositoryName repository name\n+ * @return list of snapshots\n+ */\n+ public ImmutableList<Snapshot> currentSnapshots(String repositoryName) {\n+ List<Snapshot> snapshotList = newArrayList();\n+ ImmutableList<SnapshotMetaData.Entry> entries = currentSnapshots(repositoryName, null);\n+ for (SnapshotMetaData.Entry entry : entries) {\n+ snapshotList.add(inProgressSnapshot(entry));\n+ }\n+ CollectionUtil.timSort(snapshotList);\n+ return ImmutableList.copyOf(snapshotList);\n+ }\n+\n /**\n * Initializes the snapshotting process.\n * <p/>", "filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -1325,7 +1325,6 @@ public void snapshotStatusTest() throws Exception {\n // Pick one node and block it\n String blockedNode = blockNodeWithIndex(\"test-idx\");\n \n-\n logger.info(\"--> snapshot\");\n client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(false).setIndices(\"test-idx\").get();\n \n@@ -1358,10 +1357,16 @@ public void snapshotStatusTest() throws Exception {\n }\n }\n \n+ logger.info(\"--> checking that _current returns the currently running snapshot\", blockedNode);\n+ GetSnapshotsResponse getResponse = client.admin().cluster().prepareGetSnapshots(\"test-repo\").setCurrentSnapshot().execute().actionGet();\n+ assertThat(getResponse.getSnapshots().size(), equalTo(1));\n+ SnapshotInfo snapshotInfo = 
getResponse.getSnapshots().get(0);\n+ assertThat(snapshotInfo.state(), equalTo(SnapshotState.IN_PROGRESS));\n+\n logger.info(\"--> unblocking blocked node\");\n unblockNode(blockedNode);\n \n- SnapshotInfo snapshotInfo = waitForCompletion(\"test-repo\", \"test-snap\", TimeValue.timeValueSeconds(600));\n+ snapshotInfo = waitForCompletion(\"test-repo\", \"test-snap\", TimeValue.timeValueSeconds(600));\n logger.info(\"Number of failed shards [{}]\", snapshotInfo.shardFailures().size());\n logger.info(\"--> done\");\n \n@@ -1381,6 +1386,9 @@ public void snapshotStatusTest() throws Exception {\n response = client.admin().cluster().prepareSnapshotStatus().execute().actionGet();\n assertThat(response.getSnapshots().size(), equalTo(0));\n \n+ logger.info(\"--> checking that _current no longer returns the snapshot\", blockedNode);\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").addSnapshots(\"_current\").execute().actionGet().getSnapshots().isEmpty(), equalTo(true));\n+\n try {\n client.admin().cluster().prepareSnapshotStatus(\"test-repo\").addSnapshots(\"test-snap-doesnt-exist\").execute().actionGet();\n fail();", "filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java", "status": "modified" } ] }
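A sketch of the Java API usage implied by the diff above (the new `setCurrentSnapshot()` builder method and the `_current` keyword). The repository name is a placeholder and `client` is assumed to be an already-connected client.

```java
import org.elasticsearch.action.admin.cluster.snapshots.get.GetSnapshotsResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.snapshots.SnapshotInfo;

public class CurrentSnapshotsSketch {
    public static void printRunningSnapshots(Client client) {
        // Equivalent to GET /_snapshot/my_backup/_current; the answer comes from the
        // cluster state rather than the repository, so it works even when the
        // repository itself is unreachable.
        GetSnapshotsResponse response = client.admin().cluster()
                .prepareGetSnapshots("my_backup")
                .setCurrentSnapshot()              // or .addSnapshots("_current")
                .get();
        for (SnapshotInfo snapshot : response.getSnapshots()) {
            System.out.println(snapshot.name() + " -> " + snapshot.state());
        }
    }
}
```

When no snapshot is running the returned list is simply empty, as the updated test above verifies.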
{ "body": "This commit removes creation of in-progress snapshot file and makes creation of the final snapshot file atomic.\n\nFixes #8696\n", "comments": [ { "body": "w00t! - I will review\n", "created_at": "2014-12-04T17:12:21Z" }, { "body": "I really like it. I think we should have a small unittest in `BlobStoreTest` but other than that LGTM\n", "created_at": "2014-12-04T21:39:08Z" }, { "body": "The more I dig into the snapshot/restore code, the more I like it :) \n\nLGTM, left two comments.\n", "created_at": "2014-12-05T14:10:31Z" } ], "number": 8782, "title": "Switch to write once mode for snapshot metadata files" }
{ "body": "Together with #8782 it should help in the situations simliar to #8887 by adding an ability to get information about currently running snapshot without accessing the repository itself.\n\nCloses #8887 and #7859\n", "number": 9400, "review_comments": [ { "body": "Can you make `\"_all\"` and `\"_current\"` public static strings somewhere appropriate so they can be used by the Java API?\n", "created_at": "2015-02-23T16:20:07Z" } ], "title": "Add ability to retrieve currently running snapshots" }
{ "commits": [ { "message": "Snapshot/Restore: add ability to retrieve currently running snapshots\n\nTogether with #8782 it should help in the situations simliar to #8887 by adding an ability to get information about currently running snapshot without accessing the repository itself.\n\nCloses #8887" } ], "files": [ { "diff": "@@ -176,6 +176,13 @@ All snapshots currently stored in the repository can be listed using the followi\n $ curl -XGET \"localhost:9200/_snapshot/my_backup/_all\"\n -----------------------------------\n \n+coming[2.0] A currently running snapshot can be retrieved using the following command:\n+\n+[source,shell]\n+-----------------------------------\n+$ curl -XGET \"localhost:9200/_snapshot/my_backup/_current\"\n+-----------------------------------\n+\n A snapshot can be deleted from the repository using the following command:\n \n [source,shell]", "filename": "docs/reference/modules/snapshots.asciidoc", "status": "modified" }, { "diff": "@@ -34,6 +34,9 @@\n */\n public class GetSnapshotsRequest extends MasterNodeOperationRequest<GetSnapshotsRequest> {\n \n+ public static final String ALL_SNAPSHOTS = \"_all\";\n+ public static final String CURRENT_SNAPSHOT = \"_current\";\n+\n private String repository;\n \n private String[] snapshots = Strings.EMPTY_ARRAY;", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java", "status": "modified" }, { "diff": "@@ -70,6 +70,16 @@ public GetSnapshotsRequestBuilder setSnapshots(String... snapshots) {\n return this;\n }\n \n+ /**\n+ * Makes the request to return the current snapshot\n+ *\n+ * @return this builder\n+ */\n+ public GetSnapshotsRequestBuilder setCurrentSnapshot() {\n+ request.snapshots(new String[] {GetSnapshotsRequest.CURRENT_SNAPSHOT});\n+ return this;\n+ }\n+\n /**\n * Adds additional snapshots to the list of snapshots to return\n *", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ public TransportGetSnapshotsAction(Settings settings, TransportService transport\n \n @Override\n protected String executor() {\n- return ThreadPool.Names.SNAPSHOT;\n+ return ThreadPool.Names.GENERIC;\n }\n \n @Override\n@@ -72,26 +72,35 @@ protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterS\n \n @Override\n protected void masterOperation(final GetSnapshotsRequest request, ClusterState state, final ActionListener<GetSnapshotsResponse> listener) throws ElasticsearchException {\n- SnapshotId[] snapshotIds = new SnapshotId[request.snapshots().length];\n- for (int i = 0; i < snapshotIds.length; i++) {\n- snapshotIds[i] = new SnapshotId(request.repository(), request.snapshots()[i]);\n- }\n-\n try {\n ImmutableList.Builder<SnapshotInfo> snapshotInfoBuilder = ImmutableList.builder();\n- if (snapshotIds.length > 0) {\n- for (SnapshotId snapshotId : snapshotIds) {\n- snapshotInfoBuilder.add(new SnapshotInfo(snapshotsService.snapshot(snapshotId)));\n- }\n- } else {\n+ if (isAllSnapshots(request.snapshots())) {\n ImmutableList<Snapshot> snapshots = snapshotsService.snapshots(request.repository());\n for (Snapshot snapshot : snapshots) {\n snapshotInfoBuilder.add(new SnapshotInfo(snapshot));\n }\n+ } else if (isCurrentSnapshots(request.snapshots())) {\n+ ImmutableList<Snapshot> snapshots = snapshotsService.currentSnapshots(request.repository());\n+ for (Snapshot snapshot : snapshots) {\n+ snapshotInfoBuilder.add(new SnapshotInfo(snapshot));\n+ }\n+ } else 
{\n+ for (int i = 0; i < request.snapshots().length; i++) {\n+ SnapshotId snapshotId = new SnapshotId(request.repository(), request.snapshots()[i]);\n+ snapshotInfoBuilder.add(new SnapshotInfo(snapshotsService.snapshot(snapshotId)));\n+ }\n }\n listener.onResponse(new GetSnapshotsResponse(snapshotInfoBuilder.build()));\n } catch (Throwable t) {\n listener.onFailure(t);\n }\n }\n+\n+ private boolean isAllSnapshots(String[] snapshots) {\n+ return (snapshots.length == 0) || (snapshots.length == 1 && GetSnapshotsRequest.ALL_SNAPSHOTS.equalsIgnoreCase(snapshots[0]));\n+ }\n+\n+ private boolean isCurrentSnapshots(String[] snapshots) {\n+ return (snapshots.length == 1 && GetSnapshotsRequest.CURRENT_SNAPSHOT.equalsIgnoreCase(snapshots[0]));\n+ }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -47,9 +47,6 @@ public RestGetSnapshotsAction(Settings settings, RestController controller, Clie\n public void handleRequest(final RestRequest request, final RestChannel channel, final Client client) {\n String repository = request.param(\"repository\");\n String[] snapshots = request.paramAsStringArray(\"snapshot\", Strings.EMPTY_ARRAY);\n- if (snapshots.length == 1 && \"_all\".equalsIgnoreCase(snapshots[0])) {\n- snapshots = Strings.EMPTY_ARRAY;\n- }\n GetSnapshotsRequest getSnapshotsRequest = getSnapshotsRequest(repository).snapshots(snapshots);\n getSnapshotsRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", getSnapshotsRequest.masterNodeTimeout()));\n client.admin().cluster().getSnapshots(getSnapshotsRequest, new RestToXContentListener<GetSnapshotsResponse>(channel));", "filename": "src/main/java/org/elasticsearch/rest/action/admin/cluster/snapshots/get/RestGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -161,6 +161,22 @@ public ImmutableList<Snapshot> snapshots(String repositoryName) {\n return ImmutableList.copyOf(snapshotList);\n }\n \n+ /**\n+ * Returns a list of currently running snapshots from repository sorted by snapshot creation date\n+ *\n+ * @param repositoryName repository name\n+ * @return list of snapshots\n+ */\n+ public ImmutableList<Snapshot> currentSnapshots(String repositoryName) {\n+ List<Snapshot> snapshotList = newArrayList();\n+ ImmutableList<SnapshotMetaData.Entry> entries = currentSnapshots(repositoryName, null);\n+ for (SnapshotMetaData.Entry entry : entries) {\n+ snapshotList.add(inProgressSnapshot(entry));\n+ }\n+ CollectionUtil.timSort(snapshotList);\n+ return ImmutableList.copyOf(snapshotList);\n+ }\n+\n /**\n * Initializes the snapshotting process.\n * <p/>", "filename": "src/main/java/org/elasticsearch/snapshots/SnapshotsService.java", "status": "modified" }, { "diff": "@@ -1325,7 +1325,6 @@ public void snapshotStatusTest() throws Exception {\n // Pick one node and block it\n String blockedNode = blockNodeWithIndex(\"test-idx\");\n \n-\n logger.info(\"--> snapshot\");\n client.admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap\").setWaitForCompletion(false).setIndices(\"test-idx\").get();\n \n@@ -1358,10 +1357,16 @@ public void snapshotStatusTest() throws Exception {\n }\n }\n \n+ logger.info(\"--> checking that _current returns the currently running snapshot\", blockedNode);\n+ GetSnapshotsResponse getResponse = client.admin().cluster().prepareGetSnapshots(\"test-repo\").setCurrentSnapshot().execute().actionGet();\n+ assertThat(getResponse.getSnapshots().size(), equalTo(1));\n+ SnapshotInfo snapshotInfo = 
getResponse.getSnapshots().get(0);\n+ assertThat(snapshotInfo.state(), equalTo(SnapshotState.IN_PROGRESS));\n+\n logger.info(\"--> unblocking blocked node\");\n unblockNode(blockedNode);\n \n- SnapshotInfo snapshotInfo = waitForCompletion(\"test-repo\", \"test-snap\", TimeValue.timeValueSeconds(600));\n+ snapshotInfo = waitForCompletion(\"test-repo\", \"test-snap\", TimeValue.timeValueSeconds(600));\n logger.info(\"Number of failed shards [{}]\", snapshotInfo.shardFailures().size());\n logger.info(\"--> done\");\n \n@@ -1381,6 +1386,9 @@ public void snapshotStatusTest() throws Exception {\n response = client.admin().cluster().prepareSnapshotStatus().execute().actionGet();\n assertThat(response.getSnapshots().size(), equalTo(0));\n \n+ logger.info(\"--> checking that _current no longer returns the snapshot\", blockedNode);\n+ assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").addSnapshots(\"_current\").execute().actionGet().getSnapshots().isEmpty(), equalTo(true));\n+\n try {\n client.admin().cluster().prepareSnapshotStatus(\"test-repo\").addSnapshots(\"test-snap-doesnt-exist\").execute().actionGet();\n fail();", "filename": "src/test/java/org/elasticsearch/snapshots/SharedClusterSnapshotRestoreTests.java", "status": "modified" } ] }
{ "body": "I'm testing span queries on percolator and encountered a problem. It seems that percolator is ignoring position_offset_gap. Below there is an example.\n\n```\n$ curl \"localhost:9200/test1/_mapping/type?pretty\" -XPUT -d '{\"type\" : {\"properties\" : {\"field1\" : {\"type\" : \"string\", \"position_offset_gap\" : 100}}}}'\n$ curl \"localhost:9200/test1/.percolator\" -d '{\"type\" : \"type\", \"query\" : {\"span_near\" : {\"slop\" : 90, \"clauses\" : [{\"span_term\" : {\"field1\" : \"foo\"}}, {\"span_term\" : {\"field1\":\"bar\"}}]}}}'\n{\"_index\":\"test1\",\"_type\":\".percolator\",\"_id\":\"sK3bC-XYQ9SQ4qC3SpyXfg\",\"_version\":1,\"created\":true}\n$ curl \"localhost:9200/test1/type/_percolate?pretty\" -XGET -d '{\"doc\" : {\"field1\" : [\"foo\", \"bar\"]}}'{\n \"took\" : 3,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"total\" : 1,\n \"matches\" : [ {\n \"_index\" : \"test1\",\n \"_id\" : \"sK3bC-XYQ9SQ4qC3SpyXfg\"\n } ]\n}\n```\n\nTo verify the mapping I also run queries on ES index\n\n```\n$ curl \"localhost:9200/test1/type\" -d '{\"field1\" : [\"foo\", \"bar\"]}'\n{\"_index\":\"test1\",\"_type\":\"type\",\"_id\":\"HPMTUjVtS0utxr2USt6UGA\",\"_version\":1,\"created\":true}\n$ curl \"localhost:9200/test1/type/_search?pretty\" -d '{\"query\" : {\"span_near\" : {\"slop\" : 90, \"clauses\" : [{\"span_term\" : {\"field1\" : \"foo\"}}, {\"span_term\" : {\"field1\":\"bar\"}}]}}}'\n{\n \"took\" : 6,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : null,\n \"hits\" : [ ]\n }\n}\n```\n", "comments": [ { "body": "This resolves the problem\nhttps://github.com/elasticsearch/elasticsearch/pull/9387\n", "created_at": "2015-01-22T19:47:50Z" }, { "body": ":+1: \n", "created_at": "2015-01-22T22:59:38Z" } ], "number": 9386, "title": "Percolator is ignoring position_offset_gap" }
{ "body": "Fixes #9386\n", "number": 9388, "review_comments": [], "title": "Fix - Make percolator accept positionIncrementGap" }
{ "commits": [ { "message": "Fix - Make percolator accept positionIncrementGap\n\nFixes #9386" } ], "files": [ { "diff": "@@ -87,7 +87,7 @@ MemoryIndex indexDoc(ParseContext.Document d, Analyzer analyzer, MemoryIndex mem\n // like the indexer does\n TokenStream tokenStream = field.tokenStream(analyzer, null);\n if (tokenStream != null) {\n- memoryIndex.addField(field.name(), tokenStream, field.boost());\n+ memoryIndex.addField(field.name(), tokenStream, field.boost(), parsedDocument.analyzer().getPositionIncrementGap(field.name()));\n }\n } catch (IOException e) {\n throw new ElasticsearchException(\"Failed to create token stream\", e);", "filename": "src/main/java/org/elasticsearch/percolator/MultiDocumentPercolatorIndex.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ public void prepare(PercolateContext context, ParsedDocument parsedDocument) {\n // like the indexer does\n TokenStream tokenStream = field.tokenStream(parsedDocument.analyzer(), null);\n if (tokenStream != null) {\n- memoryIndex.addField(field.name(), tokenStream, field.boost());\n+ memoryIndex.addField(field.name(), tokenStream, field.boost(), parsedDocument.analyzer().getPositionIncrementGap(field.name()));\n }\n } catch (IOException e) {\n throw new ElasticsearchException(\"Failed to create token stream\", e);", "filename": "src/main/java/org/elasticsearch/percolator/SingleDocumentPercolatorIndex.java", "status": "modified" } ] }
{ "body": "I have issue with aggregation result then i use nested aggregation in terms aggregation.\nTested with ES 1.4.[1,2].\n\nCreate mapping and add documents\n\n```\nDELETE test/product/_mapping\nPOST test/product/_mapping\n{\n \"properties\": {\n \"categories\": {\n \"type\": \"long\"\n },\n \"name\": {\n \"type\": \"string\"\n },\n \"property\": {\n \"type\": \"nested\", \n \"properties\": {\n \"id\": {\n \"type\": \"long\"\n }\n }\n }\n }\n}\n\nPOST test/product\n{\n \"name\":\"product1\",\n \"categories\":[1,2,3,4],\n \"property\":[\n {\"id\":1}, \n {\"id\":2},\n {\"id\":3}\n ]\n}\n\nPOST test/product\n{\n \"name\":\"product2\",\n \"categories\":[1,2],\n \"property\":[\n {\"id\":1}, \n {\"id\":5},\n {\"id\":4}\n ]\n}\n```\n\nAggregation query\n\n```\nGET test/product/_search\n{\n \"size\": 0,\n \"aggs\": {\n \"category\": {\n \"terms\": {\"field\": \"categories\",\"size\": 0},\n \"aggs\": {\n \"property\": {\n \"nested\": {\"path\": \"property\"},\n \"aggs\": {\n \"property_id\": {\n \"terms\": {\"field\": \"property.id\",\"size\": 0}\n }\n }\n }\n }\n }\n }\n}\n```\n\nResult \n\n```\n...\n\"aggregations\": {\n \"category\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": 1,\n \"doc_count\": 2,\n \"property\": {\n \"doc_count\": 6,\n \"property_id\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": 1,\n \"doc_count\": 2\n },\n {\n \"key\": 2,\n \"doc_count\": 1\n },\n {\n \"key\": 3,\n \"doc_count\": 1\n },\n {\n \"key\": 4,\n \"doc_count\": 1\n },\n {\n \"key\": 5,\n \"doc_count\": 1\n }\n ]\n }\n }\n },\n {\n \"key\": 2,\n \"doc_count\": 2,\n \"property\": {\n \"doc_count\": 0,\n \"property_id\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": []\n }\n }\n },\n {\n \"key\": 3,\n \"doc_count\": 1,\n \"property\": {\n \"doc_count\": 0,\n \"property_id\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": []\n }\n }\n },\n {\n \"key\": 4,\n \"doc_count\": 1,\n \"property\": {\n \"doc_count\": 0,\n \"property_id\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": []\n }\n }\n }\n ]\n }\n }\n...\n```\n\nI have no sub aggregation result in aggregation \"category\" keys 2,3,4 \n", "comments": [ { "body": "Try to use a single shard (in settings), sometimes there is trouble with several shards and a few documents.\n", "created_at": "2015-01-15T21:54:57Z" }, { "body": "I temporarily move category aggregation to nested aggregation, and have correct result, but i need a hierarchy of results with categories at the top.\n\n``` json\nGET test/product/_search\n{\n \"size\": 0,\n \"aggs\": {\n \"property\": {\n \"nested\": {\"path\": \"property\"},\n \"aggs\": {\n \"property_id\": {\n \"terms\": {\"field\": \"property.id\",\"size\": 0},\n \"aggs\": {\n \"categories\": {\n \"reverse_nested\": {},\n \"aggs\": {\n \"categories\": {\n \"terms\": {\"field\": \"categories\",\"size\": 0}\n }\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nresult\n\n``` json\n...\n \"aggregations\": {\n \"property\": {\n \"doc_count\": 6,\n \"property_id\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": 1,\n \"doc_count\": 2,\n \"categories\": {\n \"doc_count\": 2,\n \"categories\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": 1,\n \"doc_count\": 2\n },\n {\n \"key\": 2,\n \"doc_count\": 2\n },\n {\n \"key\": 
3,\n \"doc_count\": 1\n },\n {\n \"key\": 4,\n \"doc_count\": 1\n }\n ]\n }\n }\n },\n {\n \"key\": 2,\n \"doc_count\": 1,\n \"categories\": {\n \"doc_count\": 1,\n \"categories\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": 1,\n \"doc_count\": 1\n },\n {\n \"key\": 2,\n \"doc_count\": 1\n },\n {\n \"key\": 3,\n \"doc_count\": 1\n },\n {\n \"key\": 4,\n \"doc_count\": 1\n }\n ]\n }\n }\n },\n {\n \"key\": 3,\n \"doc_count\": 1,\n \"categories\": {\n \"doc_count\": 1,\n \"categories\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": 1,\n \"doc_count\": 1\n },\n {\n \"key\": 2,\n \"doc_count\": 1\n },\n {\n \"key\": 3,\n \"doc_count\": 1\n },\n {\n \"key\": 4,\n \"doc_count\": 1\n }\n ]\n }\n }\n },\n {\n \"key\": 4,\n \"doc_count\": 1,\n \"categories\": {\n \"doc_count\": 1,\n \"categories\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": 1,\n \"doc_count\": 1\n },\n {\n \"key\": 2,\n \"doc_count\": 1\n }\n ]\n }\n }\n },\n {\n \"key\": 5,\n \"doc_count\": 1,\n \"categories\": {\n \"doc_count\": 1,\n \"categories\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": 1,\n \"doc_count\": 1\n },\n {\n \"key\": 2,\n \"doc_count\": 1\n }\n ]\n }\n }\n }\n ]\n }\n }\n }\n...\n```\n", "created_at": "2015-01-16T12:17:39Z" }, { "body": "@kvspb Thanks for reporting, this is indeed a bug.\n", "created_at": "2015-01-16T12:59:28Z" } ], "number": 9317, "title": "Nested aggregation in terms aggregation " }
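The workaround from the comment above, sketched with the Java aggregation builders (names follow the report, and `client` is assumed to be an already-connected client): start inside the nested `property` documents and join back to the root level with `reverse_nested`, instead of wrapping `nested` inside the `categories` terms aggregation.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;

import static org.elasticsearch.search.aggregations.AggregationBuilders.nested;
import static org.elasticsearch.search.aggregations.AggregationBuilders.reverseNested;
import static org.elasticsearch.search.aggregations.AggregationBuilders.terms;

public class ReverseNestedWorkaround {
    public static SearchResponse propertiesThenCategories(Client client) {
        // nested -> terms(property.id) -> reverse_nested -> terms(categories),
        // which returned correct counts even before the nested aggregator fix.
        return client.prepareSearch("test").setTypes("product")
                .setSize(0)
                .addAggregation(
                        nested("property").path("property").subAggregation(
                                terms("property_id").field("property.id").size(0).subAggregation(
                                        reverseNested("categories").subAggregation(
                                                terms("categories").field("categories").size(0)))))
                .get();
    }
}
```

The trade-off is that the hierarchy is inverted (properties at the top), which is why the reporter still needed the terms-then-nested form that the fix below repairs.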
{ "body": "This bug was introduced by #8454 which allowed the childFilter to only be consumed once. By adding the child docid buffering multiple buckets can now be emitted by the same doc id. This child docid buffering only happens in the score of the current root document, so the amount of child doc ids buffered is small.\n\nCloses #9317\n", "number": 9346, "review_comments": [ { "body": "Could childDocIdBuffers store a reusable iterator? Would avoid the object creation overhead every time a parent is re-collected.\n", "created_at": "2015-01-23T15:30:43Z" }, { "body": "Not a critical part of the test but presumably this should be named \"product2\" for clarity\n", "created_at": "2015-01-23T16:15:15Z" }, { "body": "Maybe pull the search context from the aggregation context instead? It does not make any difference today but I think it would help if we ever have aggregation unit tests.\n", "created_at": "2015-01-24T13:38:21Z" }, { "body": "I'm wondering that it might be even easier to return an IntList instead of an iterator?\n", "created_at": "2015-01-24T13:41:00Z" }, { "body": "+1I like that idea. I'll update the PR with that change.\n", "created_at": "2015-01-25T14:33:59Z" }, { "body": "s/getChildIterator/getChildren/ ?\n", "created_at": "2015-01-26T11:53:18Z" }, { "body": "good, point, will fix this and push it.\n", "created_at": "2015-01-26T13:52:59Z" } ], "title": "Fix handling of multiple buckets being emitted for the same parent doc id in nested aggregation" }
{ "commits": [ { "message": "Nested aggregator: Fix handling of multiple buckets being emitted for the same parent doc id.\n\nThis bug was introduced by #8454 which allowed the childFilter to only be consumed once. By adding the child docid buffering multiple buckets can now be emitted by the same doc id. This child docid buffering only happens in the scope of the current root document, so the amount of child doc ids buffered is small.\n\nCloses #9317\nCloses #9346" } ], "files": [ { "diff": "@@ -18,6 +18,8 @@\n */\n package org.elasticsearch.search.aggregations.bucket.nested;\n \n+import com.carrotsearch.hppc.IntArrayList;\n+import com.carrotsearch.hppc.IntObjectOpenHashMap;\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n@@ -46,9 +48,12 @@ public class NestedAggregator extends SingleBucketAggregator implements ReaderCo\n \n private DocIdSetIterator childDocs;\n private FixedBitSet parentDocs;\n-\n private AtomicReaderContext reader;\n \n+ private FixedBitSet rootDocs;\n+ private int currentRootDoc = -1;\n+ private final IntObjectOpenHashMap<IntArrayList> childDocIdBuffers = new IntObjectOpenHashMap<>();\n+\n public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parentAggregator) {\n super(name, factories, aggregationContext, parentAggregator);\n this.parentAggregator = parentAggregator;\n@@ -79,6 +84,7 @@ public void setNextReader(AtomicReaderContext reader) {\n } else {\n childDocs = childDocIdSet.iterator();\n }\n+ rootDocs = context.searchContext().fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE).getDocIdSet(reader, null);\n } catch (IOException ioe) {\n throw new AggregationExecutionException(\"Failed to aggregate [\" + name + \"]\", ioe);\n }\n@@ -109,22 +115,22 @@ public void collect(int parentDoc, long bucketOrd) throws IOException {\n parentDocs = parentFilter.getDocIdSet(reader, null);\n }\n \n- int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1);\n- int childDocId;\n- if (childDocs.docID() > prevParentDoc) {\n- childDocId = childDocs.docID();\n- } else {\n- childDocId = childDocs.advance(prevParentDoc + 1);\n- }\n-\n int numChildren = 0;\n- for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) {\n+ IntArrayList iterator = getChildren(parentDoc);\n+ final int[] buffer = iterator.buffer;\n+ final int size = iterator.size();\n+ for (int i = 0; i < size; i++) {\n numChildren++;\n- collectBucketNoCounts(childDocId, bucketOrd);\n+ collectBucketNoCounts(buffer[i], bucketOrd);\n }\n incrementBucketDocCount(bucketOrd, numChildren);\n }\n \n+ @Override\n+ protected void doClose() {\n+ childDocIdBuffers.clear();\n+ }\n+\n @Override\n public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n return new InternalNested(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal));\n@@ -183,4 +189,42 @@ public InternalAggregation buildEmptyAggregation() {\n }\n }\n }\n+\n+ // The aggs framework can collect buckets for the same parent doc id more than once and because the children docs\n+ // can only be consumed once we need to buffer the child docs. We only need to buffer child docs in the scope\n+ // of the current root doc.\n+\n+ // Examples:\n+ // 1) nested agg wrapped is by terms agg and multiple buckets per document are emitted\n+ // 2) Multiple nested fields are defined. 
A nested agg joins back to another nested agg via the reverse_nested agg.\n+ // For each child in the first nested agg the second nested agg gets invoked with the same buckets / docids\n+ private IntArrayList getChildren(final int parentDocId) throws IOException {\n+ int rootDocId = rootDocs.nextSetBit(parentDocId);\n+ if (currentRootDoc == rootDocId) {\n+ final IntArrayList childDocIdBuffer = childDocIdBuffers.get(parentDocId);\n+ if (childDocIdBuffer != null) {\n+ return childDocIdBuffer;\n+ } else {\n+ // here we translate the parent doc to a list of its nested docs,\n+ // and then collect buckets for every one of them so they'll be collected\n+ final IntArrayList newChildDocIdBuffer = new IntArrayList();\n+ childDocIdBuffers.put(parentDocId, newChildDocIdBuffer);\n+ int prevParentDoc = parentDocs.prevSetBit(parentDocId - 1);\n+ int childDocId;\n+ if (childDocs.docID() > prevParentDoc) {\n+ childDocId = childDocs.docID();\n+ } else {\n+ childDocId = childDocs.advance(prevParentDoc + 1);\n+ }\n+ for (; childDocId < parentDocId; childDocId = childDocs.nextDoc()) {\n+ newChildDocIdBuffer.add(childDocId);\n+ }\n+ return newChildDocIdBuffer;\n+ }\n+ } else {\n+ this.currentRootDoc = rootDocId;\n+ childDocIdBuffers.clear();\n+ return getChildren(parentDocId);\n+ }\n+ }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java", "status": "modified" }, { "diff": "@@ -449,4 +449,90 @@ public void testParentFilterResolvedCorrectly() throws Exception {\n tags = nestedTags.getAggregations().get(\"tag\");\n assertThat(tags.getBuckets().size(), equalTo(0)); // and this must be empty\n }\n+\n+ @Test\n+ public void nestedSameDocIdProcessedMultipleTime() throws Exception {\n+ assertAcked(\n+ prepareCreate(\"idx4\")\n+ .setSettings(ImmutableSettings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0))\n+ .addMapping(\"product\", \"categories\", \"type=string\", \"name\", \"type=string\", \"property\", \"type=nested\")\n+ );\n+\n+ client().prepareIndex(\"idx4\", \"product\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"name\", \"product1\")\n+ .field(\"categories\", \"1\", \"2\", \"3\", \"4\")\n+ .startArray(\"property\")\n+ .startObject().field(\"id\", 1).endObject()\n+ .startObject().field(\"id\", 2).endObject()\n+ .startObject().field(\"id\", 3).endObject()\n+ .endArray()\n+ .endObject()).get();\n+ client().prepareIndex(\"idx4\", \"product\", \"2\").setSource(jsonBuilder().startObject()\n+ .field(\"name\", \"product2\")\n+ .field(\"categories\", \"1\", \"2\")\n+ .startArray(\"property\")\n+ .startObject().field(\"id\", 1).endObject()\n+ .startObject().field(\"id\", 5).endObject()\n+ .startObject().field(\"id\", 4).endObject()\n+ .endArray()\n+ .endObject()).get();\n+ refresh();\n+\n+ SearchResponse response = client().prepareSearch(\"idx4\").setTypes(\"product\")\n+ .addAggregation(terms(\"category\").field(\"categories\").subAggregation(\n+ nested(\"property\").path(\"property\").subAggregation(\n+ terms(\"property_id\").field(\"property.id\")\n+ )\n+ ))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 2);\n+\n+ Terms category = response.getAggregations().get(\"category\");\n+ assertThat(category.getBuckets().size(), equalTo(4));\n+\n+ Terms.Bucket bucket = category.getBucketByKey(\"1\");\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+ Nested property = bucket.getAggregations().get(\"property\");\n+ assertThat(property.getDocCount(), equalTo(6l));\n+ Terms propertyId = 
property.getAggregations().get(\"property_id\");\n+ assertThat(propertyId.getBuckets().size(), equalTo(5));\n+ assertThat(propertyId.getBucketByKey(\"1\").getDocCount(), equalTo(2l));\n+ assertThat(propertyId.getBucketByKey(\"2\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"3\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"4\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"5\").getDocCount(), equalTo(1l));\n+\n+ bucket = category.getBucketByKey(\"2\");\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+ property = bucket.getAggregations().get(\"property\");\n+ assertThat(property.getDocCount(), equalTo(6l));\n+ propertyId = property.getAggregations().get(\"property_id\");\n+ assertThat(propertyId.getBuckets().size(), equalTo(5));\n+ assertThat(propertyId.getBucketByKey(\"1\").getDocCount(), equalTo(2l));\n+ assertThat(propertyId.getBucketByKey(\"2\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"3\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"4\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"5\").getDocCount(), equalTo(1l));\n+\n+ bucket = category.getBucketByKey(\"3\");\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ property = bucket.getAggregations().get(\"property\");\n+ assertThat(property.getDocCount(), equalTo(3l));\n+ propertyId = property.getAggregations().get(\"property_id\");\n+ assertThat(propertyId.getBuckets().size(), equalTo(3));\n+ assertThat(propertyId.getBucketByKey(\"1\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"2\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"3\").getDocCount(), equalTo(1l));\n+\n+ bucket = category.getBucketByKey(\"4\");\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ property = bucket.getAggregations().get(\"property\");\n+ assertThat(property.getDocCount(), equalTo(3l));\n+ propertyId = property.getAggregations().get(\"property_id\");\n+ assertThat(propertyId.getBuckets().size(), equalTo(3));\n+ assertThat(propertyId.getBucketByKey(\"1\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"2\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"3\").getDocCount(), equalTo(1l));\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/NestedTests.java", "status": "modified" } ] }
{ "body": "By executing in docid order the child level filters don't require random access bitset any more and can use normal docid set iterators. Also the child filters don't need the bitset cache anymore and can rely on the normal filter cache.\n\nNote: PR is against 1.x branch\n", "comments": [ { "body": "@martijnvg I like the change and just left one minor comment about a potential simplification.\n", "created_at": "2014-11-14T14:04:42Z" }, { "body": "@jpountz I updated the PR with your simplification and it looks much better now!\n", "created_at": "2014-11-14T16:00:38Z" }, { "body": "LGTM, thanks @martijnvg !\n", "created_at": "2014-11-14T16:01:37Z" } ], "number": 8454, "title": "Change nested agg to execute in doc id order" }
{ "body": "This bug was introduced by #8454 which allowed the childFilter to only be consumed once. By adding the child docid buffering multiple buckets can now be emitted by the same doc id. This child docid buffering only happens in the score of the current root document, so the amount of child doc ids buffered is small.\n\nCloses #9317\n", "number": 9346, "review_comments": [ { "body": "Could childDocIdBuffers store a reusable iterator? Would avoid the object creation overhead every time a parent is re-collected.\n", "created_at": "2015-01-23T15:30:43Z" }, { "body": "Not a critical part of the test but presumably this should be named \"product2\" for clarity\n", "created_at": "2015-01-23T16:15:15Z" }, { "body": "Maybe pull the search context from the aggregation context instead? It does not make any difference today but I think it would help if we ever have aggregation unit tests.\n", "created_at": "2015-01-24T13:38:21Z" }, { "body": "I'm wondering that it might be even easier to return an IntList instead of an iterator?\n", "created_at": "2015-01-24T13:41:00Z" }, { "body": "+1I like that idea. I'll update the PR with that change.\n", "created_at": "2015-01-25T14:33:59Z" }, { "body": "s/getChildIterator/getChildren/ ?\n", "created_at": "2015-01-26T11:53:18Z" }, { "body": "good, point, will fix this and push it.\n", "created_at": "2015-01-26T13:52:59Z" } ], "title": "Fix handling of multiple buckets being emitted for the same parent doc id in nested aggregation" }
{ "commits": [ { "message": "Nested aggregator: Fix handling of multiple buckets being emitted for the same parent doc id.\n\nThis bug was introduced by #8454 which allowed the childFilter to only be consumed once. By adding the child docid buffering multiple buckets can now be emitted by the same doc id. This child docid buffering only happens in the scope of the current root document, so the amount of child doc ids buffered is small.\n\nCloses #9317\nCloses #9346" } ], "files": [ { "diff": "@@ -18,6 +18,8 @@\n */\n package org.elasticsearch.search.aggregations.bucket.nested;\n \n+import com.carrotsearch.hppc.IntArrayList;\n+import com.carrotsearch.hppc.IntObjectOpenHashMap;\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n@@ -46,9 +48,12 @@ public class NestedAggregator extends SingleBucketAggregator implements ReaderCo\n \n private DocIdSetIterator childDocs;\n private FixedBitSet parentDocs;\n-\n private AtomicReaderContext reader;\n \n+ private FixedBitSet rootDocs;\n+ private int currentRootDoc = -1;\n+ private final IntObjectOpenHashMap<IntArrayList> childDocIdBuffers = new IntObjectOpenHashMap<>();\n+\n public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parentAggregator) {\n super(name, factories, aggregationContext, parentAggregator);\n this.parentAggregator = parentAggregator;\n@@ -79,6 +84,7 @@ public void setNextReader(AtomicReaderContext reader) {\n } else {\n childDocs = childDocIdSet.iterator();\n }\n+ rootDocs = context.searchContext().fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE).getDocIdSet(reader, null);\n } catch (IOException ioe) {\n throw new AggregationExecutionException(\"Failed to aggregate [\" + name + \"]\", ioe);\n }\n@@ -109,22 +115,22 @@ public void collect(int parentDoc, long bucketOrd) throws IOException {\n parentDocs = parentFilter.getDocIdSet(reader, null);\n }\n \n- int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1);\n- int childDocId;\n- if (childDocs.docID() > prevParentDoc) {\n- childDocId = childDocs.docID();\n- } else {\n- childDocId = childDocs.advance(prevParentDoc + 1);\n- }\n-\n int numChildren = 0;\n- for (; childDocId < parentDoc; childDocId = childDocs.nextDoc()) {\n+ IntArrayList iterator = getChildren(parentDoc);\n+ final int[] buffer = iterator.buffer;\n+ final int size = iterator.size();\n+ for (int i = 0; i < size; i++) {\n numChildren++;\n- collectBucketNoCounts(childDocId, bucketOrd);\n+ collectBucketNoCounts(buffer[i], bucketOrd);\n }\n incrementBucketDocCount(bucketOrd, numChildren);\n }\n \n+ @Override\n+ protected void doClose() {\n+ childDocIdBuffers.clear();\n+ }\n+\n @Override\n public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n return new InternalNested(name, bucketDocCount(owningBucketOrdinal), bucketAggregations(owningBucketOrdinal));\n@@ -183,4 +189,42 @@ public InternalAggregation buildEmptyAggregation() {\n }\n }\n }\n+\n+ // The aggs framework can collect buckets for the same parent doc id more than once and because the children docs\n+ // can only be consumed once we need to buffer the child docs. We only need to buffer child docs in the scope\n+ // of the current root doc.\n+\n+ // Examples:\n+ // 1) nested agg wrapped is by terms agg and multiple buckets per document are emitted\n+ // 2) Multiple nested fields are defined. 
A nested agg joins back to another nested agg via the reverse_nested agg.\n+ // For each child in the first nested agg the second nested agg gets invoked with the same buckets / docids\n+ private IntArrayList getChildren(final int parentDocId) throws IOException {\n+ int rootDocId = rootDocs.nextSetBit(parentDocId);\n+ if (currentRootDoc == rootDocId) {\n+ final IntArrayList childDocIdBuffer = childDocIdBuffers.get(parentDocId);\n+ if (childDocIdBuffer != null) {\n+ return childDocIdBuffer;\n+ } else {\n+ // here we translate the parent doc to a list of its nested docs,\n+ // and then collect buckets for every one of them so they'll be collected\n+ final IntArrayList newChildDocIdBuffer = new IntArrayList();\n+ childDocIdBuffers.put(parentDocId, newChildDocIdBuffer);\n+ int prevParentDoc = parentDocs.prevSetBit(parentDocId - 1);\n+ int childDocId;\n+ if (childDocs.docID() > prevParentDoc) {\n+ childDocId = childDocs.docID();\n+ } else {\n+ childDocId = childDocs.advance(prevParentDoc + 1);\n+ }\n+ for (; childDocId < parentDocId; childDocId = childDocs.nextDoc()) {\n+ newChildDocIdBuffer.add(childDocId);\n+ }\n+ return newChildDocIdBuffer;\n+ }\n+ } else {\n+ this.currentRootDoc = rootDocId;\n+ childDocIdBuffers.clear();\n+ return getChildren(parentDocId);\n+ }\n+ }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java", "status": "modified" }, { "diff": "@@ -449,4 +449,90 @@ public void testParentFilterResolvedCorrectly() throws Exception {\n tags = nestedTags.getAggregations().get(\"tag\");\n assertThat(tags.getBuckets().size(), equalTo(0)); // and this must be empty\n }\n+\n+ @Test\n+ public void nestedSameDocIdProcessedMultipleTime() throws Exception {\n+ assertAcked(\n+ prepareCreate(\"idx4\")\n+ .setSettings(ImmutableSettings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0))\n+ .addMapping(\"product\", \"categories\", \"type=string\", \"name\", \"type=string\", \"property\", \"type=nested\")\n+ );\n+\n+ client().prepareIndex(\"idx4\", \"product\", \"1\").setSource(jsonBuilder().startObject()\n+ .field(\"name\", \"product1\")\n+ .field(\"categories\", \"1\", \"2\", \"3\", \"4\")\n+ .startArray(\"property\")\n+ .startObject().field(\"id\", 1).endObject()\n+ .startObject().field(\"id\", 2).endObject()\n+ .startObject().field(\"id\", 3).endObject()\n+ .endArray()\n+ .endObject()).get();\n+ client().prepareIndex(\"idx4\", \"product\", \"2\").setSource(jsonBuilder().startObject()\n+ .field(\"name\", \"product2\")\n+ .field(\"categories\", \"1\", \"2\")\n+ .startArray(\"property\")\n+ .startObject().field(\"id\", 1).endObject()\n+ .startObject().field(\"id\", 5).endObject()\n+ .startObject().field(\"id\", 4).endObject()\n+ .endArray()\n+ .endObject()).get();\n+ refresh();\n+\n+ SearchResponse response = client().prepareSearch(\"idx4\").setTypes(\"product\")\n+ .addAggregation(terms(\"category\").field(\"categories\").subAggregation(\n+ nested(\"property\").path(\"property\").subAggregation(\n+ terms(\"property_id\").field(\"property.id\")\n+ )\n+ ))\n+ .get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 2);\n+\n+ Terms category = response.getAggregations().get(\"category\");\n+ assertThat(category.getBuckets().size(), equalTo(4));\n+\n+ Terms.Bucket bucket = category.getBucketByKey(\"1\");\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+ Nested property = bucket.getAggregations().get(\"property\");\n+ assertThat(property.getDocCount(), equalTo(6l));\n+ Terms propertyId = 
property.getAggregations().get(\"property_id\");\n+ assertThat(propertyId.getBuckets().size(), equalTo(5));\n+ assertThat(propertyId.getBucketByKey(\"1\").getDocCount(), equalTo(2l));\n+ assertThat(propertyId.getBucketByKey(\"2\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"3\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"4\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"5\").getDocCount(), equalTo(1l));\n+\n+ bucket = category.getBucketByKey(\"2\");\n+ assertThat(bucket.getDocCount(), equalTo(2l));\n+ property = bucket.getAggregations().get(\"property\");\n+ assertThat(property.getDocCount(), equalTo(6l));\n+ propertyId = property.getAggregations().get(\"property_id\");\n+ assertThat(propertyId.getBuckets().size(), equalTo(5));\n+ assertThat(propertyId.getBucketByKey(\"1\").getDocCount(), equalTo(2l));\n+ assertThat(propertyId.getBucketByKey(\"2\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"3\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"4\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"5\").getDocCount(), equalTo(1l));\n+\n+ bucket = category.getBucketByKey(\"3\");\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ property = bucket.getAggregations().get(\"property\");\n+ assertThat(property.getDocCount(), equalTo(3l));\n+ propertyId = property.getAggregations().get(\"property_id\");\n+ assertThat(propertyId.getBuckets().size(), equalTo(3));\n+ assertThat(propertyId.getBucketByKey(\"1\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"2\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"3\").getDocCount(), equalTo(1l));\n+\n+ bucket = category.getBucketByKey(\"4\");\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ property = bucket.getAggregations().get(\"property\");\n+ assertThat(property.getDocCount(), equalTo(3l));\n+ propertyId = property.getAggregations().get(\"property_id\");\n+ assertThat(propertyId.getBuckets().size(), equalTo(3));\n+ assertThat(propertyId.getBucketByKey(\"1\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"2\").getDocCount(), equalTo(1l));\n+ assertThat(propertyId.getBucketByKey(\"3\").getDocCount(), equalTo(1l));\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/NestedTests.java", "status": "modified" } ] }
{ "body": "This bug was introduced by #8454 which allowed the childFilter to only be consumed once. By adding the child docid buffering multiple buckets can now be emitted by the same doc id. This child docid buffering only happens in the score of the current root document, so the amount of child doc ids buffered is small.\n\nCloses #9317\n", "comments": [ { "body": "Looks good and tests passing here\n", "created_at": "2015-01-23T16:18:30Z" }, { "body": "@markharwood @jpountz I updated the PR and use IntArrayList directly now.\n", "created_at": "2015-01-25T21:37:57Z" }, { "body": "+1 Thanks @martijnvg \n\nOne concern could be that we keep on creating new object in collect() which is called in a tight loop, but correctness over speed, we can take time to think about it one this change is in.\n", "created_at": "2015-01-26T13:37:07Z" }, { "body": "@jpountz Thanks, I was thinking about adding a different impl for when we know that that the child docs of root docs are only evaluated once. (similar to how we push to a different impl for singe values values / multi values in the global ordinals terms aggregator)\n", "created_at": "2015-01-26T13:52:34Z" } ], "number": 9346, "title": "Fix handling of multiple buckets being emitted for the same parent doc id in nested aggregation" }
{ "body": "PR for #9263\n\nOn the master, 1.x and 1.4 PR #9346 should be applied first.\n", "number": 9345, "review_comments": [], "title": "In reverse nested aggregation, fix handling of the same child doc id being processed multiple times." }
{ "commits": [ { "message": "Aggs: fix handling of the same child doc id being processed multiple times in the `reverse_nested` aggregation.\n\nCloses #9263\nCloses #9345" } ], "files": [ { "diff": "@@ -20,12 +20,11 @@\n \n import com.carrotsearch.hppc.LongIntOpenHashMap;\n import org.apache.lucene.index.AtomicReaderContext;\n-import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.search.Filter;\n+import org.apache.lucene.util.FixedBitSet;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.ReaderContextAware;\n-import org.elasticsearch.common.lucene.docset.DocIdSets;\n import org.elasticsearch.common.recycler.Recycler;\n import org.elasticsearch.index.cache.fixedbitset.FixedBitSetFilter;\n import org.elasticsearch.index.mapper.MapperService;\n@@ -35,7 +34,6 @@\n import org.elasticsearch.search.aggregations.*;\n import org.elasticsearch.search.aggregations.bucket.SingleBucketAggregator;\n import org.elasticsearch.search.aggregations.support.AggregationContext;\n-import org.elasticsearch.search.internal.SearchContext;\n \n import java.io.IOException;\n \n@@ -45,7 +43,8 @@\n public class ReverseNestedAggregator extends SingleBucketAggregator implements ReaderContextAware {\n \n private final FixedBitSetFilter parentFilter;\n- private DocIdSetIterator parentDocs;\n+ // It is ok to use bitset from bitset cache, because in this agg the path always to a nested parent path.\n+ private FixedBitSet parentDocs;\n \n // TODO: Add LongIntPagedHashMap?\n private final Recycler.V<LongIntOpenHashMap> bucketOrdToLastCollectedParentDocRecycler;\n@@ -54,9 +53,9 @@ public class ReverseNestedAggregator extends SingleBucketAggregator implements R\n public ReverseNestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parent) {\n super(name, factories, aggregationContext, parent);\n if (objectMapper == null) {\n- parentFilter = SearchContext.current().fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE);\n+ parentFilter = context.searchContext().fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE);\n } else {\n- parentFilter = SearchContext.current().fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter());\n+ parentFilter = context.searchContext().fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter());\n }\n bucketOrdToLastCollectedParentDocRecycler = aggregationContext.searchContext().cacheRecycler().longIntMap(32);\n bucketOrdToLastCollectedParentDoc = bucketOrdToLastCollectedParentDocRecycler.v();\n@@ -69,12 +68,7 @@ public void setNextReader(AtomicReaderContext reader) {\n try {\n // In ES if parent is deleted, then also the children are deleted, so the child docs this agg receives\n // must belong to parent docs that is alive. 
For this reason acceptedDocs can be null here.\n- DocIdSet docIdSet = parentFilter.getDocIdSet(reader, null);\n- if (DocIdSets.isEmpty(docIdSet)) {\n- parentDocs = null;\n- } else {\n- parentDocs = docIdSet.iterator();\n- }\n+ parentDocs = parentFilter.getDocIdSet(reader, null);\n } catch (IOException ioe) {\n throw new AggregationExecutionException(\"Failed to aggregate [\" + name + \"]\", ioe);\n }\n@@ -87,12 +81,7 @@ public void collect(int childDoc, long bucketOrd) throws IOException {\n }\n \n // fast forward to retrieve the parentDoc this childDoc belongs to\n- final int parentDoc;\n- if (parentDocs.docID() < childDoc) {\n- parentDoc = parentDocs.advance(childDoc);\n- } else {\n- parentDoc = parentDocs.docID();\n- }\n+ final int parentDoc = parentDocs.nextSetBit(childDoc);\n assert childDoc <= parentDoc && parentDoc != DocIdSetIterator.NO_MORE_DOCS;\n if (bucketOrdToLastCollectedParentDoc.containsKey(bucketOrd)) {\n int lastCollectedParentDoc = bucketOrdToLastCollectedParentDoc.lget();\n@@ -157,7 +146,7 @@ public Aggregator create(AggregationContext context, Aggregator parent, long exp\n \n final ObjectMapper objectMapper;\n if (path != null) {\n- MapperService.SmartNameObjectMapper mapper = SearchContext.current().smartNameObjectMapper(path);\n+ MapperService.SmartNameObjectMapper mapper = context.searchContext().smartNameObjectMapper(path);\n if (mapper == null) {\n return new Unmapped(name, context, parent);\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/ReverseNestedAggregator.java", "status": "modified" }, { "diff": "@@ -20,11 +20,14 @@\n \n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.nested.Nested;\n import org.elasticsearch.search.aggregations.bucket.nested.ReverseNested;\n import org.elasticsearch.search.aggregations.bucket.terms.Terms;\n+import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCount;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n@@ -33,11 +36,15 @@\n import java.util.Arrays;\n import java.util.List;\n \n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.FilterBuilders.termFilter;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n import static org.hamcrest.Matchers.*;\n import static org.hamcrest.core.IsNull.notNullValue;\n \n@@ -464,4 +471,163 @@ public void 
nonExistingNestedField() throws Exception {\n ReverseNested reverseNested = nested.getAggregations().get(\"incorrect\");\n assertThat(reverseNested.getDocCount(), is(0l));\n }\n+\n+ @Test\n+ public void testSameParentDocHavingMultipleBuckets() throws Exception {\n+ XContentBuilder mapping = jsonBuilder().startObject().startObject(\"product\").field(\"dynamic\", \"strict\").startObject(\"properties\")\n+ .startObject(\"id\").field(\"type\", \"long\").endObject()\n+ .startObject(\"category\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"name\").field(\"type\", \"string\").endObject()\n+ .endObject()\n+ .endObject()\n+ .startObject(\"sku\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"sku_type\").field(\"type\", \"string\").endObject()\n+ .startObject(\"colors\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"name\").field(\"type\", \"string\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().endObject();\n+ assertAcked(\n+ prepareCreate(\"idx3\")\n+ .setSettings(ImmutableSettings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0))\n+ .addMapping(\"product\", mapping)\n+ );\n+\n+ client().prepareIndex(\"idx3\", \"product\", \"1\").setRefresh(true).setSource(\n+ jsonBuilder().startObject()\n+ .startArray(\"sku\")\n+ .startObject()\n+ .field(\"sku_type\", \"bar1\")\n+ .startArray(\"colors\")\n+ .startObject().field(\"name\", \"red\").endObject()\n+ .startObject().field(\"name\", \"green\").endObject()\n+ .startObject().field(\"name\", \"yellow\").endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .field(\"sku_type\", \"bar1\")\n+ .startArray(\"colors\")\n+ .startObject().field(\"name\", \"red\").endObject()\n+ .startObject().field(\"name\", \"blue\").endObject()\n+ .startObject().field(\"name\", \"white\").endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .field(\"sku_type\", \"bar1\")\n+ .startArray(\"colors\")\n+ .startObject().field(\"name\", \"black\").endObject()\n+ .startObject().field(\"name\", \"blue\").endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .field(\"sku_type\", \"bar2\")\n+ .startArray(\"colors\")\n+ .startObject().field(\"name\", \"orange\").endObject()\n+ .endArray()\n+ .endObject()\n+ .startObject()\n+ .field(\"sku_type\", \"bar2\")\n+ .startArray(\"colors\")\n+ .startObject().field(\"name\", \"pink\").endObject()\n+ .endArray()\n+ .endObject()\n+ .endArray()\n+ .startArray(\"category\")\n+ .startObject().field(\"name\", \"abc\").endObject()\n+ .startObject().field(\"name\", \"klm\").endObject()\n+ .startObject().field(\"name\", \"xyz\").endObject()\n+ .endArray()\n+ .endObject()\n+ ).get();\n+\n+ SearchResponse response = client().prepareSearch(\"idx3\")\n+ .addAggregation(\n+ nested(\"nested_0\").path(\"category\").subAggregation(\n+ terms(\"group_by_category\").field(\"category.name\").subAggregation(\n+ reverseNested(\"to_root\").subAggregation(\n+ nested(\"nested_1\").path(\"sku\").subAggregation(\n+ filter(\"filter_by_sku\").filter(termFilter(\"sku.sku_type\", \"bar1\")).subAggregation(\n+ count(\"sku_count\").field(\"sku_type\")\n+ )\n+ )\n+ )\n+ )\n+ )\n+ ).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+\n+ Nested nested0 = response.getAggregations().get(\"nested_0\");\n+ assertThat(nested0.getDocCount(), equalTo(3l));\n+ Terms terms = nested0.getAggregations().get(\"group_by_category\");\n+ 
assertThat(terms.getBuckets().size(), equalTo(3));\n+ for (String bucketName : new String[]{\"abc\", \"klm\", \"xyz\"}) {\n+ logger.info(\"Checking results for bucket {}\", bucketName);\n+ Terms.Bucket bucket = terms.getBucketByKey(bucketName);\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ ReverseNested toRoot = bucket.getAggregations().get(\"to_root\");\n+ assertThat(toRoot.getDocCount(), equalTo(1l));\n+ Nested nested1 = toRoot.getAggregations().get(\"nested_1\");\n+ assertThat(nested1.getDocCount(), equalTo(5l));\n+ Filter filterByBar = nested1.getAggregations().get(\"filter_by_sku\");\n+ assertThat(filterByBar.getDocCount(), equalTo(3l));\n+ ValueCount barCount = filterByBar.getAggregations().get(\"sku_count\");\n+ assertThat(barCount.getValue(), equalTo(3l));\n+ }\n+\n+ response = client().prepareSearch(\"idx3\")\n+ .addAggregation(\n+ nested(\"nested_0\").path(\"category\").subAggregation(\n+ terms(\"group_by_category\").field(\"category.name\").subAggregation(\n+ reverseNested(\"to_root\").subAggregation(\n+ nested(\"nested_1\").path(\"sku\").subAggregation(\n+ filter(\"filter_by_sku\").filter(termFilter(\"sku.sku_type\", \"bar1\")).subAggregation(\n+ nested(\"nested_2\").path(\"sku.colors\").subAggregation(\n+ filter(\"filter_sku_color\").filter(termFilter(\"sku.colors.name\", \"red\")).subAggregation(\n+ reverseNested(\"reverse_to_sku\").path(\"sku\").subAggregation(\n+ count(\"sku_count\").field(\"sku_type\")\n+ )\n+ )\n+ )\n+ )\n+ )\n+ )\n+ )\n+ )\n+ ).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+\n+ nested0 = response.getAggregations().get(\"nested_0\");\n+ assertThat(nested0.getDocCount(), equalTo(3l));\n+ terms = nested0.getAggregations().get(\"group_by_category\");\n+ assertThat(terms.getBuckets().size(), equalTo(3));\n+ for (String bucketName : new String[]{\"abc\", \"klm\", \"xyz\"}) {\n+ logger.info(\"Checking results for bucket {}\", bucketName);\n+ Terms.Bucket bucket = terms.getBucketByKey(bucketName);\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ ReverseNested toRoot = bucket.getAggregations().get(\"to_root\");\n+ assertThat(toRoot.getDocCount(), equalTo(1l));\n+ Nested nested1 = toRoot.getAggregations().get(\"nested_1\");\n+ assertThat(nested1.getDocCount(), equalTo(5l));\n+ Filter filterByBar = nested1.getAggregations().get(\"filter_by_sku\");\n+ assertThat(filterByBar.getDocCount(), equalTo(3l));\n+ Nested nested2 = filterByBar.getAggregations().get(\"nested_2\");\n+ assertThat(nested2.getDocCount(), equalTo(8l));\n+ Filter filterBarColor = nested2.getAggregations().get(\"filter_sku_color\");\n+ assertThat(filterBarColor.getDocCount(), equalTo(2l));\n+ ReverseNested reverseToBar = filterBarColor.getAggregations().get(\"reverse_to_sku\");\n+ assertThat(reverseToBar.getDocCount(), equalTo(2l));\n+ ValueCount barCount = reverseToBar.getAggregations().get(\"sku_count\");\n+ assertThat(barCount.getValue(), equalTo(2l));\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/ReverseNestedTests.java", "status": "modified" } ] }
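The reverse_nested change above leans on the block-join document layout: nested (child) documents are indexed immediately before their parent document, so the enclosing parent of a child doc id is the next set bit in the parent bitset, and a parent's children are the doc ids between the previous parent and itself. A small self-contained sketch of that relationship, using java.util.BitSet as a stand-in for Lucene's FixedBitSet and made-up doc ids (not Elasticsearch code):

```java
import java.util.BitSet;

public class BlockJoinSketch {
    public static void main(String[] args) {
        // Suppose docs 0..3 are nested children of parent doc 4,
        // and docs 5..6 are nested children of parent doc 7.
        BitSet parentDocs = new BitSet();
        parentDocs.set(4);
        parentDocs.set(7);

        int childDoc = 6;
        // reverse_nested direction: child -> enclosing parent is the next set bit.
        int parentDoc = parentDocs.nextSetBit(childDoc);            // 7
        System.out.println("parent of " + childDoc + " = " + parentDoc);

        // nested direction: a parent's children are the docs strictly between
        // the previous parent and the parent itself.
        int prevParent = parentDocs.previousSetBit(parentDoc - 1);  // 4
        for (int d = prevParent + 1; d < parentDoc; d++) {
            System.out.println("child of " + parentDoc + ": " + d); // 5, 6
        }
    }
}
```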
{ "body": "The GeoJSON specification (http://geojson.org/geojson-spec.html) does not mandate a specific order for polygon vertices thus leading to ambiguous polys around the dateline. To alleviate ambiguity the OGC requires vertex ordering for exterior rings according to the right-hand rule (ccw) with interior rings in the reverse order (cw) (http://www.opengeospatial.org/standards/sfa). While JTS expects all vertices in cw order (http://tsusiatsoftware.net/jts/jts-faq/jts-faq.html). Spatial4j circumvents the issue by not allowing polys to exceed 180 degs in width, thus choosing the smaller of the two polys. Since GEO core includes logic that determines orientation at runtime, the following OGC compliant poly will fail: \n\n``` java\n {\n \"geo_shape\" : {\n \"loc\" : {\n \"shape\" : {\n \"type\" : \"polygon\",\n \"coordinates\" : [ [ \n [ 176, 15 ], \n [ -177, 10 ], \n [ -177, -10 ], \n [ 176, -15 ], \n [ 172, 0 ],\n [ 176, 15] ], \n [ [ -179, 5],\n [-179, -5],\n [176, -5],\n [176, 5],\n [-179, -5] ]\n ]\n }\n }\n }\n}\n```\n\nOne workaround is to manually transform coordinates in the supplied GeoJSON from -180:180 to a 0:360 coordinate system (e.g, -179 = 181). This, of course, fails to comply with OGC specs and requires clients roll their own transform.\n\nThe other (preferred) solution - and the purpose of this issue - is to correct the orientation logic such that GEO core supports OGC compliant polygons without the 180 degree restriction/workaround.\n", "comments": [ { "body": "Follow feature branch feature/WKT_poly_vertex_order for WIP\n", "created_at": "2014-11-26T22:50:58Z" }, { "body": "Fixed in #8762\n", "created_at": "2014-12-16T18:53:56Z" } ], "number": 8672, "title": "[GEO] OGC compliant polygons fail with ambiguity" }
{ "body": "PR #8672 addresses ambiguous polygons - those that either cross the dateline or span the map - by complying with the OGC standard right-hand rule. Since `GeoPolygonFilter` is self contained logic, the fix in #8672 did not address the issue for the `GeoPolygonFilter`. This was identified in issue #5968\n\nThis fixes the ambiguous polygon issue in `GeoPolygonFilter` by moving the dateline crossing code from `ShapeBuilder` to `GeoUtils` and reusing the logic inside the `pointInPolygon` method. Unit tests are added to ensure support for coordinates specified in either standard lat/lon or great-circle coordinate systems.\n\ncloses #5968\ncloses #9304\n", "number": 9339, "review_comments": [ { "body": "Won't this be `true` for any `GeoPoint`, so you can never reach the block casting to `GeoPoint`?\n", "created_at": "2015-01-20T16:22:57Z" }, { "body": "I realize this was already this way, but the format here (using brackets) looks like GeoJSON, so shouldn't it be `[x,y]`?\n", "created_at": "2015-01-20T16:25:12Z" }, { "body": "I think it would be cleaner to set translated outside of the if statement.\n\n```\nboolean translated = incorrectOrientation && rng > DATELINE && rng != 360.0;\nif (translated || shellCorrected && component != 0) {\n...\n```\n", "created_at": "2015-01-21T06:09:08Z" }, { "body": "a -> at?\n", "created_at": "2015-01-21T06:21:08Z" }, { "body": "It would be nice to have this take the args in the same order as computePolyTop (array, offset, length)\n", "created_at": "2015-01-21T06:22:50Z" }, { "body": "Shouldn't this condition be reversed? You want to find the point which has the highest y value right? which means that you want to update top when the y of the current point is greater than y of the current top?\n", "created_at": "2015-01-21T06:28:50Z" }, { "body": "`offset` has already been accounted for in `prev` and `next`, so you shouldn't need to add it here right?\n", "created_at": "2015-01-21T07:44:40Z" }, { "body": "Also, you could start `top` at `offset` and `i` at `offset + 1`, so that you don't need to continually add it to `i` and `top`?\n", "created_at": "2015-01-21T07:46:58Z" }, { "body": "Should this be points[offset]?\n", "created_at": "2015-01-21T07:51:48Z" }, { "body": "Again, it would be cleaner to init `i` to `offset + 1` so that you don't have to add `offset` in every iteration.\n", "created_at": "2015-01-21T07:52:50Z" }, { "body": "It would be good to document what this function is returning. I would have expected a pair of x,y coords, but it seems like you are doing the x's and y's together? I haven't seen this before so wanted to make sure that is what was meant..\n", "created_at": "2015-01-21T07:55:42Z" }, { "body": "Is the `component != 0` check needed? Seems like the only way to get past the `||` is for that condition to be true.\n", "created_at": "2015-01-21T08:04:57Z" }, { "body": "It would be good to have direct unit tests for these utility functions, especially checking the boundary cases with the passed in offset and length.\n", "created_at": "2015-01-21T08:10:24Z" }, { "body": "To comply with GeoJSON, absolutely. I was a little concerned about changing order for sake of the existing Java API. If ES users are doing anything upstream with that string and we reverse order on them... 
But I do agree that this should comply with the GeoJSON spec, and so should they.\n\nFor consistency we should probably revisit the docs at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-geo-point-type.html and the `geo_point` type in general. The order was originally set to comply with \"natural language\" `lat/lon` ordering to make life easy. \n", "created_at": "2015-01-21T14:58:17Z" }, { "body": "Per our discussion (and the fact that reversing it ripples heavily through the GeoPointFilter and related tests) I'm going to leave this the way it is for now.\n", "created_at": "2015-01-21T18:57:43Z" }, { "body": "i still feel like we should make this function name \"correct\" and maybe call it \"computePolyBottomLeft\"?\n", "created_at": "2015-01-27T19:20:03Z" } ], "title": "Update GeoPolygonFilter to handle polygons crossing the dateline" }
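A simplified sketch of the disambiguation step that the diff below adds as GeoUtils.correctPolyAmbiguity: when the computed shell orientation disagrees with the expected handedness and the shell spans more than 180 degrees of longitude (but not the full 360), negative longitudes are shifted by +360 so the ring is interpreted as crossing the dateline. The sketch ignores holes and components and uses illustrative names; it is not the actual Elasticsearch code.

```java
// Simplified, shell-only sketch of the dateline disambiguation described above.
// Points are [lon, lat] pairs.
public class DatelineFix {
    static boolean correctAmbiguity(double[][] points, boolean expectedCcw, boolean actualCcw) {
        double minLon = points[0][0], maxLon = points[0][0];
        for (double[] p : points) {
            minLon = Math.min(minLon, p[0]);
            maxLon = Math.max(maxLon, p[0]);
        }
        double range = maxLon - minLon;
        // Translate only when the stated winding disagrees with the expected handedness
        // and the ring is wider than a hemisphere but does not already span the full map.
        boolean translate = (expectedCcw != actualCcw) && range > 180.0 && range != 360.0;
        if (translate) {
            for (double[] p : points) {
                if (p[0] < 0.0) {
                    p[0] += 360.0; // move into a 0..360 longitude system, crossing the dateline
                }
            }
        }
        return translate;
    }
}
```

Excluding the 360-degree case mirrors the comment in the diff: translating a ring that already spans the whole map would collapse the polygon.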
{ "commits": [ { "message": "[GEO] Update GeoPolygonFilter to handle ambiguous polygons\n\nPR #8672 addresses ambiguous polygons - those that either cross the dateline or span the map - by complying with the OGC standard right-hand rule. Since ```GeoPolygonFilter``` is self contained logic, the fix in #8672 did not address the issue for the ```GeoPolygonFilter```. This was identified in issue #5968\n\nThis fixes the ambiguous polygon issue in ```GeoPolygonFilter``` by moving the dateline crossing code from ```ShapeBuilder``` to ```GeoUtils``` and reusing the logic inside the ```pointInPolygon``` method. Unit tests are added to ensure support for coordinates specified in either standard lat/lon or great-circle coordinate systems.\n\ncloses #5968\ncloses #9304" } ], "files": [ { "diff": "@@ -20,13 +20,12 @@\n package org.elasticsearch.common.geo;\n \n \n+import com.vividsolutions.jts.geom.Coordinate;\n+\n /**\n *\n */\n-public final class GeoPoint {\n-\n- private double lat;\n- private double lon;\n+public final class GeoPoint extends Coordinate {\n \n public GeoPoint() {\n }\n@@ -41,32 +40,36 @@ public GeoPoint(String value) {\n this.resetFromString(value);\n }\n \n+ public GeoPoint(GeoPoint other) {\n+ super(other);\n+ }\n+\n public GeoPoint(double lat, double lon) {\n- this.lat = lat;\n- this.lon = lon;\n+ this.y = lat;\n+ this.x = lon;\n }\n \n public GeoPoint reset(double lat, double lon) {\n- this.lat = lat;\n- this.lon = lon;\n+ this.y = lat;\n+ this.x = lon;\n return this;\n }\n \n public GeoPoint resetLat(double lat) {\n- this.lat = lat;\n+ this.y = lat;\n return this;\n }\n \n public GeoPoint resetLon(double lon) {\n- this.lon = lon;\n+ this.x = lon;\n return this;\n }\n \n public GeoPoint resetFromString(String value) {\n int comma = value.indexOf(',');\n if (comma != -1) {\n- lat = Double.parseDouble(value.substring(0, comma).trim());\n- lon = Double.parseDouble(value.substring(comma + 1).trim());\n+ this.y = Double.parseDouble(value.substring(0, comma).trim());\n+ this.x = Double.parseDouble(value.substring(comma + 1).trim());\n } else {\n resetFromGeoHash(value);\n }\n@@ -79,38 +82,40 @@ public GeoPoint resetFromGeoHash(String hash) {\n }\n \n public final double lat() {\n- return this.lat;\n+ return this.y;\n }\n \n public final double getLat() {\n- return this.lat;\n+ return this.y;\n }\n \n public final double lon() {\n- return this.lon;\n+ return this.x;\n }\n \n public final double getLon() {\n- return this.lon;\n+ return this.x;\n }\n \n public final String geohash() {\n- return GeoHashUtils.encode(lat, lon);\n+ return GeoHashUtils.encode(y, x);\n }\n \n public final String getGeohash() {\n- return GeoHashUtils.encode(lat, lon);\n+ return GeoHashUtils.encode(y, x);\n }\n \n @Override\n public boolean equals(Object o) {\n if (this == o) return true;\n- if (o == null || getClass() != o.getClass()) return false;\n-\n- GeoPoint geoPoint = (GeoPoint) o;\n-\n- if (Double.compare(geoPoint.lat, lat) != 0) return false;\n- if (Double.compare(geoPoint.lon, lon) != 0) return false;\n+ if (o == null) return false;\n+ if (o instanceof Coordinate) {\n+ Coordinate c = (Coordinate)o;\n+ return Double.compare(c.x, this.x) == 0\n+ && Double.compare(c.y, this.y) == 0\n+ && Double.compare(c.z, this.z) == 0;\n+ }\n+ if (getClass() != o.getClass()) return false;\n \n return true;\n }\n@@ -119,15 +124,15 @@ public boolean equals(Object o) {\n public int hashCode() {\n int result;\n long temp;\n- temp = lat != +0.0d ? Double.doubleToLongBits(lat) : 0L;\n+ temp = y != +0.0d ? 
Double.doubleToLongBits(y) : 0L;\n result = (int) (temp ^ (temp >>> 32));\n- temp = lon != +0.0d ? Double.doubleToLongBits(lon) : 0L;\n+ temp = x != +0.0d ? Double.doubleToLongBits(x) : 0L;\n result = 31 * result + (int) (temp ^ (temp >>> 32));\n return result;\n }\n \n public String toString() {\n- return \"[\" + lat + \", \" + lon + \"]\";\n+ return \"[\" + y + \", \" + x + \"]\";\n }\n \n public static GeoPoint parseFromLatLon(String latLon) {", "filename": "src/main/java/org/elasticsearch/common/geo/GeoPoint.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.geo;\n \n+import org.apache.commons.lang3.tuple.Pair;\n import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.util.SloppyMath;\n@@ -37,7 +38,9 @@ public class GeoUtils {\n public static final String LATITUDE = GeoPointFieldMapper.Names.LAT;\n public static final String LONGITUDE = GeoPointFieldMapper.Names.LON;\n public static final String GEOHASH = GeoPointFieldMapper.Names.GEOHASH;\n- \n+\n+ public static final double DATELINE = 180.0D;\n+\n /** Earth ellipsoid major axis defined by WGS 84 in meters */\n public static final double EARTH_SEMI_MAJOR_AXIS = 6378137.0; // meters (WGS 84)\n \n@@ -422,6 +425,113 @@ public static GeoPoint parseGeoPoint(XContentParser parser, GeoPoint point) thro\n }\n }\n \n+ public static boolean correctPolyAmbiguity(GeoPoint[] points, boolean handedness) {\n+ return correctPolyAmbiguity(points, handedness, computePolyOrientation(points), 0, points.length, false);\n+ }\n+\n+ public static boolean correctPolyAmbiguity(GeoPoint[] points, boolean handedness, boolean orientation, int component, int length,\n+ boolean shellCorrected) {\n+ // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness)\n+ // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n+ // thus if orientation is computed as cw, the logic will translate points across dateline\n+ // and convert to a right handed system\n+\n+ // compute the bounding box and calculate range\n+ Pair<Pair, Pair> range = GeoUtils.computeBBox(points, length);\n+ final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n+ // translate the points if the following is true\n+ // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres\n+ // (translation would result in a collapsed poly)\n+ // 2. the shell of the candidate hole has been translated (to preserve the coordinate system)\n+ boolean incorrectOrientation = component == 0 && handedness != orientation;\n+ boolean translated = ((incorrectOrientation && (rng > DATELINE && rng != 360.0)) || (shellCorrected && component != 0));\n+ if (translated) {\n+ for (GeoPoint c : points) {\n+ if (c.x < 0.0) {\n+ c.x += 360.0;\n+ }\n+ }\n+ }\n+ return translated;\n+ }\n+\n+ public static boolean computePolyOrientation(GeoPoint[] points) {\n+ return computePolyOrientation(points, points.length);\n+ }\n+\n+ public static boolean computePolyOrientation(GeoPoint[] points, int length) {\n+ // calculate the direction of the points:\n+ // find the point at the top of the set and check its\n+ // neighbors orientation. 
So direction is equivalent\n+ // to clockwise/counterclockwise\n+ final int top = computePolyOrigin(points, length);\n+ final int prev = ((top + length - 1) % length);\n+ final int next = ((top + 1) % length);\n+ return (points[prev].x > points[next].x);\n+ }\n+\n+ private static final int computePolyOrigin(GeoPoint[] points, int length) {\n+ int top = 0;\n+ // we start at 1 here since top points to 0\n+ for (int i = 1; i < length; i++) {\n+ if (points[i].y < points[top].y) {\n+ top = i;\n+ } else if (points[i].y == points[top].y) {\n+ if (points[i].x < points[top].x) {\n+ top = i;\n+ }\n+ }\n+ }\n+ return top;\n+ }\n+\n+ public static final Pair computeBBox(GeoPoint[] points) {\n+ return computeBBox(points, 0);\n+ }\n+\n+ public static final Pair computeBBox(GeoPoint[] points, int length) {\n+ double minX = points[0].x;\n+ double maxX = points[0].x;\n+ double minY = points[0].y;\n+ double maxY = points[0].y;\n+ // compute the bounding coordinates (@todo: cleanup brute force)\n+ for (int i = 1; i < length; ++i) {\n+ if (points[i].x < minX) {\n+ minX = points[i].x;\n+ }\n+ if (points[i].x > maxX) {\n+ maxX = points[i].x;\n+ }\n+ if (points[i].y < minY) {\n+ minY = points[i].y;\n+ }\n+ if (points[i].y > maxY) {\n+ maxY = points[i].y;\n+ }\n+ }\n+ // return a pair of ranges on the X and Y axis, respectively\n+ return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n+ }\n+\n+ public static GeoPoint convertToGreatCircle(GeoPoint point) {\n+ return convertToGreatCircle(point.y, point.x);\n+ }\n+\n+ public static GeoPoint convertToGreatCircle(double lat, double lon) {\n+ GeoPoint p = new GeoPoint(lat, lon);\n+ // convert the point to standard lat/lon bounds\n+ normalizePoint(p);\n+\n+ if (p.x < 0.0D) {\n+ p.x += 360.0D;\n+ }\n+\n+ if (p.y < 0.0D) {\n+ p.y +=180.0D;\n+ }\n+ return p;\n+ }\n+\n private GeoUtils() {\n }\n }", "filename": "src/main/java/org/elasticsearch/common/geo/GeoUtils.java", "status": "modified" }, { "diff": "@@ -23,22 +23,22 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n \n-import com.spatial4j.core.shape.ShapeCollection;\n+import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n import com.vividsolutions.jts.geom.LineString;\n \n public abstract class BaseLineStringBuilder<E extends BaseLineStringBuilder<E>> extends PointCollection<E> {\n \n protected BaseLineStringBuilder() {\n- this(new ArrayList<Coordinate>());\n+ this(new ArrayList<GeoPoint>());\n }\n \n- protected BaseLineStringBuilder(ArrayList<Coordinate> points) {\n+ protected BaseLineStringBuilder(ArrayList<GeoPoint> points) {\n super(points);\n }\n \n@@ -49,7 +49,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n @Override\n public Shape build() {\n- Coordinate[] coordinates = points.toArray(new Coordinate[points.size()]);\n+ GeoPoint[] coordinates = points.toArray(new GeoPoint[points.size()]);\n Geometry geometry;\n if(wrapdateline) {\n ArrayList<LineString> strings = decompose(FACTORY, coordinates, new ArrayList<LineString>());\n@@ -67,9 +67,9 @@ public Shape build() {\n return jtsGeometry(geometry);\n }\n \n- protected static ArrayList<LineString> decompose(GeometryFactory factory, Coordinate[] coordinates, ArrayList<LineString> strings) {\n- for(Coordinate[] part : 
decompose(+DATELINE, coordinates)) {\n- for(Coordinate[] line : decompose(-DATELINE, part)) {\n+ protected static ArrayList<LineString> decompose(GeometryFactory factory, GeoPoint[] coordinates, ArrayList<LineString> strings) {\n+ for(GeoPoint[] part : decompose(+DATELINE, coordinates)) {\n+ for(GeoPoint[] line : decompose(-DATELINE, part)) {\n strings.add(factory.createLineString(line));\n }\n }\n@@ -83,16 +83,16 @@ protected static ArrayList<LineString> decompose(GeometryFactory factory, Coordi\n * @param coordinates coordinates forming the linestring\n * @return array of linestrings given as coordinate arrays \n */\n- protected static Coordinate[][] decompose(double dateline, Coordinate[] coordinates) {\n+ protected static GeoPoint[][] decompose(double dateline, GeoPoint[] coordinates) {\n int offset = 0;\n- ArrayList<Coordinate[]> parts = new ArrayList<>();\n+ ArrayList<GeoPoint[]> parts = new ArrayList<>();\n \n double shift = coordinates[0].x > DATELINE ? DATELINE : (coordinates[0].x < -DATELINE ? -DATELINE : 0);\n \n for (int i = 1; i < coordinates.length; i++) {\n double t = intersection(coordinates[i-1], coordinates[i], dateline);\n if(!Double.isNaN(t)) {\n- Coordinate[] part;\n+ GeoPoint[] part;\n if(t<1) {\n part = Arrays.copyOfRange(coordinates, offset, i+1);\n part[part.length-1] = Edge.position(coordinates[i-1], coordinates[i], t);\n@@ -111,16 +111,16 @@ protected static Coordinate[][] decompose(double dateline, Coordinate[] coordina\n if(offset == 0) {\n parts.add(shift(shift, coordinates));\n } else if(offset < coordinates.length-1) {\n- Coordinate[] part = Arrays.copyOfRange(coordinates, offset, coordinates.length);\n+ GeoPoint[] part = Arrays.copyOfRange(coordinates, offset, coordinates.length);\n parts.add(shift(shift, part));\n }\n- return parts.toArray(new Coordinate[parts.size()][]);\n+ return parts.toArray(new GeoPoint[parts.size()][]);\n }\n \n- private static Coordinate[] shift(double shift, Coordinate...coordinates) {\n+ private static GeoPoint[] shift(double shift, GeoPoint...coordinates) {\n if(shift != 0) {\n for (int j = 0; j < coordinates.length; j++) {\n- coordinates[j] = new Coordinate(coordinates[j].x - 2 * shift, coordinates[j].y);\n+ coordinates[j] = new GeoPoint(coordinates[j].y, coordinates[j].x - 2 * shift);\n }\n }\n return coordinates;", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BaseLineStringBuilder.java", "status": "modified" }, { "diff": "@@ -20,8 +20,14 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.*;\n+import com.vividsolutions.jts.geom.Geometry;\n+import com.vividsolutions.jts.geom.GeometryFactory;\n+import com.vividsolutions.jts.geom.LinearRing;\n+import com.vividsolutions.jts.geom.MultiPolygon;\n+import com.vividsolutions.jts.geom.Polygon;\n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -67,7 +73,7 @@ public E point(double longitude, double latitude) {\n * @param coordinate coordinate of the new point\n * @return this\n */\n- public E point(Coordinate coordinate) {\n+ public E point(GeoPoint coordinate) {\n shell.point(coordinate);\n return thisRef();\n }\n@@ -77,7 +83,7 @@ public E point(Coordinate coordinate) {\n * @param coordinates coordinates of the new points to add\n * @return this\n */\n- public E 
points(Coordinate...coordinates) {\n+ public E points(GeoPoint...coordinates) {\n shell.points(coordinates);\n return thisRef();\n }\n@@ -121,7 +127,7 @@ public ShapeBuilder close() {\n * \n * @return coordinates of the polygon\n */\n- public Coordinate[][][] coordinates() {\n+ public GeoPoint[][][] coordinates() {\n int numEdges = shell.points.size()-1; // Last point is repeated \n for (int i = 0; i < holes.size(); i++) {\n numEdges += holes.get(i).points.size()-1;\n@@ -170,7 +176,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n public Geometry buildGeometry(GeometryFactory factory, boolean fixDateline) {\n if(fixDateline) {\n- Coordinate[][][] polygons = coordinates();\n+ GeoPoint[][][] polygons = coordinates();\n return polygons.length == 1\n ? polygon(factory, polygons[0])\n : multipolygon(factory, polygons);\n@@ -193,16 +199,16 @@ protected Polygon toPolygon(GeometryFactory factory) {\n return factory.createPolygon(shell, holes);\n }\n \n- protected static LinearRing linearRing(GeometryFactory factory, ArrayList<Coordinate> coordinates) {\n- return factory.createLinearRing(coordinates.toArray(new Coordinate[coordinates.size()]));\n+ protected static LinearRing linearRing(GeometryFactory factory, ArrayList<GeoPoint> coordinates) {\n+ return factory.createLinearRing(coordinates.toArray(new GeoPoint[coordinates.size()]));\n }\n \n @Override\n public GeoShapeType type() {\n return TYPE;\n }\n \n- protected static Polygon polygon(GeometryFactory factory, Coordinate[][] polygon) {\n+ protected static Polygon polygon(GeometryFactory factory, GeoPoint[][] polygon) {\n LinearRing shell = factory.createLinearRing(polygon[0]);\n LinearRing[] holes;\n \n@@ -227,7 +233,7 @@ protected static Polygon polygon(GeometryFactory factory, Coordinate[][] polygon\n * @param polygons definition of polygons\n * @return a new Multipolygon\n */\n- protected static MultiPolygon multipolygon(GeometryFactory factory, Coordinate[][][] polygons) {\n+ protected static MultiPolygon multipolygon(GeometryFactory factory, GeoPoint[][][] polygons) {\n Polygon[] polygonSet = new Polygon[polygons.length];\n for (int i = 0; i < polygonSet.length; i++) {\n polygonSet[i] = polygon(factory, polygons[i]);\n@@ -283,18 +289,18 @@ private static int component(final Edge edge, final int id, final ArrayList<Edge\n * @param coordinates Array of coordinates to write the result to\n * @return the coordinates parameter\n */\n- private static Coordinate[] coordinates(Edge component, Coordinate[] coordinates) {\n+ private static GeoPoint[] coordinates(Edge component, GeoPoint[] coordinates) {\n for (int i = 0; i < coordinates.length; i++) {\n coordinates[i] = (component = component.next).coordinate;\n }\n return coordinates;\n }\n \n- private static Coordinate[][][] buildCoordinates(ArrayList<ArrayList<Coordinate[]>> components) {\n- Coordinate[][][] result = new Coordinate[components.size()][][];\n+ private static GeoPoint[][][] buildCoordinates(ArrayList<ArrayList<GeoPoint[]>> components) {\n+ GeoPoint[][][] result = new GeoPoint[components.size()][][];\n for (int i = 0; i < result.length; i++) {\n- ArrayList<Coordinate[]> component = components.get(i);\n- result[i] = component.toArray(new Coordinate[component.size()][]);\n+ ArrayList<GeoPoint[]> component = components.get(i);\n+ result[i] = component.toArray(new GeoPoint[component.size()][]);\n }\n \n if(debugEnabled()) {\n@@ -309,44 +315,45 @@ private static Coordinate[][][] buildCoordinates(ArrayList<ArrayList<Coordinate[\n return result;\n } 
\n \n- private static final Coordinate[][] EMPTY = new Coordinate[0][];\n+ private static final GeoPoint[][] EMPTY = new GeoPoint[0][];\n \n- private static Coordinate[][] holes(Edge[] holes, int numHoles) {\n+ private static GeoPoint[][] holes(Edge[] holes, int numHoles) {\n if (numHoles == 0) {\n return EMPTY;\n }\n- final Coordinate[][] points = new Coordinate[numHoles][];\n+ final GeoPoint[][] points = new GeoPoint[numHoles][];\n \n for (int i = 0; i < numHoles; i++) {\n int length = component(holes[i], -(i+1), null); // mark as visited by inverting the sign\n- points[i] = coordinates(holes[i], new Coordinate[length+1]);\n+ points[i] = coordinates(holes[i], new GeoPoint[length+1]);\n }\n \n return points;\n } \n \n- private static Edge[] edges(Edge[] edges, int numHoles, ArrayList<ArrayList<Coordinate[]>> components) {\n+ private static Edge[] edges(Edge[] edges, int numHoles, ArrayList<ArrayList<GeoPoint[]>> components) {\n ArrayList<Edge> mainEdges = new ArrayList<>(edges.length);\n \n for (int i = 0; i < edges.length; i++) {\n if (edges[i].component >= 0) {\n int length = component(edges[i], -(components.size()+numHoles+1), mainEdges);\n- ArrayList<Coordinate[]> component = new ArrayList<>();\n- component.add(coordinates(edges[i], new Coordinate[length+1]));\n+ ArrayList<GeoPoint[]> component = new ArrayList<>();\n+ component.add(coordinates(edges[i], new GeoPoint[length+1]));\n components.add(component);\n }\n }\n \n return mainEdges.toArray(new Edge[mainEdges.size()]);\n }\n \n- private static Coordinate[][][] compose(Edge[] edges, Edge[] holes, int numHoles) {\n- final ArrayList<ArrayList<Coordinate[]>> components = new ArrayList<>();\n+ private static GeoPoint[][][] compose(Edge[] edges, Edge[] holes, int numHoles) {\n+ final ArrayList<ArrayList<GeoPoint[]>> components = new ArrayList<>();\n assign(holes, holes(holes, numHoles), numHoles, edges(edges, numHoles, components), components);\n return buildCoordinates(components);\n }\n \n- private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Edge[] edges, ArrayList<ArrayList<Coordinate[]>> components) {\n+ private static void assign(Edge[] holes, GeoPoint[][] points, int numHoles, Edge[] edges, ArrayList<ArrayList<GeoPoint[]>>\n+ components) {\n // Assign Hole to related components\n // To find the new component the hole belongs to all intersections of the\n // polygon edges with a vertical line are calculated. This vertical line\n@@ -461,14 +468,13 @@ private static void connect(Edge in, Edge out) {\n }\n \n private static int createEdges(int component, Orientation orientation, BaseLineStringBuilder<?> shell,\n- BaseLineStringBuilder<?> hole,\n- Edge[] edges, int offset) {\n+ BaseLineStringBuilder<?> hole, Edge[] edges, int edgeOffset) {\n // inner rings (holes) have an opposite direction than the outer rings\n // XOR will invert the orientation for outer ring cases (Truth Table:, T/T = F, T/F = T, F/T = T, F/F = F)\n boolean direction = (component != 0 ^ orientation == Orientation.RIGHT);\n // set the points array accordingly (shell or hole)\n- Coordinate[] points = (hole != null) ? hole.coordinates(false) : shell.coordinates(false);\n- Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, 0, edges, offset, points.length-1);\n+ GeoPoint[] points = (hole != null) ? 
hole.coordinates(false) : shell.coordinates(false);\n+ Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, edges, edgeOffset, points.length-1);\n return points.length-1;\n }\n \n@@ -477,17 +483,17 @@ public static class Ring<P extends ShapeBuilder> extends BaseLineStringBuilder<R\n private final P parent;\n \n protected Ring(P parent) {\n- this(parent, new ArrayList<Coordinate>());\n+ this(parent, new ArrayList<GeoPoint>());\n }\n \n- protected Ring(P parent, ArrayList<Coordinate> points) {\n+ protected Ring(P parent, ArrayList<GeoPoint> points) {\n super(points);\n this.parent = parent;\n }\n \n public P close() {\n- Coordinate start = points.get(0);\n- Coordinate end = points.get(points.size()-1);\n+ GeoPoint start = points.get(0);\n+ GeoPoint end = points.get(points.size()-1);\n if(start.x != end.x || start.y != end.y) {\n points.add(start);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Circle;\n-import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -34,15 +34,15 @@ public class CircleBuilder extends ShapeBuilder {\n \n private DistanceUnit unit;\n private double radius;\n- private Coordinate center;\n+ private GeoPoint center;\n \n /**\n * Set the center of the circle\n * \n * @param center coordinate of the circles center\n * @return this\n */\n- public CircleBuilder center(Coordinate center) {\n+ public CircleBuilder center(GeoPoint center) {\n this.center = center;\n return this;\n }\n@@ -54,7 +54,7 @@ public CircleBuilder center(Coordinate center) {\n * @return this\n */\n public CircleBuilder center(double lon, double lat) {\n- return center(new Coordinate(lon, lat));\n+ return center(new GeoPoint(lat, lon));\n }\n \n /**", "filename": "src/main/java/org/elasticsearch/common/geo/builders/CircleBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Rectangle;\n-import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -29,8 +29,8 @@ public class EnvelopeBuilder extends ShapeBuilder {\n \n public static final GeoShapeType TYPE = GeoShapeType.ENVELOPE; \n \n- protected Coordinate topLeft;\n- protected Coordinate bottomRight;\n+ protected GeoPoint topLeft;\n+ protected GeoPoint bottomRight;\n \n public EnvelopeBuilder() {\n this(Orientation.RIGHT);\n@@ -40,7 +40,7 @@ public EnvelopeBuilder(Orientation orientation) {\n super(orientation);\n }\n \n- public EnvelopeBuilder topLeft(Coordinate topLeft) {\n+ public EnvelopeBuilder topLeft(GeoPoint topLeft) {\n this.topLeft = topLeft;\n return this;\n }\n@@ -49,7 +49,7 @@ public EnvelopeBuilder topLeft(double longitude, double latitude) {\n return topLeft(coordinate(longitude, latitude));\n }\n \n- public EnvelopeBuilder bottomRight(Coordinate bottomRight) {\n+ public EnvelopeBuilder bottomRight(GeoPoint bottomRight) {\n this.bottomRight = bottomRight;\n return this;\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/EnvelopeBuilder.java", "status": "modified" }, 
{ "diff": "@@ -19,11 +19,10 @@\n \n package org.elasticsearch.common.geo.builders;\n \n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n-import com.spatial4j.core.shape.jts.JtsGeometry;\n-import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.LineString;\n \n@@ -48,8 +47,8 @@ public MultiLineStringBuilder linestring(BaseLineStringBuilder<?> line) {\n return this;\n }\n \n- public Coordinate[][] coordinates() {\n- Coordinate[][] result = new Coordinate[lines.size()][];\n+ public GeoPoint[][] coordinates() {\n+ GeoPoint[][] result = new GeoPoint[lines.size()][];\n for (int i = 0; i < result.length; i++) {\n result[i] = lines.get(i).coordinates(false);\n }\n@@ -113,7 +112,7 @@ public MultiLineStringBuilder end() {\n return collection;\n }\n \n- public Coordinate[] coordinates() {\n+ public GeoPoint[] coordinates() {\n return super.coordinates(false);\n }\n ", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiLineStringBuilder.java", "status": "modified" }, { "diff": "@@ -22,7 +22,7 @@\n import com.spatial4j.core.shape.Point;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.ShapeCollection;\n-import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -48,7 +48,7 @@ public Shape build() {\n //Could wrap JtsGeometry but probably slower due to conversions to/from JTS in relate()\n //MultiPoint geometry = FACTORY.createMultiPoint(points.toArray(new Coordinate[points.size()]));\n List<Point> shapes = new ArrayList<>(points.size());\n- for (Coordinate coord : points) {\n+ for (GeoPoint coord : points) {\n shapes.add(SPATIAL_CONTEXT.makePoint(coord.x, coord.y));\n }\n return new ShapeCollection<>(shapes, SPATIAL_CONTEXT);", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPointBuilder.java", "status": "modified" }, { "diff": "@@ -24,10 +24,10 @@\n import java.util.List;\n \n import com.spatial4j.core.shape.ShapeCollection;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.Coordinate;\n \n public class MultiPolygonBuilder extends ShapeBuilder {\n \n@@ -84,7 +84,7 @@ public Shape build() {\n \n if(wrapdateline) {\n for (BasePolygonBuilder<?> polygon : this.polygons) {\n- for(Coordinate[][] part : polygon.coordinates()) {\n+ for(GeoPoint[][] part : polygon.coordinates()) {\n shapes.add(jtsGeometry(PolygonBuilder.polygon(FACTORY, part)));\n }\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPolygonBuilder.java", "status": "modified" }, { "diff": "@@ -21,18 +21,18 @@\n \n import java.io.IOException;\n \n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Point;\n-import com.vividsolutions.jts.geom.Coordinate;\n \n public class PointBuilder extends ShapeBuilder {\n \n public static final GeoShapeType TYPE = GeoShapeType.POINT;\n \n- private Coordinate coordinate;\n+ private GeoPoint coordinate;\n \n- public PointBuilder coordinate(Coordinate coordinate) {\n+ public PointBuilder coordinate(GeoPoint coordinate) {\n this.coordinate = coordinate;\n return this;\n }", "filename": 
"src/main/java/org/elasticsearch/common/geo/builders/PointBuilder.java", "status": "modified" }, { "diff": "@@ -24,23 +24,22 @@\n import java.util.Arrays;\n import java.util.Collection;\n \n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n-import com.vividsolutions.jts.geom.Coordinate;\n-\n /**\n * The {@link PointCollection} is an abstract base implementation for all GeoShapes. It simply handles a set of points. \n */\n public abstract class PointCollection<E extends PointCollection<E>> extends ShapeBuilder {\n \n- protected final ArrayList<Coordinate> points;\n+ protected final ArrayList<GeoPoint> points;\n protected boolean translated = false;\n \n protected PointCollection() {\n- this(new ArrayList<Coordinate>());\n+ this(new ArrayList<GeoPoint>());\n }\n \n- protected PointCollection(ArrayList<Coordinate> points) {\n+ protected PointCollection(ArrayList<GeoPoint> points) {\n this.points = points;\n }\n \n@@ -64,28 +63,28 @@ public E point(double longitude, double latitude) {\n * @param coordinate coordinate of the point\n * @return this\n */\n- public E point(Coordinate coordinate) {\n+ public E point(GeoPoint coordinate) {\n this.points.add(coordinate);\n return thisRef();\n }\n \n /**\n * Add a array of points to the collection\n * \n- * @param coordinates array of {@link Coordinate}s to add\n+ * @param coordinates array of {@link GeoPoint}s to add\n * @return this\n */\n- public E points(Coordinate...coordinates) {\n+ public E points(GeoPoint...coordinates) {\n return this.points(Arrays.asList(coordinates));\n }\n \n /**\n * Add a collection of points to the collection\n * \n- * @param coordinates array of {@link Coordinate}s to add\n+ * @param coordinates array of {@link GeoPoint}s to add\n * @return this\n */\n- public E points(Collection<? extends Coordinate> coordinates) {\n+ public E points(Collection<? extends GeoPoint> coordinates) {\n this.points.addAll(coordinates);\n return thisRef();\n }\n@@ -96,8 +95,8 @@ public E points(Collection<? 
extends Coordinate> coordinates) {\n * @param closed if set to true the first point of the array is repeated as last element\n * @return Array of coordinates\n */\n- protected Coordinate[] coordinates(boolean closed) {\n- Coordinate[] result = points.toArray(new Coordinate[points.size() + (closed?1:0)]);\n+ protected GeoPoint[] coordinates(boolean closed) {\n+ GeoPoint[] result = points.toArray(new GeoPoint[points.size() + (closed?1:0)]);\n if(closed) {\n result[result.length-1] = result[0];\n }\n@@ -114,12 +113,12 @@ protected Coordinate[] coordinates(boolean closed) {\n */\n protected XContentBuilder coordinatesToXcontent(XContentBuilder builder, boolean closed) throws IOException {\n builder.startArray();\n- for(Coordinate point : points) {\n+ for(GeoPoint point : points) {\n toXContent(builder, point);\n }\n if(closed) {\n- Coordinate start = points.get(0);\n- Coordinate end = points.get(points.size()-1);\n+ GeoPoint start = points.get(0);\n+ GeoPoint end = points.get(points.size()-1);\n if(start.x != end.x || start.y != end.y) {\n toXContent(builder, points.get(0));\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PointCollection.java", "status": "modified" }, { "diff": "@@ -21,19 +21,19 @@\n \n import java.util.ArrayList;\n \n-import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.GeoPoint;\n \n public class PolygonBuilder extends BasePolygonBuilder<PolygonBuilder> {\n \n public PolygonBuilder() {\n- this(new ArrayList<Coordinate>(), Orientation.RIGHT);\n+ this(new ArrayList<GeoPoint>(), Orientation.RIGHT);\n }\n \n public PolygonBuilder(Orientation orientation) {\n- this(new ArrayList<Coordinate>(), orientation);\n+ this(new ArrayList<GeoPoint>(), orientation);\n }\n \n- protected PolygonBuilder(ArrayList<Coordinate> points, Orientation orientation) {\n+ protected PolygonBuilder(ArrayList<GeoPoint> points, Orientation orientation) {\n super(orientation);\n this.shell = new Ring<>(this, points);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PolygonBuilder.java", "status": "modified" }, { "diff": "@@ -22,12 +22,12 @@\n import com.spatial4j.core.context.jts.JtsSpatialContext;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n-import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n-import org.apache.commons.lang3.tuple.Pair;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n@@ -57,7 +57,7 @@ public abstract class ShapeBuilder implements ToXContent {\n DEBUG = debug;\n }\n \n- public static final double DATELINE = 180;\n+ public static final double DATELINE = GeoUtils.DATELINE;\n // TODO how might we use JtsSpatialContextFactory to configure the context (esp. 
for non-geo)?\n public static final JtsSpatialContext SPATIAL_CONTEXT = JtsSpatialContext.GEO;\n public static final GeometryFactory FACTORY = SPATIAL_CONTEXT.getGeometryFactory();\n@@ -84,8 +84,8 @@ protected ShapeBuilder(Orientation orientation) {\n this.orientation = orientation;\n }\n \n- protected static Coordinate coordinate(double longitude, double latitude) {\n- return new Coordinate(longitude, latitude);\n+ protected static GeoPoint coordinate(double longitude, double latitude) {\n+ return new GeoPoint(latitude, longitude);\n }\n \n protected JtsGeometry jtsGeometry(Geometry geom) {\n@@ -106,15 +106,15 @@ protected JtsGeometry jtsGeometry(Geometry geom) {\n * @return a new {@link PointBuilder}\n */\n public static PointBuilder newPoint(double longitude, double latitude) {\n- return newPoint(new Coordinate(longitude, latitude));\n+ return newPoint(new GeoPoint(latitude, longitude));\n }\n \n /**\n- * Create a new {@link PointBuilder} from a {@link Coordinate}\n+ * Create a new {@link PointBuilder} from a {@link GeoPoint}\n * @param coordinate coordinate defining the position of the point\n * @return a new {@link PointBuilder}\n */\n- public static PointBuilder newPoint(Coordinate coordinate) {\n+ public static PointBuilder newPoint(GeoPoint coordinate) {\n return new PointBuilder().coordinate(coordinate);\n }\n \n@@ -250,7 +250,7 @@ private static CoordinateNode parseCoordinates(XContentParser parser) throws IOE\n token = parser.nextToken();\n double lat = parser.doubleValue();\n token = parser.nextToken();\n- return new CoordinateNode(new Coordinate(lon, lat));\n+ return new CoordinateNode(new GeoPoint(lat, lon));\n } else if (token == XContentParser.Token.VALUE_NULL) {\n throw new ElasticsearchIllegalArgumentException(\"coordinates cannot contain NULL values)\");\n }\n@@ -289,7 +289,7 @@ public static ShapeBuilder parse(XContentParser parser, GeoShapeFieldMapper geoD\n return GeoShapeType.parse(parser, geoDocMapper);\n }\n \n- protected static XContentBuilder toXContent(XContentBuilder builder, Coordinate coordinate) throws IOException {\n+ protected static XContentBuilder toXContent(XContentBuilder builder, GeoPoint coordinate) throws IOException {\n return builder.startArray().value(coordinate.x).value(coordinate.y).endArray();\n }\n \n@@ -309,11 +309,11 @@ public static Orientation orientationFromString(String orientation) {\n }\n }\n \n- protected static Coordinate shift(Coordinate coordinate, double dateline) {\n+ protected static GeoPoint shift(GeoPoint coordinate, double dateline) {\n if (dateline == 0) {\n return coordinate;\n } else {\n- return new Coordinate(-2 * dateline + coordinate.x, coordinate.y);\n+ return new GeoPoint(coordinate.y, -2 * dateline + coordinate.x);\n }\n }\n \n@@ -325,7 +325,7 @@ protected static Coordinate shift(Coordinate coordinate, double dateline) {\n \n /**\n * Calculate the intersection of a line segment and a vertical dateline.\n- * \n+ *\n * @param p1\n * start-point of the line segment\n * @param p2\n@@ -336,7 +336,7 @@ protected static Coordinate shift(Coordinate coordinate, double dateline) {\n * segment intersects with the line segment. 
Otherwise this method\n * returns {@link Double#NaN}\n */\n- protected static final double intersection(Coordinate p1, Coordinate p2, double dateline) {\n+ protected static final double intersection(GeoPoint p1, GeoPoint p2, double dateline) {\n if (p1.x == p2.x && p1.x != dateline) {\n return Double.NaN;\n } else if (p1.x == p2.x && p1.x == dateline) {\n@@ -366,8 +366,8 @@ protected static int intersections(double dateline, Edge[] edges) {\n int numIntersections = 0;\n assert !Double.isNaN(dateline);\n for (int i = 0; i < edges.length; i++) {\n- Coordinate p1 = edges[i].coordinate;\n- Coordinate p2 = edges[i].next.coordinate;\n+ GeoPoint p1 = edges[i].coordinate;\n+ GeoPoint p2 = edges[i].next.coordinate;\n assert !Double.isNaN(p2.x) && !Double.isNaN(p1.x); \n edges[i].intersect = Edge.MAX_COORDINATE;\n \n@@ -384,21 +384,21 @@ protected static int intersections(double dateline, Edge[] edges) {\n /**\n * Node used to represent a tree of coordinates.\n * <p/>\n- * Can either be a leaf node consisting of a Coordinate, or a parent with\n+ * Can either be a leaf node consisting of a GeoPoint, or a parent with\n * children\n */\n protected static class CoordinateNode implements ToXContent {\n \n- protected final Coordinate coordinate;\n+ protected final GeoPoint coordinate;\n protected final List<CoordinateNode> children;\n \n /**\n * Creates a new leaf CoordinateNode\n * \n * @param coordinate\n- * Coordinate for the Node\n+ * GeoPoint for the Node\n */\n- protected CoordinateNode(Coordinate coordinate) {\n+ protected CoordinateNode(GeoPoint coordinate) {\n this.coordinate = coordinate;\n this.children = null;\n }\n@@ -434,17 +434,17 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n \n /**\n- * This helper class implements a linked list for {@link Coordinate}. It contains\n+ * This helper class implements a linked list for {@link GeoPoint}. 
It contains\n * fields for a dateline intersection and component id \n */\n protected static final class Edge {\n- Coordinate coordinate; // coordinate of the start point\n+ GeoPoint coordinate; // coordinate of the start point\n Edge next; // next segment\n- Coordinate intersect; // potential intersection with dateline\n+ GeoPoint intersect; // potential intersection with dateline\n int component = -1; // id of the component this edge belongs to\n- public static final Coordinate MAX_COORDINATE = new Coordinate(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);\n+ public static final GeoPoint MAX_COORDINATE = new GeoPoint(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);\n \n- protected Edge(Coordinate coordinate, Edge next, Coordinate intersection) {\n+ protected Edge(GeoPoint coordinate, Edge next, GeoPoint intersection) {\n this.coordinate = coordinate;\n this.next = next;\n this.intersect = intersection;\n@@ -453,11 +453,11 @@ protected Edge(Coordinate coordinate, Edge next, Coordinate intersection) {\n }\n }\n \n- protected Edge(Coordinate coordinate, Edge next) {\n+ protected Edge(GeoPoint coordinate, Edge next) {\n this(coordinate, next, Edge.MAX_COORDINATE);\n }\n \n- private static final int top(Coordinate[] points, int offset, int length) {\n+ private static final int top(GeoPoint[] points, int offset, int length) {\n int top = 0; // we start at 1 here since top points to 0\n for (int i = 1; i < length; i++) {\n if (points[offset + i].y < points[offset + top].y) {\n@@ -471,29 +471,6 @@ private static final int top(Coordinate[] points, int offset, int length) {\n return top;\n }\n \n- private static final Pair range(Coordinate[] points, int offset, int length) {\n- double minX = points[0].x;\n- double maxX = points[0].x;\n- double minY = points[0].y;\n- double maxY = points[0].y;\n- // compute the bounding coordinates (@todo: cleanup brute force)\n- for (int i = 1; i < length; ++i) {\n- if (points[offset + i].x < minX) {\n- minX = points[offset + i].x;\n- }\n- if (points[offset + i].x > maxX) {\n- maxX = points[offset + i].x;\n- }\n- if (points[offset + i].y < minY) {\n- minY = points[offset + i].y;\n- }\n- if (points[offset + i].y > maxY) {\n- maxY = points[offset + i].y;\n- }\n- }\n- return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n- }\n-\n /**\n * Concatenate a set of points to a polygon\n * \n@@ -503,8 +480,6 @@ private static final Pair range(Coordinate[] points, int offset, int length) {\n * direction of the ring\n * @param points\n * list of points to concatenate\n- * @param pointOffset\n- * index of the first point\n * @param edges\n * Array of edges to write the result to\n * @param edgeOffset\n@@ -513,27 +488,29 @@ private static final Pair range(Coordinate[] points, int offset, int length) {\n * number of points to use\n * @return the edges creates\n */\n- private static Edge[] concat(int component, boolean direction, Coordinate[] points, final int pointOffset, Edge[] edges, final int edgeOffset,\n- int length) {\n+ private static Edge[] concat(int component, boolean direction, GeoPoint[] points, Edge[] edges, final int edgeOffset,\n+ int length) {\n assert edges.length >= length+edgeOffset;\n- assert points.length >= length+pointOffset;\n- edges[edgeOffset] = new Edge(points[pointOffset], null);\n- for (int i = 1; i < length; i++) {\n+ assert points.length >= length;\n+ edges[edgeOffset] = new Edge(points[0], null);\n+ int edgeEnd = edgeOffset + length;\n+\n+ for (int i = edgeOffset+1, p = 1; i < edgeEnd; ++i, ++p) {\n if (direction) {\n- 
edges[edgeOffset + i] = new Edge(points[pointOffset + i], edges[edgeOffset + i - 1]);\n- edges[edgeOffset + i].component = component;\n+ edges[i] = new Edge(points[p], edges[i - 1]);\n+ edges[i].component = component;\n } else {\n- edges[edgeOffset + i - 1].next = edges[edgeOffset + i] = new Edge(points[pointOffset + i], null);\n- edges[edgeOffset + i - 1].component = component;\n+ edges[i - 1].next = edges[i] = new Edge(points[p], null);\n+ edges[i - 1].component = component;\n }\n }\n \n if (direction) {\n- edges[edgeOffset].next = edges[edgeOffset + length - 1];\n+ edges[edgeOffset].next = edges[edgeEnd - 1];\n edges[edgeOffset].component = component;\n } else {\n- edges[edgeOffset + length - 1].next = edges[edgeOffset];\n- edges[edgeOffset + length - 1].component = component;\n+ edges[edgeEnd - 1].next = edges[edgeOffset];\n+ edges[edgeEnd - 1].component = component;\n }\n \n return edges;\n@@ -544,82 +521,47 @@ private static Edge[] concat(int component, boolean direction, Coordinate[] poin\n * \n * @param points\n * array of point\n- * @param offset\n- * index of the first point\n * @param length\n * number of points\n * @return Array of edges\n */\n protected static Edge[] ring(int component, boolean direction, boolean handedness, BaseLineStringBuilder<?> shell,\n- Coordinate[] points, int offset, Edge[] edges, int toffset, int length) {\n+ GeoPoint[] points, Edge[] edges, int edgeOffset, int length) {\n // calculate the direction of the points:\n- // find the point a the top of the set and check its\n- // neighbors orientation. So direction is equivalent\n- // to clockwise/counterclockwise\n- final int top = top(points, offset, length);\n- final int prev = (offset + ((top + length - 1) % length));\n- final int next = (offset + ((top + 1) % length));\n- boolean orientation = points[offset + prev].x > points[offset + next].x;\n-\n- // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness) \n- // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n- // thus if orientation is computed as cw, the logic will translate points across dateline\n- // and convert to a right handed system\n-\n- // compute the bounding box and calculate range\n- Pair<Pair, Pair> range = range(points, offset, length);\n- final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n- // translate the points if the following is true\n- // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres \n- // (translation would result in a collapsed poly)\n- // 2. 
the shell of the candidate hole has been translated (to preserve the coordinate system)\n- boolean incorrectOrientation = component == 0 && handedness != orientation;\n- if ( (incorrectOrientation && (rng > DATELINE && rng != 2*DATELINE)) || (shell.translated && component != 0)) {\n- translate(points);\n- // flip the translation bit if the shell is being translated\n- if (component == 0) {\n- shell.translated = true;\n- }\n- // correct the orientation post translation (ccw for shell, cw for holes)\n- if (component == 0 || (component != 0 && handedness == orientation)) {\n- orientation = !orientation;\n- }\n- }\n- return concat(component, direction ^ orientation, points, offset, edges, toffset, length);\n- }\n+ boolean orientation = GeoUtils.computePolyOrientation(points, length);\n+ boolean corrected = GeoUtils.correctPolyAmbiguity(points, handedness, orientation, component, length,\n+ shell.translated);\n \n- /**\n- * Transforms coordinates in the eastern hemisphere (-180:0) to a (180:360) range \n- * @param points\n- */\n- protected static void translate(Coordinate[] points) {\n- for (Coordinate c : points) {\n- if (c.x < 0) {\n- c.x += 2*DATELINE;\n+ // correct the orientation post translation (ccw for shell, cw for holes)\n+ if (corrected && (component == 0 || (component != 0 && handedness == orientation))) {\n+ if (component == 0) {\n+ shell.translated = corrected;\n }\n+ orientation = !orientation;\n }\n+ return concat(component, direction ^ orientation, points, edges, edgeOffset, length);\n }\n \n /**\n * Set the intersection of this line segment to the given position\n * \n * @param position\n * position of the intersection [0..1]\n- * @return the {@link Coordinate} of the intersection\n+ * @return the {@link GeoPoint} of the intersection\n */\n- protected Coordinate intersection(double position) {\n+ protected GeoPoint intersection(double position) {\n return intersect = position(coordinate, next.coordinate, position);\n }\n \n- public static Coordinate position(Coordinate p1, Coordinate p2, double position) {\n+ public static GeoPoint position(GeoPoint p1, GeoPoint p2, double position) {\n if (position == 0) {\n return p1;\n } else if (position == 1) {\n return p2;\n } else {\n final double x = p1.x + position * (p2.x - p1.x);\n final double y = p1.y + position * (p2.y - p1.y);\n- return new Coordinate(x, y);\n+ return new GeoPoint(y, x);\n }\n }\n \n@@ -793,12 +735,12 @@ protected static EnvelopeBuilder parseEnvelope(CoordinateNode coordinates, Orien\n \"geo_shape ('envelope') when expecting an array of 2 coordinates\");\n }\n // verify coordinate bounds, correct if necessary\n- Coordinate uL = coordinates.children.get(0).coordinate;\n- Coordinate lR = coordinates.children.get(1).coordinate;\n+ GeoPoint uL = coordinates.children.get(0).coordinate;\n+ GeoPoint lR = coordinates.children.get(1).coordinate;\n if (((lR.x < uL.x) || (uL.y < lR.y))) {\n- Coordinate uLtmp = uL;\n- uL = new Coordinate(Math.min(uL.x, lR.x), Math.max(uL.y, lR.y));\n- lR = new Coordinate(Math.max(uLtmp.x, lR.x), Math.min(uLtmp.y, lR.y));\n+ GeoPoint uLtmp = uL;\n+ uL = new GeoPoint(Math.max(uL.y, lR.y), Math.min(uL.x, lR.x));\n+ lR = new GeoPoint(Math.min(uLtmp.y, lR.y), Math.max(uLtmp.x, lR.x));\n }\n return newEnvelope(orientation).topLeft(uL).bottomRight(lR);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -294,6 +294,10 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n if (includeDefaults 
|| defaultStrategy.getDistErrPct() != Defaults.DISTANCE_ERROR_PCT) {\n builder.field(Names.DISTANCE_ERROR_PCT, defaultStrategy.getDistErrPct());\n }\n+\n+ if (includeDefaults || shapeOrientation != Defaults.ORIENTATION ) {\n+ builder.field(Names.ORIENTATION, shapeOrientation);\n+ }\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;\n import org.elasticsearch.index.fielddata.MultiGeoPointValues;\n \n@@ -93,15 +94,27 @@ protected boolean matchDoc(int doc) {\n \n private static boolean pointInPolygon(GeoPoint[] points, double lat, double lon) {\n boolean inPoly = false;\n-\n+ // @TODO handedness will be an option provided by the parser\n+ boolean corrected = GeoUtils.correctPolyAmbiguity(points, false);\n+ GeoPoint p = (corrected) ?\n+ GeoUtils.convertToGreatCircle(lat, lon) :\n+ new GeoPoint(lat, lon);\n+\n+ GeoPoint pp0 = (corrected) ? GeoUtils.convertToGreatCircle(points[0]) : points[0] ;\n+ GeoPoint pp1;\n+ // simple even-odd PIP computation\n+ // 1. Determine if point is contained in the longitudinal range\n+ // 2. Determine whether point crosses the edge by computing the latitudinal delta\n+ // between the end-point of a parallel vector (originating at the point) and the\n+ // y-component of the edge sink\n for (int i = 1; i < points.length; i++) {\n- if (points[i].lon() < lon && points[i-1].lon() >= lon\n- || points[i-1].lon() < lon && points[i].lon() >= lon) {\n- if (points[i].lat() + (lon - points[i].lon()) /\n- (points[i-1].lon() - points[i].lon()) * (points[i-1].lat() - points[i].lat()) < lat) {\n+ pp1 = points[i];\n+ if (pp1.x < p.x && pp0.x >= p.x || pp0.x < p.x && pp1.x >= p.x) {\n+ if (pp1.y + (p.x - pp1.x) / (pp0.x - pp1.x) * (pp0.y - pp1.y) < p.y) {\n inPoly = !inPoly;\n }\n }\n+ pp0 = pp1;\n }\n return inPoly;\n }", "filename": "src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java", "status": "modified" }, { "diff": "@@ -26,7 +26,13 @@\n import com.spatial4j.core.shape.ShapeCollection;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n import com.spatial4j.core.shape.jts.JtsPoint;\n-import com.vividsolutions.jts.geom.*;\n+import com.vividsolutions.jts.geom.Geometry;\n+import com.vividsolutions.jts.geom.GeometryFactory;\n+import com.vividsolutions.jts.geom.LineString;\n+import com.vividsolutions.jts.geom.LinearRing;\n+import com.vividsolutions.jts.geom.MultiLineString;\n+import com.vividsolutions.jts.geom.Point;\n+import com.vividsolutions.jts.geom.Polygon;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n@@ -57,7 +63,7 @@ public void testParse_simplePoint() throws IOException {\n .startArray(\"coordinates\").value(100.0).value(0.0).endArray()\n .endObject().string();\n \n- Point expected = GEOMETRY_FACTORY.createPoint(new Coordinate(100.0, 0.0));\n+ Point expected = GEOMETRY_FACTORY.createPoint(new GeoPoint(0.0, 100.0));\n assertGeometryEquals(new JtsPoint(expected, SPATIAL_CONTEXT), pointGeoJson);\n }\n \n@@ -69,12 +75,12 @@ public void testParse_lineString() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<Coordinate> lineCoordinates = new 
ArrayList<>();\n- lineCoordinates.add(new Coordinate(100, 0));\n- lineCoordinates.add(new Coordinate(101, 1));\n+ List<GeoPoint> lineCoordinates = new ArrayList<>();\n+ lineCoordinates.add(new GeoPoint(0, 100));\n+ lineCoordinates.add(new GeoPoint(1, 101));\n \n LineString expected = GEOMETRY_FACTORY.createLineString(\n- lineCoordinates.toArray(new Coordinate[lineCoordinates.size()]));\n+ lineCoordinates.toArray(new GeoPoint[lineCoordinates.size()]));\n assertGeometryEquals(jtsGeom(expected), lineGeoJson);\n }\n \n@@ -93,13 +99,13 @@ public void testParse_multiLineString() throws IOException {\n .endObject().string();\n \n MultiLineString expected = GEOMETRY_FACTORY.createMultiLineString(new LineString[]{\n- GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n- new Coordinate(100, 0),\n- new Coordinate(101, 1),\n+ GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n+ new GeoPoint(0, 100),\n+ new GeoPoint(1, 101),\n }),\n- GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n- new Coordinate(102, 2),\n- new Coordinate(103, 3),\n+ GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n+ new GeoPoint(2, 102),\n+ new GeoPoint(3, 103),\n }),\n });\n assertGeometryEquals(jtsGeom(expected), multilinesGeoJson);\n@@ -173,14 +179,14 @@ public void testParse_polygonNoHoles() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<Coordinate> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(100, 0));\n- shellCoordinates.add(new Coordinate(101, 0));\n- shellCoordinates.add(new Coordinate(101, 1));\n- shellCoordinates.add(new Coordinate(100, 1));\n- shellCoordinates.add(new Coordinate(100, 0));\n+ List<GeoPoint> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new GeoPoint(0, 100));\n+ shellCoordinates.add(new GeoPoint(0, 101));\n+ shellCoordinates.add(new GeoPoint(1, 101));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new GeoPoint(0, 100));\n \n- LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, null);\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n@@ -567,25 +573,25 @@ public void testParse_polygonWithHole() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<Coordinate> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(100, 0));\n- shellCoordinates.add(new Coordinate(101, 0));\n- shellCoordinates.add(new Coordinate(101, 1));\n- shellCoordinates.add(new Coordinate(100, 1));\n- shellCoordinates.add(new Coordinate(100, 0));\n+ List<GeoPoint> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new GeoPoint(0, 100));\n+ shellCoordinates.add(new GeoPoint(0, 101));\n+ shellCoordinates.add(new GeoPoint(1, 101));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new GeoPoint(0, 100));\n \n- List<Coordinate> holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ List<GeoPoint> holeCoordinates = new ArrayList<>();\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 
100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n \n LinearRing shell = GEOMETRY_FACTORY.createLinearRing(\n- shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n LinearRing[] holes = new LinearRing[1];\n holes[0] = GEOMETRY_FACTORY.createLinearRing(\n- holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n+ holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, holes);\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n@@ -657,34 +663,34 @@ public void testParse_multiPolygon() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<Coordinate> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(100, 0));\n- shellCoordinates.add(new Coordinate(101, 0));\n- shellCoordinates.add(new Coordinate(101, 1));\n- shellCoordinates.add(new Coordinate(100, 1));\n- shellCoordinates.add(new Coordinate(100, 0));\n+ List<GeoPoint> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new GeoPoint(0, 100));\n+ shellCoordinates.add(new GeoPoint(0, 101));\n+ shellCoordinates.add(new GeoPoint(1, 101));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new GeoPoint(0, 100));\n \n- List<Coordinate> holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ List<GeoPoint> holeCoordinates = new ArrayList<>();\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n \n- LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n LinearRing[] holes = new LinearRing[1];\n- holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n+ holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n Polygon withHoles = GEOMETRY_FACTORY.createPolygon(shell, holes);\n \n shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(102, 3));\n- shellCoordinates.add(new Coordinate(103, 3));\n- shellCoordinates.add(new Coordinate(103, 2));\n- shellCoordinates.add(new Coordinate(102, 2));\n- shellCoordinates.add(new Coordinate(102, 3));\n+ shellCoordinates.add(new GeoPoint(3, 102));\n+ shellCoordinates.add(new GeoPoint(3, 103));\n+ shellCoordinates.add(new GeoPoint(2, 103));\n+ shellCoordinates.add(new GeoPoint(2, 102));\n+ shellCoordinates.add(new GeoPoint(3, 102));\n \n \n- shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n Polygon withoutHoles = GEOMETRY_FACTORY.createPolygon(shell, null);\n \n Shape expected = shapeCollection(withoutHoles, withHoles);\n@@ -716,22 +722,22 @@ public void testParse_multiPolygon() throws 
IOException {\n .endObject().string();\n \n shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(100, 1));\n- shellCoordinates.add(new Coordinate(101, 1));\n- shellCoordinates.add(new Coordinate(101, 0));\n- shellCoordinates.add(new Coordinate(100, 0));\n- shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new GeoPoint(1, 101));\n+ shellCoordinates.add(new GeoPoint(0, 101));\n+ shellCoordinates.add(new GeoPoint(0, 100));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n \n holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new Coordinate(100.2, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.2));\n \n- shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n holes = new LinearRing[1];\n- holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n+ holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n withHoles = GEOMETRY_FACTORY.createPolygon(shell, holes);\n \n assertGeometryEquals(jtsGeom(withHoles), multiPolygonGeoJson);\n@@ -757,12 +763,12 @@ public void testParse_geometryCollection() throws IOException {\n .string();\n \n Shape[] expected = new Shape[2];\n- LineString expectedLineString = GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n- new Coordinate(100, 0),\n- new Coordinate(101, 1),\n+ LineString expectedLineString = GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n+ new GeoPoint(0, 100),\n+ new GeoPoint(1, 101),\n });\n expected[0] = jtsGeom(expectedLineString);\n- Point expectedPoint = GEOMETRY_FACTORY.createPoint(new Coordinate(102.0, 2.0));\n+ Point expectedPoint = GEOMETRY_FACTORY.createPoint(new GeoPoint(2.0, 102.0));\n expected[1] = new JtsPoint(expectedPoint, SPATIAL_CONTEXT);\n \n //equals returns true only if geometries are in the same order\n@@ -785,7 +791,7 @@ public void testThatParserExtractsCorrectTypeAndCoordinatesFromArbitraryJson() t\n .startObject(\"lala\").field(\"type\", \"NotAPoint\").endObject()\n .endObject().string();\n \n- Point expected = GEOMETRY_FACTORY.createPoint(new Coordinate(100.0, 0.0));\n+ Point expected = GEOMETRY_FACTORY.createPoint(new GeoPoint(0.0, 100.0));\n assertGeometryEquals(new JtsPoint(expected, SPATIAL_CONTEXT), pointGeoJson);\n }\n ", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import com.spatial4j.core.shape.Rectangle;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.impl.PointImpl;\n-import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.LineString;\n import com.vividsolutions.jts.geom.Polygon;\n import org.elasticsearch.common.geo.builders.PolygonBuilder;\n@@ -64,38 +63,39 @@ public void testNewPolygon() {\n .point(-45, 30).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- 
assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n- assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n- assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n- assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n+ assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n+ assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n+ assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n+ assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n }\n \n @Test\n public void testNewPolygon_coordinate() {\n Polygon polygon = ShapeBuilder.newPolygon()\n- .point(new Coordinate(-45, 30))\n- .point(new Coordinate(45, 30))\n- .point(new Coordinate(45, -30))\n- .point(new Coordinate(-45, -30))\n- .point(new Coordinate(-45, 30)).toPolygon();\n+ .point(new GeoPoint(30, -45))\n+ .point(new GeoPoint(30, 45))\n+ .point(new GeoPoint(-30, 45))\n+ .point(new GeoPoint(-30, -45))\n+ .point(new GeoPoint(30, -45)).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n- assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n- assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n- assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n+ assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n+ assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n+ assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n+ assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n }\n \n @Test\n public void testNewPolygon_coordinates() {\n Polygon polygon = ShapeBuilder.newPolygon()\n- .points(new Coordinate(-45, 30), new Coordinate(45, 30), new Coordinate(45, -30), new Coordinate(-45, -30), new Coordinate(-45, 30)).toPolygon();\n+ .points(new GeoPoint(30, -45), new GeoPoint(30, 45), new GeoPoint(-30, 45), new GeoPoint(-30, -45),\n+ new GeoPoint(30, -45)).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n- assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n- assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n- assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n+ assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n+ assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n+ assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n+ assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java", "status": "modified" } ] }
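The patched `GeoPolygonFilter.pointInPolygon` in the diff above boils down to an even-odd (ray-casting) crossing test. The following is a rough, self-contained illustration of that test, not Elasticsearch code: the class name, method signature and sample values are invented for this sketch, plain `double` arrays stand in for `GeoPoint`, and the handedness/great-circle correction is left out.

```java
// Invented, self-contained sketch of an even-odd (ray casting) point-in-polygon test
// of the kind the patched pointInPolygon above is built around; plain lon/lat arrays
// replace GeoPoint so this compiles on its own.
public class PointInPolygonSketch {

    // Returns true if (lat, lon) lies inside the ring given by lons/lats.
    // The ring is treated as closed: j trails i, so the edge from the last
    // vertex back to the first is included.
    static boolean pointInPolygon(double[] lons, double[] lats, double lat, double lon) {
        boolean inside = false;
        for (int i = 0, j = lons.length - 1; i < lons.length; j = i++) {
            // does the edge (j -> i) straddle the vertical line x = lon?
            if ((lons[i] < lon && lons[j] >= lon) || (lons[j] < lon && lons[i] >= lon)) {
                // latitude at which the edge crosses x = lon; flip parity when that
                // crossing lies below the query point
                double crossingLat = lats[i]
                        + (lon - lons[i]) / (lons[j] - lons[i]) * (lats[j] - lats[i]);
                if (crossingLat < lat) {
                    inside = !inside;
                }
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        // a box just west of the dateline: lon 178..179.5, lat 39..42
        double[] lons = {178, 178, 179.5, 179.5};
        double[] lats = {42, 39, 39, 42};
        System.out.println(pointInPolygon(lons, lats, 40, 179));  // true
        System.out.println(pointInPolygon(lons, lats, 40, -179)); // false
    }
}
```

The real filter additionally runs `GeoUtils.correctPolyAmbiguity` and converts both the ring and the query point with `convertToGreatCircle` when the ring is ambiguous; that correction step is deliberately omitted from the sketch.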
{ "body": "Using Elasticsearch 1.1.1. I'm seeing geo_polygon behave oddly when the input polygon cross the date line. From a quick look at GeoPolygonFilter.GeoPolygonDocSet.pointInPolygon it doesn't seem that this is explicitly handled. \n\nThe reproduce this create an index/mapping as:\n\nPOST /geo\n\n``` json\n{ \"mappings\": { \"docs\": { \"properties\": { \"p\": { \"type\": \"geo_point\" } } } } }\n```\n\nUpload a document:\n\nPUT /geo/docs/1\n\n``` json\n{ \"p\": { \"lat\": 40, \"lon\": 179 } }\n```\n\nSearch with a polygon that's a box around the uploaded point and that crosses the date line:\n\nPOST /geo/docs/_search\n\n``` json\n{\n \"filter\": { \"geo_polygon\": { \"p\": { \"points\": [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -179 },\n { \"lat\": 42, \"lon\": 178 }\n ] } } }\n}\n```\n\nES returns 0 results. If I use a polygon that stays to the west of the date line I do get results:\n\n``` json\n{\n \"filter\": { \"geo_polygon\": { \"p\": { \"points\": [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 179.5 },\n { \"lat\": 42, \"lon\": 179.5 },\n { \"lat\": 42, \"lon\": 178 }\n ] } } }\n}\n```\n\nAlso, if I use a bounding box query with the same coordinates as the initial polygon, it does work:\n\n``` json\n{\n \"filter\": { \"geo_bounding_box\": { \"p\": \n { \"top_left\": { \"lat\": 42, \"lon\": 178 },\n \"bottom_right\": { \"lat\": 39, \"lon\": -179 }\n }\n } }\n}\n```\n\nIt seems that this code needs to either split the check into east and west checks or normalize the input values. Am I missing something?\n", "comments": [ { "body": "This is actually working as expected. Your first query will resolve as shown in the following GeoJSON gist which will not contain your document:\nhttps://gist.github.com/anonymous/82b50b74a7b6d170bfc6\n\nTo create the desired results you specified you would need to split the polygon in to two polygons, one to the left of the date line and the other to the right. This can be done with the following query:\n\n```\ncurl -XPOST 'localhost:9200/geo/_search?pretty' -d '{\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"filter\" : {\n \"or\" : [\n {\n \"geo_polygon\" : {\n \"p\" : {\n \"points\" : [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 180 },\n { \"lat\": 42, \"lon\": 180 },\n { \"lat\": 42, \"lon\": 178 }\n ]\n }\n }\n },\n {\n \"geo_polygon\" : {\n \"p\" : {\n \"points\" : [\n { \"lat\": 42, \"lon\": -180 },\n { \"lat\": 39, \"lon\": -180 },\n { \"lat\": 39, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -180 }\n ]\n }\n }\n }\n ]\n }\n }\n }\n}'\n```\n\nThe bounding box is a little different since by specifying which coordinate is top_left and which is top_right you are fixing the box to overlap the date line.\n", "created_at": "2014-05-27T15:58:46Z" }, { "body": "For me, I'd expect the line segment between two points to lie in the same direction as the great circle arc. Splitting an arbitrary query into sub-polygons isn't entirely straightforward, because it could intersect longitude 180 multiple times.\n\nI have a rough draft of a commit that fixes this issue by shifting the polygon in GeoPolygonFilter (and all points passed in to pointInPolygon) so that it lies completely on one side of longitude 180. It is low-overhead and has basically no effect on the normal case. 
The only constraint is that the polygon can't span more than 360 degrees in longitude.\n\nDoes this sound reasonable, and is it worth submitting a PR?\n", "created_at": "2014-09-17T21:44:43Z" }, { "body": "@colings86 could this be solved with a `left/right` parameter or something similar?\n", "created_at": "2014-09-25T18:21:19Z" }, { "body": "@nknize is this fixed by https://github.com/elasticsearch/elasticsearch/pull/8521 ?\n", "created_at": "2014-11-28T09:49:36Z" }, { "body": "It does not. This is the infamous \"ambiguous polygon\" problem that occurs when treating a spherical coordinate system as a cartesian plane. I opened a discussion and feature branch to address this in #8672 \n\ntldr: GeoJSON doesn't specify order, but OGC does.\n\nFeature fix: Default behavior = For GeoJSON poly's specified in OGC order (shell: ccw, holes: cw) ES Ring logic will correctly transform and split polys across the dateline (e.g., see https://gist.github.com/nknize/d122b243dc63dcba8474). For GeoJSON poly's provided in the opposite order original behavior will occur (e.g., @colings86 example https://gist.github.com/anonymous/82b50b74a7b6d170bfc6). \n\nAdditionally, I like @clintongormley suggestion of adding an optional left/right parameter. Its an easy fix letting user's clearly specify intent.\n", "created_at": "2014-12-01T14:32:21Z" }, { "body": "This is now addressed in PR #8762\n", "created_at": "2014-12-03T14:00:29Z" }, { "body": "Optional left/right parameter added in PR #8978 \n", "created_at": "2014-12-16T18:41:49Z" }, { "body": "Merged in edd33c0\n", "created_at": "2014-12-29T22:11:30Z" }, { "body": "I tried @pablocastro's example on trunk, and unfortunately the issue is still there. There might've been some confusion -- the original example refers to the geo_polygon filter, whereas @nknize's fix is for the polygon geo_shape type.\n\nShould I create a new ticket for the geo_polygon filter, or should we re-open this one?\n", "created_at": "2015-01-03T22:17:20Z" }, { "body": "Good catch @jtibshirani! We'll go ahead and reopen this ticket since its a separate issue.\n", "created_at": "2015-01-04T04:41:55Z" }, { "body": "Reopening due to #9462 \n", "created_at": "2015-01-28T15:06:48Z" }, { "body": "The search hits of really huge polygon (elasticsearch 1.4.3)\n\n``` javascript\n{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [\n -70.4873046875,\n 79.9818262344106\n ],\n [\n -70.4873046875,\n -28.07230647927298\n ],\n [\n -103.3583984375,\n -28.07230647927298\n ],\n [\n -103.3583984375,\n 79.9818262344106\n ],\n [\n -70.4873046875,\n 79.9818262344106\n ]\n ]\n ],\n \"orientation\": \"ccw\"\n }\n }\n }\n }\n }\n }\n}\n```\n\ndoesn't include any points inside that polygon even with orientation option.\nIs it related with this issue?\n", "created_at": "2015-02-12T17:36:56Z" }, { "body": "Its related. For now, if you have an ambiguous poly that crosses the pole you'll need to manually split it into 2 separate explicit polys and put inside a `MultiPolygon` Depending on the complexity of the poly computing the pole intersections can be non-trivial. The in-work patch will do this for you.\n\nA separate issue is related to the distance_error_pct parameter. If not specified, larger filters will have reduced accuracy. 
Though this seems unrelated to your GeoJSON.\n", "created_at": "2015-02-12T22:49:53Z" }, { "body": "relates to #26286", "created_at": "2018-03-26T16:59:07Z" }, { "body": "closing in favor of #26286 since it's an old issue", "created_at": "2018-03-26T17:00:11Z" } ], "number": 5968, "title": "geo_polygon not handling polygons that cross the date line properly" }
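One comment above proposes fixing the filter by shifting a dateline-crossing polygon so it lies entirely on one side of longitude 180. Below is a minimal, hypothetical sketch of that idea; it is not Elasticsearch code, and both the names and the jump-greater-than-180-degrees crossing heuristic are assumptions of the sketch.

```java
// Hypothetical sketch (not Elasticsearch code) of the longitude-shift idea described
// in the comments: when a ring crosses the dateline, move negative longitudes into
// the 180..360 range so the ring becomes contiguous, then run an ordinary
// point-in-polygon test on the shifted ring and the equally shifted query point.
public class DatelineShiftSketch {

    // Shift a longitude into [0, 360) so a ring spanning 178 .. -179 becomes 178 .. 181.
    static double shiftLon(double lon) {
        return lon < 0 ? lon + 360.0 : lon;
    }

    // Assumed heuristic: a ring crosses the dateline if two adjacent vertices
    // jump by more than 180 degrees of longitude.
    static boolean crossesDateline(double[] lons) {
        for (int i = 1; i < lons.length; i++) {
            if (Math.abs(lons[i] - lons[i - 1]) > 180.0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // the failing ring from the issue above: lon 178 / -179, lat 39 / 42
        double[] lons = {178, 178, -179, -179, 178};
        if (crossesDateline(lons)) {
            for (int i = 0; i < lons.length; i++) {
                lons[i] = shiftLon(lons[i]);
            }
        }
        // prints [178.0, 178.0, 181.0, 181.0, 178.0]; the indexed point (lat 40, lon 179)
        // keeps its longitude and now falls inside the shifted ring
        System.out.println(java.util.Arrays.toString(lons));
    }
}
```

The PR that eventually addressed this (see below) takes a related route: rather than requiring the caller to split the query into two polygons, `GeoUtils.correctPolyAmbiguity` translates negative longitudes by +360 when a ring is detected as ambiguous, so the same even-odd test keeps working across the dateline.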
{ "body": "PR #8672 addresses ambiguous polygons - those that either cross the dateline or span the map - by complying with the OGC standard right-hand rule. Since `GeoPolygonFilter` is self contained logic, the fix in #8672 did not address the issue for the `GeoPolygonFilter`. This was identified in issue #5968\n\nThis fixes the ambiguous polygon issue in `GeoPolygonFilter` by moving the dateline crossing code from `ShapeBuilder` to `GeoUtils` and reusing the logic inside the `pointInPolygon` method. Unit tests are added to ensure support for coordinates specified in either standard lat/lon or great-circle coordinate systems.\n\ncloses #5968\ncloses #9304\n", "number": 9339, "review_comments": [ { "body": "Won't this be `true` for any `GeoPoint`, so you can never reach the block casting to `GeoPoint`?\n", "created_at": "2015-01-20T16:22:57Z" }, { "body": "I realize this was already this way, but the format here (using brackets) looks like GeoJSON, so shouldn't it be `[x,y]`?\n", "created_at": "2015-01-20T16:25:12Z" }, { "body": "I think it would be cleaner to set translated outside of the if statement.\n\n```\nboolean translated = incorrectOrientation && rng > DATELINE && rng != 360.0;\nif (translated || shellCorrected && component != 0) {\n...\n```\n", "created_at": "2015-01-21T06:09:08Z" }, { "body": "a -> at?\n", "created_at": "2015-01-21T06:21:08Z" }, { "body": "It would be nice to have this take the args in the same order as computePolyTop (array, offset, length)\n", "created_at": "2015-01-21T06:22:50Z" }, { "body": "Shouldn't this condition be reversed? You want to find the point which has the highest y value right? which means that you want to update top when the y of the current point is greater than y of the current top?\n", "created_at": "2015-01-21T06:28:50Z" }, { "body": "`offset` has already been accounted for in `prev` and `next`, so you shouldn't need to add it here right?\n", "created_at": "2015-01-21T07:44:40Z" }, { "body": "Also, you could start `top` at `offset` and `i` at `offset + 1`, so that you don't need to continually add it to `i` and `top`?\n", "created_at": "2015-01-21T07:46:58Z" }, { "body": "Should this be points[offset]?\n", "created_at": "2015-01-21T07:51:48Z" }, { "body": "Again, it would be cleaner to init `i` to `offset + 1` so that you don't have to add `offset` in every iteration.\n", "created_at": "2015-01-21T07:52:50Z" }, { "body": "It would be good to document what this function is returning. I would have expected a pair of x,y coords, but it seems like you are doing the x's and y's together? I haven't seen this before so wanted to make sure that is what was meant..\n", "created_at": "2015-01-21T07:55:42Z" }, { "body": "Is the `component != 0` check needed? Seems like the only way to get past the `||` is for that condition to be true.\n", "created_at": "2015-01-21T08:04:57Z" }, { "body": "It would be good to have direct unit tests for these utility functions, especially checking the boundary cases with the passed in offset and length.\n", "created_at": "2015-01-21T08:10:24Z" }, { "body": "To comply with GeoJSON, absolutely. I was a little concerned about changing order for sake of the existing Java API. If ES users are doing anything upstream with that string and we reverse order on them... 
But I do agree that this should comply with the GeoJSON spec, and so should they.\n\nFor consistency we should probably revisit the docs at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-geo-point-type.html and the `geo_point` type in general. The order was originally set to comply with \"natural language\" `lat/lon` ordering to make life easy. \n", "created_at": "2015-01-21T14:58:17Z" }, { "body": "Per our discussion (and the fact that reversing it ripples heavily through the GeoPointFilter and related tests) I'm going to leave this the way it is for now.", "created_at": "2015-01-21T18:57:43Z" }, { "body": "I still feel like we should make this function name \"correct\" and maybe call it \"computePolyBottomLeft\"?", "created_at": "2015-01-27T19:20:03Z" } ], "title": "Update GeoPolygonFilter to handle polygons crossing the dateline" }
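The orientation check that this PR moves into `GeoUtils`, and that the last review comment suggests renaming along the lines of `computePolyBottomLeft`, can be restated compactly. A hedged sketch, assuming an unclosed ring and using plain coordinate arrays and an invented class name instead of `GeoPoint`:

```java
// Invented names; a compact restatement of a bottom-left-vertex orientation check,
// assuming an unclosed ring (last vertex not repeated) and plain coordinate arrays.
public class RingOrientationSketch {

    // True when the ring winds clockwise: at the bottom-most (then left-most) vertex,
    // the previous neighbour lies to the right of the next one.
    static boolean isClockwise(double[] xs, double[] ys) {
        int bottom = 0;
        for (int i = 1; i < xs.length; i++) {
            if (ys[i] < ys[bottom] || (ys[i] == ys[bottom] && xs[i] < xs[bottom])) {
                bottom = i;
            }
        }
        int prev = (bottom + xs.length - 1) % xs.length;
        int next = (bottom + 1) % xs.length;
        return xs[prev] > xs[next];
    }

    public static void main(String[] args) {
        // counter-clockwise square (OGC shell order) ...
        double[] xsCcw = {0, 10, 10, 0};
        double[] ysCcw = {0, 0, 10, 10};
        System.out.println(isClockwise(xsCcw, ysCcw)); // false
        // ... and the same square traversed the other way round
        double[] xsCw = {0, 10, 10, 0};
        double[] ysCw = {10, 10, 0, 0};
        System.out.println(isClockwise(xsCw, ysCw)); // true
    }
}
```

Only three vertices are inspected, which keeps the check cheap; in the patch the same helper is reached both from `ShapeBuilder`'s edge construction and from `GeoPolygonFilter` via `correctPolyAmbiguity`.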
{ "commits": [ { "message": "[GEO] Update GeoPolygonFilter to handle ambiguous polygons\n\nPR #8672 addresses ambiguous polygons - those that either cross the dateline or span the map - by complying with the OGC standard right-hand rule. Since ```GeoPolygonFilter``` is self contained logic, the fix in #8672 did not address the issue for the ```GeoPolygonFilter```. This was identified in issue #5968\n\nThis fixes the ambiguous polygon issue in ```GeoPolygonFilter``` by moving the dateline crossing code from ```ShapeBuilder``` to ```GeoUtils``` and reusing the logic inside the ```pointInPolygon``` method. Unit tests are added to ensure support for coordinates specified in either standard lat/lon or great-circle coordinate systems.\n\ncloses #5968\ncloses #9304" } ], "files": [ { "diff": "@@ -20,13 +20,12 @@\n package org.elasticsearch.common.geo;\n \n \n+import com.vividsolutions.jts.geom.Coordinate;\n+\n /**\n *\n */\n-public final class GeoPoint {\n-\n- private double lat;\n- private double lon;\n+public final class GeoPoint extends Coordinate {\n \n public GeoPoint() {\n }\n@@ -41,32 +40,36 @@ public GeoPoint(String value) {\n this.resetFromString(value);\n }\n \n+ public GeoPoint(GeoPoint other) {\n+ super(other);\n+ }\n+\n public GeoPoint(double lat, double lon) {\n- this.lat = lat;\n- this.lon = lon;\n+ this.y = lat;\n+ this.x = lon;\n }\n \n public GeoPoint reset(double lat, double lon) {\n- this.lat = lat;\n- this.lon = lon;\n+ this.y = lat;\n+ this.x = lon;\n return this;\n }\n \n public GeoPoint resetLat(double lat) {\n- this.lat = lat;\n+ this.y = lat;\n return this;\n }\n \n public GeoPoint resetLon(double lon) {\n- this.lon = lon;\n+ this.x = lon;\n return this;\n }\n \n public GeoPoint resetFromString(String value) {\n int comma = value.indexOf(',');\n if (comma != -1) {\n- lat = Double.parseDouble(value.substring(0, comma).trim());\n- lon = Double.parseDouble(value.substring(comma + 1).trim());\n+ this.y = Double.parseDouble(value.substring(0, comma).trim());\n+ this.x = Double.parseDouble(value.substring(comma + 1).trim());\n } else {\n resetFromGeoHash(value);\n }\n@@ -79,38 +82,40 @@ public GeoPoint resetFromGeoHash(String hash) {\n }\n \n public final double lat() {\n- return this.lat;\n+ return this.y;\n }\n \n public final double getLat() {\n- return this.lat;\n+ return this.y;\n }\n \n public final double lon() {\n- return this.lon;\n+ return this.x;\n }\n \n public final double getLon() {\n- return this.lon;\n+ return this.x;\n }\n \n public final String geohash() {\n- return GeoHashUtils.encode(lat, lon);\n+ return GeoHashUtils.encode(y, x);\n }\n \n public final String getGeohash() {\n- return GeoHashUtils.encode(lat, lon);\n+ return GeoHashUtils.encode(y, x);\n }\n \n @Override\n public boolean equals(Object o) {\n if (this == o) return true;\n- if (o == null || getClass() != o.getClass()) return false;\n-\n- GeoPoint geoPoint = (GeoPoint) o;\n-\n- if (Double.compare(geoPoint.lat, lat) != 0) return false;\n- if (Double.compare(geoPoint.lon, lon) != 0) return false;\n+ if (o == null) return false;\n+ if (o instanceof Coordinate) {\n+ Coordinate c = (Coordinate)o;\n+ return Double.compare(c.x, this.x) == 0\n+ && Double.compare(c.y, this.y) == 0\n+ && Double.compare(c.z, this.z) == 0;\n+ }\n+ if (getClass() != o.getClass()) return false;\n \n return true;\n }\n@@ -119,15 +124,15 @@ public boolean equals(Object o) {\n public int hashCode() {\n int result;\n long temp;\n- temp = lat != +0.0d ? Double.doubleToLongBits(lat) : 0L;\n+ temp = y != +0.0d ? 
Double.doubleToLongBits(y) : 0L;\n result = (int) (temp ^ (temp >>> 32));\n- temp = lon != +0.0d ? Double.doubleToLongBits(lon) : 0L;\n+ temp = x != +0.0d ? Double.doubleToLongBits(x) : 0L;\n result = 31 * result + (int) (temp ^ (temp >>> 32));\n return result;\n }\n \n public String toString() {\n- return \"[\" + lat + \", \" + lon + \"]\";\n+ return \"[\" + y + \", \" + x + \"]\";\n }\n \n public static GeoPoint parseFromLatLon(String latLon) {", "filename": "src/main/java/org/elasticsearch/common/geo/GeoPoint.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.geo;\n \n+import org.apache.commons.lang3.tuple.Pair;\n import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;\n import org.apache.lucene.spatial.prefix.tree.QuadPrefixTree;\n import org.apache.lucene.util.SloppyMath;\n@@ -37,7 +38,9 @@ public class GeoUtils {\n public static final String LATITUDE = GeoPointFieldMapper.Names.LAT;\n public static final String LONGITUDE = GeoPointFieldMapper.Names.LON;\n public static final String GEOHASH = GeoPointFieldMapper.Names.GEOHASH;\n- \n+\n+ public static final double DATELINE = 180.0D;\n+\n /** Earth ellipsoid major axis defined by WGS 84 in meters */\n public static final double EARTH_SEMI_MAJOR_AXIS = 6378137.0; // meters (WGS 84)\n \n@@ -422,6 +425,113 @@ public static GeoPoint parseGeoPoint(XContentParser parser, GeoPoint point) thro\n }\n }\n \n+ public static boolean correctPolyAmbiguity(GeoPoint[] points, boolean handedness) {\n+ return correctPolyAmbiguity(points, handedness, computePolyOrientation(points), 0, points.length, false);\n+ }\n+\n+ public static boolean correctPolyAmbiguity(GeoPoint[] points, boolean handedness, boolean orientation, int component, int length,\n+ boolean shellCorrected) {\n+ // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness)\n+ // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n+ // thus if orientation is computed as cw, the logic will translate points across dateline\n+ // and convert to a right handed system\n+\n+ // compute the bounding box and calculate range\n+ Pair<Pair, Pair> range = GeoUtils.computeBBox(points, length);\n+ final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n+ // translate the points if the following is true\n+ // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres\n+ // (translation would result in a collapsed poly)\n+ // 2. the shell of the candidate hole has been translated (to preserve the coordinate system)\n+ boolean incorrectOrientation = component == 0 && handedness != orientation;\n+ boolean translated = ((incorrectOrientation && (rng > DATELINE && rng != 360.0)) || (shellCorrected && component != 0));\n+ if (translated) {\n+ for (GeoPoint c : points) {\n+ if (c.x < 0.0) {\n+ c.x += 360.0;\n+ }\n+ }\n+ }\n+ return translated;\n+ }\n+\n+ public static boolean computePolyOrientation(GeoPoint[] points) {\n+ return computePolyOrientation(points, points.length);\n+ }\n+\n+ public static boolean computePolyOrientation(GeoPoint[] points, int length) {\n+ // calculate the direction of the points:\n+ // find the point at the top of the set and check its\n+ // neighbors orientation. 
So direction is equivalent\n+ // to clockwise/counterclockwise\n+ final int top = computePolyOrigin(points, length);\n+ final int prev = ((top + length - 1) % length);\n+ final int next = ((top + 1) % length);\n+ return (points[prev].x > points[next].x);\n+ }\n+\n+ private static final int computePolyOrigin(GeoPoint[] points, int length) {\n+ int top = 0;\n+ // we start at 1 here since top points to 0\n+ for (int i = 1; i < length; i++) {\n+ if (points[i].y < points[top].y) {\n+ top = i;\n+ } else if (points[i].y == points[top].y) {\n+ if (points[i].x < points[top].x) {\n+ top = i;\n+ }\n+ }\n+ }\n+ return top;\n+ }\n+\n+ public static final Pair computeBBox(GeoPoint[] points) {\n+ return computeBBox(points, 0);\n+ }\n+\n+ public static final Pair computeBBox(GeoPoint[] points, int length) {\n+ double minX = points[0].x;\n+ double maxX = points[0].x;\n+ double minY = points[0].y;\n+ double maxY = points[0].y;\n+ // compute the bounding coordinates (@todo: cleanup brute force)\n+ for (int i = 1; i < length; ++i) {\n+ if (points[i].x < minX) {\n+ minX = points[i].x;\n+ }\n+ if (points[i].x > maxX) {\n+ maxX = points[i].x;\n+ }\n+ if (points[i].y < minY) {\n+ minY = points[i].y;\n+ }\n+ if (points[i].y > maxY) {\n+ maxY = points[i].y;\n+ }\n+ }\n+ // return a pair of ranges on the X and Y axis, respectively\n+ return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n+ }\n+\n+ public static GeoPoint convertToGreatCircle(GeoPoint point) {\n+ return convertToGreatCircle(point.y, point.x);\n+ }\n+\n+ public static GeoPoint convertToGreatCircle(double lat, double lon) {\n+ GeoPoint p = new GeoPoint(lat, lon);\n+ // convert the point to standard lat/lon bounds\n+ normalizePoint(p);\n+\n+ if (p.x < 0.0D) {\n+ p.x += 360.0D;\n+ }\n+\n+ if (p.y < 0.0D) {\n+ p.y +=180.0D;\n+ }\n+ return p;\n+ }\n+\n private GeoUtils() {\n }\n }", "filename": "src/main/java/org/elasticsearch/common/geo/GeoUtils.java", "status": "modified" }, { "diff": "@@ -23,22 +23,22 @@\n import java.util.ArrayList;\n import java.util.Arrays;\n \n-import com.spatial4j.core.shape.ShapeCollection;\n+import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n import com.vividsolutions.jts.geom.LineString;\n \n public abstract class BaseLineStringBuilder<E extends BaseLineStringBuilder<E>> extends PointCollection<E> {\n \n protected BaseLineStringBuilder() {\n- this(new ArrayList<Coordinate>());\n+ this(new ArrayList<GeoPoint>());\n }\n \n- protected BaseLineStringBuilder(ArrayList<Coordinate> points) {\n+ protected BaseLineStringBuilder(ArrayList<GeoPoint> points) {\n super(points);\n }\n \n@@ -49,7 +49,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n @Override\n public Shape build() {\n- Coordinate[] coordinates = points.toArray(new Coordinate[points.size()]);\n+ GeoPoint[] coordinates = points.toArray(new GeoPoint[points.size()]);\n Geometry geometry;\n if(wrapdateline) {\n ArrayList<LineString> strings = decompose(FACTORY, coordinates, new ArrayList<LineString>());\n@@ -67,9 +67,9 @@ public Shape build() {\n return jtsGeometry(geometry);\n }\n \n- protected static ArrayList<LineString> decompose(GeometryFactory factory, Coordinate[] coordinates, ArrayList<LineString> strings) {\n- for(Coordinate[] part : 
decompose(+DATELINE, coordinates)) {\n- for(Coordinate[] line : decompose(-DATELINE, part)) {\n+ protected static ArrayList<LineString> decompose(GeometryFactory factory, GeoPoint[] coordinates, ArrayList<LineString> strings) {\n+ for(GeoPoint[] part : decompose(+DATELINE, coordinates)) {\n+ for(GeoPoint[] line : decompose(-DATELINE, part)) {\n strings.add(factory.createLineString(line));\n }\n }\n@@ -83,16 +83,16 @@ protected static ArrayList<LineString> decompose(GeometryFactory factory, Coordi\n * @param coordinates coordinates forming the linestring\n * @return array of linestrings given as coordinate arrays \n */\n- protected static Coordinate[][] decompose(double dateline, Coordinate[] coordinates) {\n+ protected static GeoPoint[][] decompose(double dateline, GeoPoint[] coordinates) {\n int offset = 0;\n- ArrayList<Coordinate[]> parts = new ArrayList<>();\n+ ArrayList<GeoPoint[]> parts = new ArrayList<>();\n \n double shift = coordinates[0].x > DATELINE ? DATELINE : (coordinates[0].x < -DATELINE ? -DATELINE : 0);\n \n for (int i = 1; i < coordinates.length; i++) {\n double t = intersection(coordinates[i-1], coordinates[i], dateline);\n if(!Double.isNaN(t)) {\n- Coordinate[] part;\n+ GeoPoint[] part;\n if(t<1) {\n part = Arrays.copyOfRange(coordinates, offset, i+1);\n part[part.length-1] = Edge.position(coordinates[i-1], coordinates[i], t);\n@@ -111,16 +111,16 @@ protected static Coordinate[][] decompose(double dateline, Coordinate[] coordina\n if(offset == 0) {\n parts.add(shift(shift, coordinates));\n } else if(offset < coordinates.length-1) {\n- Coordinate[] part = Arrays.copyOfRange(coordinates, offset, coordinates.length);\n+ GeoPoint[] part = Arrays.copyOfRange(coordinates, offset, coordinates.length);\n parts.add(shift(shift, part));\n }\n- return parts.toArray(new Coordinate[parts.size()][]);\n+ return parts.toArray(new GeoPoint[parts.size()][]);\n }\n \n- private static Coordinate[] shift(double shift, Coordinate...coordinates) {\n+ private static GeoPoint[] shift(double shift, GeoPoint...coordinates) {\n if(shift != 0) {\n for (int j = 0; j < coordinates.length; j++) {\n- coordinates[j] = new Coordinate(coordinates[j].x - 2 * shift, coordinates[j].y);\n+ coordinates[j] = new GeoPoint(coordinates[j].y, coordinates[j].x - 2 * shift);\n }\n }\n return coordinates;", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BaseLineStringBuilder.java", "status": "modified" }, { "diff": "@@ -20,8 +20,14 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.*;\n+import com.vividsolutions.jts.geom.Geometry;\n+import com.vividsolutions.jts.geom.GeometryFactory;\n+import com.vividsolutions.jts.geom.LinearRing;\n+import com.vividsolutions.jts.geom.MultiPolygon;\n+import com.vividsolutions.jts.geom.Polygon;\n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -67,7 +73,7 @@ public E point(double longitude, double latitude) {\n * @param coordinate coordinate of the new point\n * @return this\n */\n- public E point(Coordinate coordinate) {\n+ public E point(GeoPoint coordinate) {\n shell.point(coordinate);\n return thisRef();\n }\n@@ -77,7 +83,7 @@ public E point(Coordinate coordinate) {\n * @param coordinates coordinates of the new points to add\n * @return this\n */\n- public E 
points(Coordinate...coordinates) {\n+ public E points(GeoPoint...coordinates) {\n shell.points(coordinates);\n return thisRef();\n }\n@@ -121,7 +127,7 @@ public ShapeBuilder close() {\n * \n * @return coordinates of the polygon\n */\n- public Coordinate[][][] coordinates() {\n+ public GeoPoint[][][] coordinates() {\n int numEdges = shell.points.size()-1; // Last point is repeated \n for (int i = 0; i < holes.size(); i++) {\n numEdges += holes.get(i).points.size()-1;\n@@ -170,7 +176,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n \n public Geometry buildGeometry(GeometryFactory factory, boolean fixDateline) {\n if(fixDateline) {\n- Coordinate[][][] polygons = coordinates();\n+ GeoPoint[][][] polygons = coordinates();\n return polygons.length == 1\n ? polygon(factory, polygons[0])\n : multipolygon(factory, polygons);\n@@ -193,16 +199,16 @@ protected Polygon toPolygon(GeometryFactory factory) {\n return factory.createPolygon(shell, holes);\n }\n \n- protected static LinearRing linearRing(GeometryFactory factory, ArrayList<Coordinate> coordinates) {\n- return factory.createLinearRing(coordinates.toArray(new Coordinate[coordinates.size()]));\n+ protected static LinearRing linearRing(GeometryFactory factory, ArrayList<GeoPoint> coordinates) {\n+ return factory.createLinearRing(coordinates.toArray(new GeoPoint[coordinates.size()]));\n }\n \n @Override\n public GeoShapeType type() {\n return TYPE;\n }\n \n- protected static Polygon polygon(GeometryFactory factory, Coordinate[][] polygon) {\n+ protected static Polygon polygon(GeometryFactory factory, GeoPoint[][] polygon) {\n LinearRing shell = factory.createLinearRing(polygon[0]);\n LinearRing[] holes;\n \n@@ -227,7 +233,7 @@ protected static Polygon polygon(GeometryFactory factory, Coordinate[][] polygon\n * @param polygons definition of polygons\n * @return a new Multipolygon\n */\n- protected static MultiPolygon multipolygon(GeometryFactory factory, Coordinate[][][] polygons) {\n+ protected static MultiPolygon multipolygon(GeometryFactory factory, GeoPoint[][][] polygons) {\n Polygon[] polygonSet = new Polygon[polygons.length];\n for (int i = 0; i < polygonSet.length; i++) {\n polygonSet[i] = polygon(factory, polygons[i]);\n@@ -283,18 +289,18 @@ private static int component(final Edge edge, final int id, final ArrayList<Edge\n * @param coordinates Array of coordinates to write the result to\n * @return the coordinates parameter\n */\n- private static Coordinate[] coordinates(Edge component, Coordinate[] coordinates) {\n+ private static GeoPoint[] coordinates(Edge component, GeoPoint[] coordinates) {\n for (int i = 0; i < coordinates.length; i++) {\n coordinates[i] = (component = component.next).coordinate;\n }\n return coordinates;\n }\n \n- private static Coordinate[][][] buildCoordinates(ArrayList<ArrayList<Coordinate[]>> components) {\n- Coordinate[][][] result = new Coordinate[components.size()][][];\n+ private static GeoPoint[][][] buildCoordinates(ArrayList<ArrayList<GeoPoint[]>> components) {\n+ GeoPoint[][][] result = new GeoPoint[components.size()][][];\n for (int i = 0; i < result.length; i++) {\n- ArrayList<Coordinate[]> component = components.get(i);\n- result[i] = component.toArray(new Coordinate[component.size()][]);\n+ ArrayList<GeoPoint[]> component = components.get(i);\n+ result[i] = component.toArray(new GeoPoint[component.size()][]);\n }\n \n if(debugEnabled()) {\n@@ -309,44 +315,45 @@ private static Coordinate[][][] buildCoordinates(ArrayList<ArrayList<Coordinate[\n return result;\n } 
\n \n- private static final Coordinate[][] EMPTY = new Coordinate[0][];\n+ private static final GeoPoint[][] EMPTY = new GeoPoint[0][];\n \n- private static Coordinate[][] holes(Edge[] holes, int numHoles) {\n+ private static GeoPoint[][] holes(Edge[] holes, int numHoles) {\n if (numHoles == 0) {\n return EMPTY;\n }\n- final Coordinate[][] points = new Coordinate[numHoles][];\n+ final GeoPoint[][] points = new GeoPoint[numHoles][];\n \n for (int i = 0; i < numHoles; i++) {\n int length = component(holes[i], -(i+1), null); // mark as visited by inverting the sign\n- points[i] = coordinates(holes[i], new Coordinate[length+1]);\n+ points[i] = coordinates(holes[i], new GeoPoint[length+1]);\n }\n \n return points;\n } \n \n- private static Edge[] edges(Edge[] edges, int numHoles, ArrayList<ArrayList<Coordinate[]>> components) {\n+ private static Edge[] edges(Edge[] edges, int numHoles, ArrayList<ArrayList<GeoPoint[]>> components) {\n ArrayList<Edge> mainEdges = new ArrayList<>(edges.length);\n \n for (int i = 0; i < edges.length; i++) {\n if (edges[i].component >= 0) {\n int length = component(edges[i], -(components.size()+numHoles+1), mainEdges);\n- ArrayList<Coordinate[]> component = new ArrayList<>();\n- component.add(coordinates(edges[i], new Coordinate[length+1]));\n+ ArrayList<GeoPoint[]> component = new ArrayList<>();\n+ component.add(coordinates(edges[i], new GeoPoint[length+1]));\n components.add(component);\n }\n }\n \n return mainEdges.toArray(new Edge[mainEdges.size()]);\n }\n \n- private static Coordinate[][][] compose(Edge[] edges, Edge[] holes, int numHoles) {\n- final ArrayList<ArrayList<Coordinate[]>> components = new ArrayList<>();\n+ private static GeoPoint[][][] compose(Edge[] edges, Edge[] holes, int numHoles) {\n+ final ArrayList<ArrayList<GeoPoint[]>> components = new ArrayList<>();\n assign(holes, holes(holes, numHoles), numHoles, edges(edges, numHoles, components), components);\n return buildCoordinates(components);\n }\n \n- private static void assign(Edge[] holes, Coordinate[][] points, int numHoles, Edge[] edges, ArrayList<ArrayList<Coordinate[]>> components) {\n+ private static void assign(Edge[] holes, GeoPoint[][] points, int numHoles, Edge[] edges, ArrayList<ArrayList<GeoPoint[]>>\n+ components) {\n // Assign Hole to related components\n // To find the new component the hole belongs to all intersections of the\n // polygon edges with a vertical line are calculated. This vertical line\n@@ -461,14 +468,13 @@ private static void connect(Edge in, Edge out) {\n }\n \n private static int createEdges(int component, Orientation orientation, BaseLineStringBuilder<?> shell,\n- BaseLineStringBuilder<?> hole,\n- Edge[] edges, int offset) {\n+ BaseLineStringBuilder<?> hole, Edge[] edges, int edgeOffset) {\n // inner rings (holes) have an opposite direction than the outer rings\n // XOR will invert the orientation for outer ring cases (Truth Table:, T/T = F, T/F = T, F/T = T, F/F = F)\n boolean direction = (component != 0 ^ orientation == Orientation.RIGHT);\n // set the points array accordingly (shell or hole)\n- Coordinate[] points = (hole != null) ? hole.coordinates(false) : shell.coordinates(false);\n- Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, 0, edges, offset, points.length-1);\n+ GeoPoint[] points = (hole != null) ? 
hole.coordinates(false) : shell.coordinates(false);\n+ Edge.ring(component, direction, orientation == Orientation.LEFT, shell, points, edges, edgeOffset, points.length-1);\n return points.length-1;\n }\n \n@@ -477,17 +483,17 @@ public static class Ring<P extends ShapeBuilder> extends BaseLineStringBuilder<R\n private final P parent;\n \n protected Ring(P parent) {\n- this(parent, new ArrayList<Coordinate>());\n+ this(parent, new ArrayList<GeoPoint>());\n }\n \n- protected Ring(P parent, ArrayList<Coordinate> points) {\n+ protected Ring(P parent, ArrayList<GeoPoint> points) {\n super(points);\n this.parent = parent;\n }\n \n public P close() {\n- Coordinate start = points.get(0);\n- Coordinate end = points.get(points.size()-1);\n+ GeoPoint start = points.get(0);\n+ GeoPoint end = points.get(points.size()-1);\n if(start.x != end.x || start.y != end.y) {\n points.add(start);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Circle;\n-import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.unit.DistanceUnit;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n@@ -34,15 +34,15 @@ public class CircleBuilder extends ShapeBuilder {\n \n private DistanceUnit unit;\n private double radius;\n- private Coordinate center;\n+ private GeoPoint center;\n \n /**\n * Set the center of the circle\n * \n * @param center coordinate of the circles center\n * @return this\n */\n- public CircleBuilder center(Coordinate center) {\n+ public CircleBuilder center(GeoPoint center) {\n this.center = center;\n return this;\n }\n@@ -54,7 +54,7 @@ public CircleBuilder center(Coordinate center) {\n * @return this\n */\n public CircleBuilder center(double lon, double lat) {\n- return center(new Coordinate(lon, lat));\n+ return center(new GeoPoint(lat, lon));\n }\n \n /**", "filename": "src/main/java/org/elasticsearch/common/geo/builders/CircleBuilder.java", "status": "modified" }, { "diff": "@@ -20,7 +20,7 @@\n package org.elasticsearch.common.geo.builders;\n \n import com.spatial4j.core.shape.Rectangle;\n-import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -29,8 +29,8 @@ public class EnvelopeBuilder extends ShapeBuilder {\n \n public static final GeoShapeType TYPE = GeoShapeType.ENVELOPE; \n \n- protected Coordinate topLeft;\n- protected Coordinate bottomRight;\n+ protected GeoPoint topLeft;\n+ protected GeoPoint bottomRight;\n \n public EnvelopeBuilder() {\n this(Orientation.RIGHT);\n@@ -40,7 +40,7 @@ public EnvelopeBuilder(Orientation orientation) {\n super(orientation);\n }\n \n- public EnvelopeBuilder topLeft(Coordinate topLeft) {\n+ public EnvelopeBuilder topLeft(GeoPoint topLeft) {\n this.topLeft = topLeft;\n return this;\n }\n@@ -49,7 +49,7 @@ public EnvelopeBuilder topLeft(double longitude, double latitude) {\n return topLeft(coordinate(longitude, latitude));\n }\n \n- public EnvelopeBuilder bottomRight(Coordinate bottomRight) {\n+ public EnvelopeBuilder bottomRight(GeoPoint bottomRight) {\n this.bottomRight = bottomRight;\n return this;\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/EnvelopeBuilder.java", "status": "modified" }, 
{ "diff": "@@ -19,11 +19,10 @@\n \n package org.elasticsearch.common.geo.builders;\n \n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n-import com.spatial4j.core.shape.jts.JtsGeometry;\n-import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.LineString;\n \n@@ -48,8 +47,8 @@ public MultiLineStringBuilder linestring(BaseLineStringBuilder<?> line) {\n return this;\n }\n \n- public Coordinate[][] coordinates() {\n- Coordinate[][] result = new Coordinate[lines.size()][];\n+ public GeoPoint[][] coordinates() {\n+ GeoPoint[][] result = new GeoPoint[lines.size()][];\n for (int i = 0; i < result.length; i++) {\n result[i] = lines.get(i).coordinates(false);\n }\n@@ -113,7 +112,7 @@ public MultiLineStringBuilder end() {\n return collection;\n }\n \n- public Coordinate[] coordinates() {\n+ public GeoPoint[] coordinates() {\n return super.coordinates(false);\n }\n ", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiLineStringBuilder.java", "status": "modified" }, { "diff": "@@ -22,7 +22,7 @@\n import com.spatial4j.core.shape.Point;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.ShapeCollection;\n-import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import java.io.IOException;\n@@ -48,7 +48,7 @@ public Shape build() {\n //Could wrap JtsGeometry but probably slower due to conversions to/from JTS in relate()\n //MultiPoint geometry = FACTORY.createMultiPoint(points.toArray(new Coordinate[points.size()]));\n List<Point> shapes = new ArrayList<>(points.size());\n- for (Coordinate coord : points) {\n+ for (GeoPoint coord : points) {\n shapes.add(SPATIAL_CONTEXT.makePoint(coord.x, coord.y));\n }\n return new ShapeCollection<>(shapes, SPATIAL_CONTEXT);", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPointBuilder.java", "status": "modified" }, { "diff": "@@ -24,10 +24,10 @@\n import java.util.List;\n \n import com.spatial4j.core.shape.ShapeCollection;\n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Shape;\n-import com.vividsolutions.jts.geom.Coordinate;\n \n public class MultiPolygonBuilder extends ShapeBuilder {\n \n@@ -84,7 +84,7 @@ public Shape build() {\n \n if(wrapdateline) {\n for (BasePolygonBuilder<?> polygon : this.polygons) {\n- for(Coordinate[][] part : polygon.coordinates()) {\n+ for(GeoPoint[][] part : polygon.coordinates()) {\n shapes.add(jtsGeometry(PolygonBuilder.polygon(FACTORY, part)));\n }\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/MultiPolygonBuilder.java", "status": "modified" }, { "diff": "@@ -21,18 +21,18 @@\n \n import java.io.IOException;\n \n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n import com.spatial4j.core.shape.Point;\n-import com.vividsolutions.jts.geom.Coordinate;\n \n public class PointBuilder extends ShapeBuilder {\n \n public static final GeoShapeType TYPE = GeoShapeType.POINT;\n \n- private Coordinate coordinate;\n+ private GeoPoint coordinate;\n \n- public PointBuilder coordinate(Coordinate coordinate) {\n+ public PointBuilder coordinate(GeoPoint coordinate) {\n this.coordinate = coordinate;\n return this;\n }", "filename": 
"src/main/java/org/elasticsearch/common/geo/builders/PointBuilder.java", "status": "modified" }, { "diff": "@@ -24,23 +24,22 @@\n import java.util.Arrays;\n import java.util.Collection;\n \n+import org.elasticsearch.common.geo.GeoPoint;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n \n-import com.vividsolutions.jts.geom.Coordinate;\n-\n /**\n * The {@link PointCollection} is an abstract base implementation for all GeoShapes. It simply handles a set of points. \n */\n public abstract class PointCollection<E extends PointCollection<E>> extends ShapeBuilder {\n \n- protected final ArrayList<Coordinate> points;\n+ protected final ArrayList<GeoPoint> points;\n protected boolean translated = false;\n \n protected PointCollection() {\n- this(new ArrayList<Coordinate>());\n+ this(new ArrayList<GeoPoint>());\n }\n \n- protected PointCollection(ArrayList<Coordinate> points) {\n+ protected PointCollection(ArrayList<GeoPoint> points) {\n this.points = points;\n }\n \n@@ -64,28 +63,28 @@ public E point(double longitude, double latitude) {\n * @param coordinate coordinate of the point\n * @return this\n */\n- public E point(Coordinate coordinate) {\n+ public E point(GeoPoint coordinate) {\n this.points.add(coordinate);\n return thisRef();\n }\n \n /**\n * Add a array of points to the collection\n * \n- * @param coordinates array of {@link Coordinate}s to add\n+ * @param coordinates array of {@link GeoPoint}s to add\n * @return this\n */\n- public E points(Coordinate...coordinates) {\n+ public E points(GeoPoint...coordinates) {\n return this.points(Arrays.asList(coordinates));\n }\n \n /**\n * Add a collection of points to the collection\n * \n- * @param coordinates array of {@link Coordinate}s to add\n+ * @param coordinates array of {@link GeoPoint}s to add\n * @return this\n */\n- public E points(Collection<? extends Coordinate> coordinates) {\n+ public E points(Collection<? extends GeoPoint> coordinates) {\n this.points.addAll(coordinates);\n return thisRef();\n }\n@@ -96,8 +95,8 @@ public E points(Collection<? 
extends Coordinate> coordinates) {\n * @param closed if set to true the first point of the array is repeated as last element\n * @return Array of coordinates\n */\n- protected Coordinate[] coordinates(boolean closed) {\n- Coordinate[] result = points.toArray(new Coordinate[points.size() + (closed?1:0)]);\n+ protected GeoPoint[] coordinates(boolean closed) {\n+ GeoPoint[] result = points.toArray(new GeoPoint[points.size() + (closed?1:0)]);\n if(closed) {\n result[result.length-1] = result[0];\n }\n@@ -114,12 +113,12 @@ protected Coordinate[] coordinates(boolean closed) {\n */\n protected XContentBuilder coordinatesToXcontent(XContentBuilder builder, boolean closed) throws IOException {\n builder.startArray();\n- for(Coordinate point : points) {\n+ for(GeoPoint point : points) {\n toXContent(builder, point);\n }\n if(closed) {\n- Coordinate start = points.get(0);\n- Coordinate end = points.get(points.size()-1);\n+ GeoPoint start = points.get(0);\n+ GeoPoint end = points.get(points.size()-1);\n if(start.x != end.x || start.y != end.y) {\n toXContent(builder, points.get(0));\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PointCollection.java", "status": "modified" }, { "diff": "@@ -21,19 +21,19 @@\n \n import java.util.ArrayList;\n \n-import com.vividsolutions.jts.geom.Coordinate;\n+import org.elasticsearch.common.geo.GeoPoint;\n \n public class PolygonBuilder extends BasePolygonBuilder<PolygonBuilder> {\n \n public PolygonBuilder() {\n- this(new ArrayList<Coordinate>(), Orientation.RIGHT);\n+ this(new ArrayList<GeoPoint>(), Orientation.RIGHT);\n }\n \n public PolygonBuilder(Orientation orientation) {\n- this(new ArrayList<Coordinate>(), orientation);\n+ this(new ArrayList<GeoPoint>(), orientation);\n }\n \n- protected PolygonBuilder(ArrayList<Coordinate> points, Orientation orientation) {\n+ protected PolygonBuilder(ArrayList<GeoPoint> points, Orientation orientation) {\n super(orientation);\n this.shell = new Ring<>(this, points);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PolygonBuilder.java", "status": "modified" }, { "diff": "@@ -22,12 +22,12 @@\n import com.spatial4j.core.context.jts.JtsSpatialContext;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n-import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n-import org.apache.commons.lang3.tuple.Pair;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n import org.elasticsearch.common.unit.DistanceUnit.Distance;\n@@ -57,7 +57,7 @@ public abstract class ShapeBuilder implements ToXContent {\n DEBUG = debug;\n }\n \n- public static final double DATELINE = 180;\n+ public static final double DATELINE = GeoUtils.DATELINE;\n // TODO how might we use JtsSpatialContextFactory to configure the context (esp. 
for non-geo)?\n public static final JtsSpatialContext SPATIAL_CONTEXT = JtsSpatialContext.GEO;\n public static final GeometryFactory FACTORY = SPATIAL_CONTEXT.getGeometryFactory();\n@@ -84,8 +84,8 @@ protected ShapeBuilder(Orientation orientation) {\n this.orientation = orientation;\n }\n \n- protected static Coordinate coordinate(double longitude, double latitude) {\n- return new Coordinate(longitude, latitude);\n+ protected static GeoPoint coordinate(double longitude, double latitude) {\n+ return new GeoPoint(latitude, longitude);\n }\n \n protected JtsGeometry jtsGeometry(Geometry geom) {\n@@ -106,15 +106,15 @@ protected JtsGeometry jtsGeometry(Geometry geom) {\n * @return a new {@link PointBuilder}\n */\n public static PointBuilder newPoint(double longitude, double latitude) {\n- return newPoint(new Coordinate(longitude, latitude));\n+ return newPoint(new GeoPoint(latitude, longitude));\n }\n \n /**\n- * Create a new {@link PointBuilder} from a {@link Coordinate}\n+ * Create a new {@link PointBuilder} from a {@link GeoPoint}\n * @param coordinate coordinate defining the position of the point\n * @return a new {@link PointBuilder}\n */\n- public static PointBuilder newPoint(Coordinate coordinate) {\n+ public static PointBuilder newPoint(GeoPoint coordinate) {\n return new PointBuilder().coordinate(coordinate);\n }\n \n@@ -250,7 +250,7 @@ private static CoordinateNode parseCoordinates(XContentParser parser) throws IOE\n token = parser.nextToken();\n double lat = parser.doubleValue();\n token = parser.nextToken();\n- return new CoordinateNode(new Coordinate(lon, lat));\n+ return new CoordinateNode(new GeoPoint(lat, lon));\n } else if (token == XContentParser.Token.VALUE_NULL) {\n throw new ElasticsearchIllegalArgumentException(\"coordinates cannot contain NULL values)\");\n }\n@@ -289,7 +289,7 @@ public static ShapeBuilder parse(XContentParser parser, GeoShapeFieldMapper geoD\n return GeoShapeType.parse(parser, geoDocMapper);\n }\n \n- protected static XContentBuilder toXContent(XContentBuilder builder, Coordinate coordinate) throws IOException {\n+ protected static XContentBuilder toXContent(XContentBuilder builder, GeoPoint coordinate) throws IOException {\n return builder.startArray().value(coordinate.x).value(coordinate.y).endArray();\n }\n \n@@ -309,11 +309,11 @@ public static Orientation orientationFromString(String orientation) {\n }\n }\n \n- protected static Coordinate shift(Coordinate coordinate, double dateline) {\n+ protected static GeoPoint shift(GeoPoint coordinate, double dateline) {\n if (dateline == 0) {\n return coordinate;\n } else {\n- return new Coordinate(-2 * dateline + coordinate.x, coordinate.y);\n+ return new GeoPoint(coordinate.y, -2 * dateline + coordinate.x);\n }\n }\n \n@@ -325,7 +325,7 @@ protected static Coordinate shift(Coordinate coordinate, double dateline) {\n \n /**\n * Calculate the intersection of a line segment and a vertical dateline.\n- * \n+ *\n * @param p1\n * start-point of the line segment\n * @param p2\n@@ -336,7 +336,7 @@ protected static Coordinate shift(Coordinate coordinate, double dateline) {\n * segment intersects with the line segment. 
Otherwise this method\n * returns {@link Double#NaN}\n */\n- protected static final double intersection(Coordinate p1, Coordinate p2, double dateline) {\n+ protected static final double intersection(GeoPoint p1, GeoPoint p2, double dateline) {\n if (p1.x == p2.x && p1.x != dateline) {\n return Double.NaN;\n } else if (p1.x == p2.x && p1.x == dateline) {\n@@ -366,8 +366,8 @@ protected static int intersections(double dateline, Edge[] edges) {\n int numIntersections = 0;\n assert !Double.isNaN(dateline);\n for (int i = 0; i < edges.length; i++) {\n- Coordinate p1 = edges[i].coordinate;\n- Coordinate p2 = edges[i].next.coordinate;\n+ GeoPoint p1 = edges[i].coordinate;\n+ GeoPoint p2 = edges[i].next.coordinate;\n assert !Double.isNaN(p2.x) && !Double.isNaN(p1.x); \n edges[i].intersect = Edge.MAX_COORDINATE;\n \n@@ -384,21 +384,21 @@ protected static int intersections(double dateline, Edge[] edges) {\n /**\n * Node used to represent a tree of coordinates.\n * <p/>\n- * Can either be a leaf node consisting of a Coordinate, or a parent with\n+ * Can either be a leaf node consisting of a GeoPoint, or a parent with\n * children\n */\n protected static class CoordinateNode implements ToXContent {\n \n- protected final Coordinate coordinate;\n+ protected final GeoPoint coordinate;\n protected final List<CoordinateNode> children;\n \n /**\n * Creates a new leaf CoordinateNode\n * \n * @param coordinate\n- * Coordinate for the Node\n+ * GeoPoint for the Node\n */\n- protected CoordinateNode(Coordinate coordinate) {\n+ protected CoordinateNode(GeoPoint coordinate) {\n this.coordinate = coordinate;\n this.children = null;\n }\n@@ -434,17 +434,17 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n \n /**\n- * This helper class implements a linked list for {@link Coordinate}. It contains\n+ * This helper class implements a linked list for {@link GeoPoint}. 
It contains\n * fields for a dateline intersection and component id \n */\n protected static final class Edge {\n- Coordinate coordinate; // coordinate of the start point\n+ GeoPoint coordinate; // coordinate of the start point\n Edge next; // next segment\n- Coordinate intersect; // potential intersection with dateline\n+ GeoPoint intersect; // potential intersection with dateline\n int component = -1; // id of the component this edge belongs to\n- public static final Coordinate MAX_COORDINATE = new Coordinate(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);\n+ public static final GeoPoint MAX_COORDINATE = new GeoPoint(Double.POSITIVE_INFINITY, Double.POSITIVE_INFINITY);\n \n- protected Edge(Coordinate coordinate, Edge next, Coordinate intersection) {\n+ protected Edge(GeoPoint coordinate, Edge next, GeoPoint intersection) {\n this.coordinate = coordinate;\n this.next = next;\n this.intersect = intersection;\n@@ -453,11 +453,11 @@ protected Edge(Coordinate coordinate, Edge next, Coordinate intersection) {\n }\n }\n \n- protected Edge(Coordinate coordinate, Edge next) {\n+ protected Edge(GeoPoint coordinate, Edge next) {\n this(coordinate, next, Edge.MAX_COORDINATE);\n }\n \n- private static final int top(Coordinate[] points, int offset, int length) {\n+ private static final int top(GeoPoint[] points, int offset, int length) {\n int top = 0; // we start at 1 here since top points to 0\n for (int i = 1; i < length; i++) {\n if (points[offset + i].y < points[offset + top].y) {\n@@ -471,29 +471,6 @@ private static final int top(Coordinate[] points, int offset, int length) {\n return top;\n }\n \n- private static final Pair range(Coordinate[] points, int offset, int length) {\n- double minX = points[0].x;\n- double maxX = points[0].x;\n- double minY = points[0].y;\n- double maxY = points[0].y;\n- // compute the bounding coordinates (@todo: cleanup brute force)\n- for (int i = 1; i < length; ++i) {\n- if (points[offset + i].x < minX) {\n- minX = points[offset + i].x;\n- }\n- if (points[offset + i].x > maxX) {\n- maxX = points[offset + i].x;\n- }\n- if (points[offset + i].y < minY) {\n- minY = points[offset + i].y;\n- }\n- if (points[offset + i].y > maxY) {\n- maxY = points[offset + i].y;\n- }\n- }\n- return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n- }\n-\n /**\n * Concatenate a set of points to a polygon\n * \n@@ -503,8 +480,6 @@ private static final Pair range(Coordinate[] points, int offset, int length) {\n * direction of the ring\n * @param points\n * list of points to concatenate\n- * @param pointOffset\n- * index of the first point\n * @param edges\n * Array of edges to write the result to\n * @param edgeOffset\n@@ -513,27 +488,29 @@ private static final Pair range(Coordinate[] points, int offset, int length) {\n * number of points to use\n * @return the edges creates\n */\n- private static Edge[] concat(int component, boolean direction, Coordinate[] points, final int pointOffset, Edge[] edges, final int edgeOffset,\n- int length) {\n+ private static Edge[] concat(int component, boolean direction, GeoPoint[] points, Edge[] edges, final int edgeOffset,\n+ int length) {\n assert edges.length >= length+edgeOffset;\n- assert points.length >= length+pointOffset;\n- edges[edgeOffset] = new Edge(points[pointOffset], null);\n- for (int i = 1; i < length; i++) {\n+ assert points.length >= length;\n+ edges[edgeOffset] = new Edge(points[0], null);\n+ int edgeEnd = edgeOffset + length;\n+\n+ for (int i = edgeOffset+1, p = 1; i < edgeEnd; ++i, ++p) {\n if (direction) {\n- 
edges[edgeOffset + i] = new Edge(points[pointOffset + i], edges[edgeOffset + i - 1]);\n- edges[edgeOffset + i].component = component;\n+ edges[i] = new Edge(points[p], edges[i - 1]);\n+ edges[i].component = component;\n } else {\n- edges[edgeOffset + i - 1].next = edges[edgeOffset + i] = new Edge(points[pointOffset + i], null);\n- edges[edgeOffset + i - 1].component = component;\n+ edges[i - 1].next = edges[i] = new Edge(points[p], null);\n+ edges[i - 1].component = component;\n }\n }\n \n if (direction) {\n- edges[edgeOffset].next = edges[edgeOffset + length - 1];\n+ edges[edgeOffset].next = edges[edgeEnd - 1];\n edges[edgeOffset].component = component;\n } else {\n- edges[edgeOffset + length - 1].next = edges[edgeOffset];\n- edges[edgeOffset + length - 1].component = component;\n+ edges[edgeEnd - 1].next = edges[edgeOffset];\n+ edges[edgeEnd - 1].component = component;\n }\n \n return edges;\n@@ -544,82 +521,47 @@ private static Edge[] concat(int component, boolean direction, Coordinate[] poin\n * \n * @param points\n * array of point\n- * @param offset\n- * index of the first point\n * @param length\n * number of points\n * @return Array of edges\n */\n protected static Edge[] ring(int component, boolean direction, boolean handedness, BaseLineStringBuilder<?> shell,\n- Coordinate[] points, int offset, Edge[] edges, int toffset, int length) {\n+ GeoPoint[] points, Edge[] edges, int edgeOffset, int length) {\n // calculate the direction of the points:\n- // find the point a the top of the set and check its\n- // neighbors orientation. So direction is equivalent\n- // to clockwise/counterclockwise\n- final int top = top(points, offset, length);\n- final int prev = (offset + ((top + length - 1) % length));\n- final int next = (offset + ((top + 1) % length));\n- boolean orientation = points[offset + prev].x > points[offset + next].x;\n-\n- // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness) \n- // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n- // thus if orientation is computed as cw, the logic will translate points across dateline\n- // and convert to a right handed system\n-\n- // compute the bounding box and calculate range\n- Pair<Pair, Pair> range = range(points, offset, length);\n- final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n- // translate the points if the following is true\n- // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres \n- // (translation would result in a collapsed poly)\n- // 2. 
the shell of the candidate hole has been translated (to preserve the coordinate system)\n- boolean incorrectOrientation = component == 0 && handedness != orientation;\n- if ( (incorrectOrientation && (rng > DATELINE && rng != 2*DATELINE)) || (shell.translated && component != 0)) {\n- translate(points);\n- // flip the translation bit if the shell is being translated\n- if (component == 0) {\n- shell.translated = true;\n- }\n- // correct the orientation post translation (ccw for shell, cw for holes)\n- if (component == 0 || (component != 0 && handedness == orientation)) {\n- orientation = !orientation;\n- }\n- }\n- return concat(component, direction ^ orientation, points, offset, edges, toffset, length);\n- }\n+ boolean orientation = GeoUtils.computePolyOrientation(points, length);\n+ boolean corrected = GeoUtils.correctPolyAmbiguity(points, handedness, orientation, component, length,\n+ shell.translated);\n \n- /**\n- * Transforms coordinates in the eastern hemisphere (-180:0) to a (180:360) range \n- * @param points\n- */\n- protected static void translate(Coordinate[] points) {\n- for (Coordinate c : points) {\n- if (c.x < 0) {\n- c.x += 2*DATELINE;\n+ // correct the orientation post translation (ccw for shell, cw for holes)\n+ if (corrected && (component == 0 || (component != 0 && handedness == orientation))) {\n+ if (component == 0) {\n+ shell.translated = corrected;\n }\n+ orientation = !orientation;\n }\n+ return concat(component, direction ^ orientation, points, edges, edgeOffset, length);\n }\n \n /**\n * Set the intersection of this line segment to the given position\n * \n * @param position\n * position of the intersection [0..1]\n- * @return the {@link Coordinate} of the intersection\n+ * @return the {@link GeoPoint} of the intersection\n */\n- protected Coordinate intersection(double position) {\n+ protected GeoPoint intersection(double position) {\n return intersect = position(coordinate, next.coordinate, position);\n }\n \n- public static Coordinate position(Coordinate p1, Coordinate p2, double position) {\n+ public static GeoPoint position(GeoPoint p1, GeoPoint p2, double position) {\n if (position == 0) {\n return p1;\n } else if (position == 1) {\n return p2;\n } else {\n final double x = p1.x + position * (p2.x - p1.x);\n final double y = p1.y + position * (p2.y - p1.y);\n- return new Coordinate(x, y);\n+ return new GeoPoint(y, x);\n }\n }\n \n@@ -793,12 +735,12 @@ protected static EnvelopeBuilder parseEnvelope(CoordinateNode coordinates, Orien\n \"geo_shape ('envelope') when expecting an array of 2 coordinates\");\n }\n // verify coordinate bounds, correct if necessary\n- Coordinate uL = coordinates.children.get(0).coordinate;\n- Coordinate lR = coordinates.children.get(1).coordinate;\n+ GeoPoint uL = coordinates.children.get(0).coordinate;\n+ GeoPoint lR = coordinates.children.get(1).coordinate;\n if (((lR.x < uL.x) || (uL.y < lR.y))) {\n- Coordinate uLtmp = uL;\n- uL = new Coordinate(Math.min(uL.x, lR.x), Math.max(uL.y, lR.y));\n- lR = new Coordinate(Math.max(uLtmp.x, lR.x), Math.min(uLtmp.y, lR.y));\n+ GeoPoint uLtmp = uL;\n+ uL = new GeoPoint(Math.max(uL.y, lR.y), Math.min(uL.x, lR.x));\n+ lR = new GeoPoint(Math.min(uLtmp.y, lR.y), Math.max(uLtmp.x, lR.x));\n }\n return newEnvelope(orientation).topLeft(uL).bottomRight(lR);\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -294,6 +294,10 @@ protected void doXContentBody(XContentBuilder builder, boolean includeDefaults,\n if (includeDefaults 
|| defaultStrategy.getDistErrPct() != Defaults.DISTANCE_ERROR_PCT) {\n builder.field(Names.DISTANCE_ERROR_PCT, defaultStrategy.getDistErrPct());\n }\n+\n+ if (includeDefaults || shapeOrientation != Defaults.ORIENTATION ) {\n+ builder.field(Names.ORIENTATION, shapeOrientation);\n+ }\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/mapper/geo/GeoShapeFieldMapper.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n import org.apache.lucene.util.Bits;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.GeoUtils;\n import org.elasticsearch.index.fielddata.IndexGeoPointFieldData;\n import org.elasticsearch.index.fielddata.MultiGeoPointValues;\n \n@@ -93,15 +94,27 @@ protected boolean matchDoc(int doc) {\n \n private static boolean pointInPolygon(GeoPoint[] points, double lat, double lon) {\n boolean inPoly = false;\n-\n+ // @TODO handedness will be an option provided by the parser\n+ boolean corrected = GeoUtils.correctPolyAmbiguity(points, false);\n+ GeoPoint p = (corrected) ?\n+ GeoUtils.convertToGreatCircle(lat, lon) :\n+ new GeoPoint(lat, lon);\n+\n+ GeoPoint pp0 = (corrected) ? GeoUtils.convertToGreatCircle(points[0]) : points[0] ;\n+ GeoPoint pp1;\n+ // simple even-odd PIP computation\n+ // 1. Determine if point is contained in the longitudinal range\n+ // 2. Determine whether point crosses the edge by computing the latitudinal delta\n+ // between the end-point of a parallel vector (originating at the point) and the\n+ // y-component of the edge sink\n for (int i = 1; i < points.length; i++) {\n- if (points[i].lon() < lon && points[i-1].lon() >= lon\n- || points[i-1].lon() < lon && points[i].lon() >= lon) {\n- if (points[i].lat() + (lon - points[i].lon()) /\n- (points[i-1].lon() - points[i].lon()) * (points[i-1].lat() - points[i].lat()) < lat) {\n+ pp1 = points[i];\n+ if (pp1.x < p.x && pp0.x >= p.x || pp0.x < p.x && pp1.x >= p.x) {\n+ if (pp1.y + (p.x - pp1.x) / (pp0.x - pp1.x) * (pp0.y - pp1.y) < p.y) {\n inPoly = !inPoly;\n }\n }\n+ pp0 = pp1;\n }\n return inPoly;\n }", "filename": "src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java", "status": "modified" }, { "diff": "@@ -26,7 +26,13 @@\n import com.spatial4j.core.shape.ShapeCollection;\n import com.spatial4j.core.shape.jts.JtsGeometry;\n import com.spatial4j.core.shape.jts.JtsPoint;\n-import com.vividsolutions.jts.geom.*;\n+import com.vividsolutions.jts.geom.Geometry;\n+import com.vividsolutions.jts.geom.GeometryFactory;\n+import com.vividsolutions.jts.geom.LineString;\n+import com.vividsolutions.jts.geom.LinearRing;\n+import com.vividsolutions.jts.geom.MultiLineString;\n+import com.vividsolutions.jts.geom.Point;\n+import com.vividsolutions.jts.geom.Polygon;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n@@ -57,7 +63,7 @@ public void testParse_simplePoint() throws IOException {\n .startArray(\"coordinates\").value(100.0).value(0.0).endArray()\n .endObject().string();\n \n- Point expected = GEOMETRY_FACTORY.createPoint(new Coordinate(100.0, 0.0));\n+ Point expected = GEOMETRY_FACTORY.createPoint(new GeoPoint(0.0, 100.0));\n assertGeometryEquals(new JtsPoint(expected, SPATIAL_CONTEXT), pointGeoJson);\n }\n \n@@ -69,12 +75,12 @@ public void testParse_lineString() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<Coordinate> lineCoordinates = new 
ArrayList<>();\n- lineCoordinates.add(new Coordinate(100, 0));\n- lineCoordinates.add(new Coordinate(101, 1));\n+ List<GeoPoint> lineCoordinates = new ArrayList<>();\n+ lineCoordinates.add(new GeoPoint(0, 100));\n+ lineCoordinates.add(new GeoPoint(1, 101));\n \n LineString expected = GEOMETRY_FACTORY.createLineString(\n- lineCoordinates.toArray(new Coordinate[lineCoordinates.size()]));\n+ lineCoordinates.toArray(new GeoPoint[lineCoordinates.size()]));\n assertGeometryEquals(jtsGeom(expected), lineGeoJson);\n }\n \n@@ -93,13 +99,13 @@ public void testParse_multiLineString() throws IOException {\n .endObject().string();\n \n MultiLineString expected = GEOMETRY_FACTORY.createMultiLineString(new LineString[]{\n- GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n- new Coordinate(100, 0),\n- new Coordinate(101, 1),\n+ GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n+ new GeoPoint(0, 100),\n+ new GeoPoint(1, 101),\n }),\n- GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n- new Coordinate(102, 2),\n- new Coordinate(103, 3),\n+ GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n+ new GeoPoint(2, 102),\n+ new GeoPoint(3, 103),\n }),\n });\n assertGeometryEquals(jtsGeom(expected), multilinesGeoJson);\n@@ -173,14 +179,14 @@ public void testParse_polygonNoHoles() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<Coordinate> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(100, 0));\n- shellCoordinates.add(new Coordinate(101, 0));\n- shellCoordinates.add(new Coordinate(101, 1));\n- shellCoordinates.add(new Coordinate(100, 1));\n- shellCoordinates.add(new Coordinate(100, 0));\n+ List<GeoPoint> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new GeoPoint(0, 100));\n+ shellCoordinates.add(new GeoPoint(0, 101));\n+ shellCoordinates.add(new GeoPoint(1, 101));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new GeoPoint(0, 100));\n \n- LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, null);\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n@@ -567,25 +573,25 @@ public void testParse_polygonWithHole() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<Coordinate> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(100, 0));\n- shellCoordinates.add(new Coordinate(101, 0));\n- shellCoordinates.add(new Coordinate(101, 1));\n- shellCoordinates.add(new Coordinate(100, 1));\n- shellCoordinates.add(new Coordinate(100, 0));\n+ List<GeoPoint> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new GeoPoint(0, 100));\n+ shellCoordinates.add(new GeoPoint(0, 101));\n+ shellCoordinates.add(new GeoPoint(1, 101));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new GeoPoint(0, 100));\n \n- List<Coordinate> holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ List<GeoPoint> holeCoordinates = new ArrayList<>();\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 
100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n \n LinearRing shell = GEOMETRY_FACTORY.createLinearRing(\n- shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n LinearRing[] holes = new LinearRing[1];\n holes[0] = GEOMETRY_FACTORY.createLinearRing(\n- holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n+ holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n Polygon expected = GEOMETRY_FACTORY.createPolygon(shell, holes);\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n@@ -657,34 +663,34 @@ public void testParse_multiPolygon() throws IOException {\n .endArray()\n .endObject().string();\n \n- List<Coordinate> shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(100, 0));\n- shellCoordinates.add(new Coordinate(101, 0));\n- shellCoordinates.add(new Coordinate(101, 1));\n- shellCoordinates.add(new Coordinate(100, 1));\n- shellCoordinates.add(new Coordinate(100, 0));\n+ List<GeoPoint> shellCoordinates = new ArrayList<>();\n+ shellCoordinates.add(new GeoPoint(0, 100));\n+ shellCoordinates.add(new GeoPoint(0, 101));\n+ shellCoordinates.add(new GeoPoint(1, 101));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new GeoPoint(0, 100));\n \n- List<Coordinate> holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n+ List<GeoPoint> holeCoordinates = new ArrayList<>();\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n \n- LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ LinearRing shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n LinearRing[] holes = new LinearRing[1];\n- holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n+ holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n Polygon withHoles = GEOMETRY_FACTORY.createPolygon(shell, holes);\n \n shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(102, 3));\n- shellCoordinates.add(new Coordinate(103, 3));\n- shellCoordinates.add(new Coordinate(103, 2));\n- shellCoordinates.add(new Coordinate(102, 2));\n- shellCoordinates.add(new Coordinate(102, 3));\n+ shellCoordinates.add(new GeoPoint(3, 102));\n+ shellCoordinates.add(new GeoPoint(3, 103));\n+ shellCoordinates.add(new GeoPoint(2, 103));\n+ shellCoordinates.add(new GeoPoint(2, 102));\n+ shellCoordinates.add(new GeoPoint(3, 102));\n \n \n- shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n Polygon withoutHoles = GEOMETRY_FACTORY.createPolygon(shell, null);\n \n Shape expected = shapeCollection(withoutHoles, withHoles);\n@@ -716,22 +722,22 @@ public void testParse_multiPolygon() throws 
IOException {\n .endObject().string();\n \n shellCoordinates = new ArrayList<>();\n- shellCoordinates.add(new Coordinate(100, 1));\n- shellCoordinates.add(new Coordinate(101, 1));\n- shellCoordinates.add(new Coordinate(101, 0));\n- shellCoordinates.add(new Coordinate(100, 0));\n- shellCoordinates.add(new Coordinate(100, 1));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n+ shellCoordinates.add(new GeoPoint(1, 101));\n+ shellCoordinates.add(new GeoPoint(0, 101));\n+ shellCoordinates.add(new GeoPoint(0, 100));\n+ shellCoordinates.add(new GeoPoint(1, 100));\n \n holeCoordinates = new ArrayList<>();\n- holeCoordinates.add(new Coordinate(100.2, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.2));\n- holeCoordinates.add(new Coordinate(100.8, 0.8));\n- holeCoordinates.add(new Coordinate(100.2, 0.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.2));\n+ holeCoordinates.add(new GeoPoint(0.2, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.8));\n+ holeCoordinates.add(new GeoPoint(0.8, 100.2));\n \n- shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new Coordinate[shellCoordinates.size()]));\n+ shell = GEOMETRY_FACTORY.createLinearRing(shellCoordinates.toArray(new GeoPoint[shellCoordinates.size()]));\n holes = new LinearRing[1];\n- holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new Coordinate[holeCoordinates.size()]));\n+ holes[0] = GEOMETRY_FACTORY.createLinearRing(holeCoordinates.toArray(new GeoPoint[holeCoordinates.size()]));\n withHoles = GEOMETRY_FACTORY.createPolygon(shell, holes);\n \n assertGeometryEquals(jtsGeom(withHoles), multiPolygonGeoJson);\n@@ -757,12 +763,12 @@ public void testParse_geometryCollection() throws IOException {\n .string();\n \n Shape[] expected = new Shape[2];\n- LineString expectedLineString = GEOMETRY_FACTORY.createLineString(new Coordinate[]{\n- new Coordinate(100, 0),\n- new Coordinate(101, 1),\n+ LineString expectedLineString = GEOMETRY_FACTORY.createLineString(new GeoPoint[]{\n+ new GeoPoint(0, 100),\n+ new GeoPoint(1, 101),\n });\n expected[0] = jtsGeom(expectedLineString);\n- Point expectedPoint = GEOMETRY_FACTORY.createPoint(new Coordinate(102.0, 2.0));\n+ Point expectedPoint = GEOMETRY_FACTORY.createPoint(new GeoPoint(2.0, 102.0));\n expected[1] = new JtsPoint(expectedPoint, SPATIAL_CONTEXT);\n \n //equals returns true only if geometries are in the same order\n@@ -785,7 +791,7 @@ public void testThatParserExtractsCorrectTypeAndCoordinatesFromArbitraryJson() t\n .startObject(\"lala\").field(\"type\", \"NotAPoint\").endObject()\n .endObject().string();\n \n- Point expected = GEOMETRY_FACTORY.createPoint(new Coordinate(100.0, 0.0));\n+ Point expected = GEOMETRY_FACTORY.createPoint(new GeoPoint(0.0, 100.0));\n assertGeometryEquals(new JtsPoint(expected, SPATIAL_CONTEXT), pointGeoJson);\n }\n ", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" }, { "diff": "@@ -24,7 +24,6 @@\n import com.spatial4j.core.shape.Rectangle;\n import com.spatial4j.core.shape.Shape;\n import com.spatial4j.core.shape.impl.PointImpl;\n-import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.LineString;\n import com.vividsolutions.jts.geom.Polygon;\n import org.elasticsearch.common.geo.builders.PolygonBuilder;\n@@ -64,38 +63,39 @@ public void testNewPolygon() {\n .point(-45, 30).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- 
assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n- assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n- assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n- assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n+ assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n+ assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n+ assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n+ assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n }\n \n @Test\n public void testNewPolygon_coordinate() {\n Polygon polygon = ShapeBuilder.newPolygon()\n- .point(new Coordinate(-45, 30))\n- .point(new Coordinate(45, 30))\n- .point(new Coordinate(45, -30))\n- .point(new Coordinate(-45, -30))\n- .point(new Coordinate(-45, 30)).toPolygon();\n+ .point(new GeoPoint(30, -45))\n+ .point(new GeoPoint(30, 45))\n+ .point(new GeoPoint(-30, 45))\n+ .point(new GeoPoint(-30, -45))\n+ .point(new GeoPoint(30, -45)).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n- assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n- assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n- assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n+ assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n+ assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n+ assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n+ assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n }\n \n @Test\n public void testNewPolygon_coordinates() {\n Polygon polygon = ShapeBuilder.newPolygon()\n- .points(new Coordinate(-45, 30), new Coordinate(45, 30), new Coordinate(45, -30), new Coordinate(-45, -30), new Coordinate(-45, 30)).toPolygon();\n+ .points(new GeoPoint(30, -45), new GeoPoint(30, 45), new GeoPoint(-30, 45), new GeoPoint(-30, -45),\n+ new GeoPoint(30, -45)).toPolygon();\n \n LineString exterior = polygon.getExteriorRing();\n- assertEquals(exterior.getCoordinateN(0), new Coordinate(-45, 30));\n- assertEquals(exterior.getCoordinateN(1), new Coordinate(45, 30));\n- assertEquals(exterior.getCoordinateN(2), new Coordinate(45, -30));\n- assertEquals(exterior.getCoordinateN(3), new Coordinate(-45, -30));\n+ assertEquals(exterior.getCoordinateN(0), new GeoPoint(30, -45));\n+ assertEquals(exterior.getCoordinateN(1), new GeoPoint(30, 45));\n+ assertEquals(exterior.getCoordinateN(2), new GeoPoint(-30, 45));\n+ assertEquals(exterior.getCoordinateN(3), new GeoPoint(-30, -45));\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java", "status": "modified" } ] }
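A brief aside on the recurring pattern in the diff above: the migration from JTS `Coordinate` to Elasticsearch `GeoPoint` swaps the constructor argument order. `Coordinate(x, y)` takes longitude first, while `GeoPoint(lat, lon)` takes latitude first, which is why calls such as `center(new GeoPoint(lat, lon))` flip the arguments. The sketch below is illustrative only and not part of the PR; it assumes the stock `GeoPoint(lat, lon)` constructor with `lat()`/`lon()` accessors, and the class and method names are made up for this example.

```java
// Minimal sketch (not part of the PR): the argument-order difference the diff above
// has to handle when migrating from JTS Coordinate to Elasticsearch GeoPoint.
import com.vividsolutions.jts.geom.Coordinate;
import org.elasticsearch.common.geo.GeoPoint;

public class CoordinateOrderSketch {

    // JTS Coordinate follows the x = longitude, y = latitude convention used here.
    static GeoPoint toGeoPoint(Coordinate c) {
        return new GeoPoint(c.y, c.x); // note the swap: GeoPoint takes (lat, lon)
    }

    static Coordinate toCoordinate(GeoPoint p) {
        return new Coordinate(p.lon(), p.lat()); // back to (x = lon, y = lat)
    }

    public static void main(String[] args) {
        Coordinate c = new Coordinate(100.0, 0.0);    // lon = 100, lat = 0
        GeoPoint p = toGeoPoint(c);
        System.out.println(p.lat() + ", " + p.lon()); // prints: 0.0, 100.0
    }
}
```

Keeping the conversion in one helper makes it harder to accidentally pass (lon, lat) into `GeoPoint`, which is exactly the kind of mistake the swapped constructor calls in the diff are guarding against.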
{ "body": "Given this \"small\" example, you can create the mapping, index two simple documents, and aggregate against it. This was tested on 1.4.2 (and internally reported on 1.3.1).\n\nThe mapping defines two nested objects -- one within the other (`comments` and `comments.tags`). The `dates` object just contains some queried and basic aggregated details.\n\nThe data is setup so that it should be immediately obvious if it's wrong or not. Each comment is given a unique ID with `comments.cid` (1, 2, 3, and 4). Only the odd-numbered (1 and 3) comments have tags added to them. The even-numbered (2 and 4) comments have a `comments.identifier` meant to be found in the aggregation. The tags are given unique IDs that reflect their parent comment with `comments.tags.tid` (22 and 44; extended versions of cid just to be unique to avoid confusion).\n\nThere are two nested aggregations toward the bottom of the aggregation, and the inner one is unexpectedly passed the tag associated with `comments.cid` of `2` even though that comment does not match parent aggregations (as evidenced by printouts on the command line seen after the setup)!\n\n```\n# Delete the index/mapping\nDELETE /agg-test\n\nPUT /agg-test\n{\n \"mappings\": {\n \"provider\": {\n \"properties\": {\n \"comments\": {\n \"type\": \"nested\",\n \"properties\": {\n \"cid\": {\n \"type\": \"long\"\n },\n \"identifier\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"tags\": {\n \"type\": \"nested\",\n \"properties\": {\n \"tid\": {\n \"type\": \"long\"\n },\n \"name\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n },\n \"dates\": {\n \"properties\": {\n \"day\": {\n \"type\": \"date\",\n \"format\": \"dateOptionalTime\"\n },\n \"month\": {\n \"properties\": {\n \"end\": {\n \"type\": \"date\",\n \"format\": \"dateOptionalTime\"\n },\n \"label\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"start\": {\n \"type\": \"date\",\n \"format\": \"dateOptionalTime\"\n }\n }\n }\n }\n }\n }\n }\n }\n}\n\n# Feed 2 documents.\nPOST /agg-test/provider/_bulk\n{\"index\": {\"_id\": \"12208\"}}\n{\"_id\": 12208, \"dates\": {\"month\": {\"label\": \"2014-11\", \"end\": \"2014-11-30\", \"start\": \"2014-11-01\"}, \"day\": \"2014-11-30\"}, \"comments\": [{\"cid\": 3,\"identifier\": \"29111\"}, {\"cid\": 4,\"tags\": [{\"tid\" :44,\"name\": \"Roles\"}], \"identifier\": \"29101\"}]}\n{\"index\": {\"_id\": \"12222\"}}\n{\"_id\": 12222, \"dates\": {\"month\": {\"label\": \"2014-12\", \"end\": \"2014-12-31\", \"start\": \"2014-12-01\"}, \"day\": \"2014-12-03\"}, \"comments\": [{\"cid\": 1, \"identifier\": \"29111\"}, {\"cid\": 2,\"tags\": [{\"tid\" : 22, \"name\": \"DataChannels\"}], \"identifier\": \"29101\"}]}\n\n# Verify the mappings if you want \nGET /agg-test/_mappings\n\n# Run query, it should return no comments.tags because no comment has identifier 29111 _and_ a tag, but it will a tag!\nGET /agg-test/provider/_search\n{\n \"aggregations\": {\n \"startDate\": {\n \"terms\": {\n \"field\": \"dates.month.start\",\n \"size\": 50\n },\n \"aggregations\": {\n \"endDate\": {\n \"terms\": {\n \"field\": \"dates.month.end\",\n \"size\": 50\n },\n \"aggregations\": {\n \"period\": {\n \"terms\": {\n \"field\": \"dates.month.label\",\n \"size\": 50\n },\n \"aggregations\": {\n \"ctxt_idfier_nested\": {\n \"nested\": {\n \"path\": \"comments\"\n },\n \"aggregations\": {\n \"comment_filter\": {\n \"filter\": {\n \"term\": {\n \"comments.identifier\": \"29111\"\n }\n },\n \"aggregations\": {\n 
\"comment_script\": {\n \"terms\": {\n \"script\": \"println \\\"Comment Map: ${doc.get('cid')}\\\"; println \\\"ID: ${doc['comments.cid'].value}\\\"; return doc['comments.identifier'].value\",\n \"size\": 50\n }\n },\n \"nested_tags\": {\n \"nested\": {\n \"path\": \"comments.tags\"\n },\n \"aggregations\": {\n \"tag_script\": {\n \"terms\": {\n \"script\": \"println \\\"Tag Map: ${doc.get('tid')}\\\"; println \\\"ID: ${doc['comments.tags.tid'].value}\\\"; return doc['comments.tags.name'].value\",\n \"size\": 50\n }\n },\n \"tag\": {\n \"terms\": {\n \"field\": \"comments.tags.name\",\n \"size\": 50\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nThe scripts exist to verify steps along the way, but this prints out (on the ES node's command line):\n\n```\nComment Map: [3]\nID: 3\nComment Map: [1]\nID: 1\nTag Map: [22]\nID: 22\n```\n\nSince `Comment Map: [2]\\nID: 2` never appeared in the output, there is no reason that the Tag should have been processed!\n\nWhat's more baffling is that you can remove the top two aggregations and it will execute properly:\n\n```\nGET /agg-test/provider/_search\n{\n \"aggregations\": {\n \"period\": {\n \"terms\": {\n \"field\": \"dates.month.label\",\n \"size\": 50\n },\n \"aggregations\": {\n \"ctxt_idfier_nested\": {\n \"nested\": {\n \"path\": \"comments\"\n },\n \"aggregations\": {\n \"comment_filter\": {\n \"filter\": {\n \"term\": {\n \"comments.identifier\": \"29111\"\n }\n },\n \"aggregations\": {\n \"comment_script\": {\n \"terms\": {\n \"script\": \"println \\\"Comment Map: ${doc.get('cid')}\\\"; println \\\"ID: ${doc['comments.cid'].value}\\\"; return doc['comments.identifier'].value\",\n \"size\": 50\n }\n },\n \"nested_tags\": {\n \"nested\": {\n \"path\": \"comments.tags\"\n },\n \"aggregations\": {\n \"tag_script\": {\n \"terms\": {\n \"script\": \"println \\\"Tag Map: ${doc.get('tid')}\\\"; println \\\"ID: ${doc['comments.tags.tid'].value}\\\"; return doc['comments.tags.name'].value\",\n \"size\": 50\n }\n },\n \"tag\": {\n \"terms\": {\n \"field\": \"comments.tags.name\",\n \"size\": 50\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nPrints out \n\n```\nComment Map: [3]\nID: 3\nComment Map: [1]\nID: 1\n```\n\nWithout improperly finding any tags.\n\nEDITED: I fixed the mapping, which must have been copy/pasted wrong.\n", "comments": [ { "body": "The mapping seems to be wrong in the above (it doesn't parse) I think it should be:\n\n```\nPUT /agg-test\n{\n \"mappings\": {\n \"provider\": {\n \"properties\": {\n \"comments\": {\n \"type\": \"nested\",\n \"properties\": {\n \"cid\": {\n \"type\": \"long\"\n },\n \"identifier\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"tags\": {\n \"type\": \"nested\",\n \"properties\": {\n \"tid\": {\n \"type\": \"long\"\n },\n \"name\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n }\n }\n }\n }\n },\n \"dates\": {\n \"properties\": {\n \"day\": {\n \"type\": \"date\",\n \"format\": \"dateOptionalTime\"\n },\n \"month\": {\n \"properties\": {\n \"end\": {\n \"type\": \"date\",\n \"format\": \"dateOptionalTime\"\n },\n \"label\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"start\": {\n \"type\": \"date\",\n \"format\": \"dateOptionalTime\"\n }\n }\n }\n }\n }\n }\n }\n }\n}\n```\n", "created_at": "2015-01-14T12:09:35Z" }, { "body": "Also, this issue doesn't seem to reproduce on the current master branch\n", "created_at": "2015-01-14T12:10:46Z" }, { "body": "@colings86 @pickypg I think I found the bug. 
The `nested` aggregator works based on a wrong assumption. It assumes: https://github.com/elasticsearch/elasticsearch/blob/a56520d26d70ae731ed929660e29df119290114f/src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java#L65\n\nBut this isn't the case if buckets are introduced during the execution of the aggs. If buckets are created before the actual execution, this assumption holds. This results in the parent filter not being resolved correctly for nested `nested` aggs. (It resolves the root level as the parent filter.)\n\nI'll open a PR and move the resolving of the parentFilter to the collect method instead; in that method we can assume that the child filter of a parent nested aggregator has already been resolved, which can then be used as the parent filter for the nested aggregators nested below it.\n", "created_at": "2015-01-15T11:09:01Z" } ], "number": 9280, "title": "Deeply nested aggregations treated like unnested document aggregations" }
{ "body": "The `nested` aggregator's parent filter isn't resolved properly when the nested agg gets created on the fly for buckets that are constructed during query execution.\n\nThe fix is to move the parent filter resolution from the nextReader(...) method to the collect(...) method, because only then is the child filter of any enclosing nested aggregator properly instantiated and usable as the parent filter.\n\nCloses #9280\n", "number": 9335, "review_comments": [], "title": "The parent filter of the nested aggregator isn't resolved correctly all the time" }
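For readers skimming past the diff that follows: the change amounts to a "resolve on first use" pattern. The parent filter lookup is deferred from the per-segment setNextReader(...) callback to the first collect(...) call, at which point the enclosing nested aggregator and its child filter are guaranteed to exist, even when sub-aggregators are created lazily per bucket. Below is a minimal, self-contained sketch of that pattern under hypothetical names; it is not the Elasticsearch code (that is in the NestedAggregator diff below).

```java
// Simplified illustration (hypothetical names, not the actual NestedAggregator):
// resolve a dependency lazily on the first collect() call instead of eagerly in a
// per-segment callback, so that objects constructed later (the enclosing "parent")
// are available by the time the value is actually needed.
import java.util.function.Supplier;

class LazyParentFilterSketch {

    private Object parentFilter;                 // resolved on first use
    private final Supplier<Object> parentLookup; // stand-in for findClosestNestedPath(parent)

    LazyParentFilterSketch(Supplier<Object> parentLookup) {
        this.parentLookup = parentLookup;
    }

    void setNextReader(Object readerContext) {
        // Eager resolution here is unsafe: when sub-aggregators are created lazily
        // per bucket, the enclosing aggregator's filter may not exist yet.
    }

    void collect(int doc) {
        if (parentFilter == null) {
            // By the time documents are collected, the enclosing aggregator has been
            // constructed, so its filter can be looked up safely; fall back to a
            // root-level marker when there is no enclosing nested scope.
            Object resolved = parentLookup.get();
            parentFilter = (resolved != null) ? resolved : "NON_NESTED_DOCS";
        }
        // ... use parentFilter to find the parent document of `doc` ...
    }
}
```

The cost of this pattern is one null check per collect() call, which is negligible compared to resolving the filter too early and silently attributing child documents to the wrong parent scope, which is the behaviour issue #9280 describes.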
{ "commits": [ { "message": "aggs: The `nested` aggregator's parent filter is n't resolved properly in the case the nested agg gets created on the fly for buckets that are constructed during query execution.\n\nThe fix is the move the parent filter resolving from the nextReader(...) method to the collect(...) method, because only then any parent nested filter's parent filter is then properly instantiated.\n\nCloses #9280\nCloses #9335" } ], "files": [ { "diff": "@@ -48,6 +48,8 @@ public class NestedAggregator extends SingleBucketAggregator implements ReaderCo\n private DocIdSetIterator childDocs;\n private FixedBitSet parentDocs;\n \n+ private AtomicReaderContext reader;\n+\n public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper objectMapper, AggregationContext aggregationContext, Aggregator parentAggregator) {\n super(name, factories, aggregationContext, parentAggregator);\n this.parentAggregator = parentAggregator;\n@@ -67,20 +69,8 @@ public NestedAggregator(String name, AggregatorFactories factories, ObjectMapper\n \n @Override\n public void setNextReader(AtomicReaderContext reader) {\n- if (parentFilter == null) {\n- // The aggs are instantiated in reverse, first the most inner nested aggs and lastly the top level aggs\n- // So at the time a nested 'nested' aggs is parsed its closest parent nested aggs hasn't been constructed.\n- // So the trick to set at the last moment just before needed and we can use its child filter as the\n- // parent filter.\n- Filter parentFilterNotCached = findClosestNestedPath(parentAggregator);\n- if (parentFilterNotCached == null) {\n- parentFilterNotCached = NonNestedDocsFilter.INSTANCE;\n- }\n- parentFilter = SearchContext.current().fixedBitSetFilterCache().getFixedBitSetFilter(parentFilterNotCached);\n- }\n-\n+ this.reader = reader;\n try {\n- parentDocs = parentFilter.getDocIdSet(reader, null);\n // In ES if parent is deleted, then also the children are deleted. Therefore acceptedDocs can also null here.\n DocIdSet childDocIdSet = childFilter.getDocIdSet(reader, null);\n if (DocIdSets.isEmpty(childDocIdSet)) {\n@@ -101,6 +91,23 @@ public void collect(int parentDoc, long bucketOrd) throws IOException {\n if (parentDoc == 0 || childDocs == null) {\n return;\n }\n+ if (parentFilter == null) {\n+ // The aggs are instantiated in reverse, first the most inner nested aggs and lastly the top level aggs\n+ // So at the time a nested 'nested' aggs is parsed its closest parent nested aggs hasn't been constructed.\n+ // So the trick is to set at the last moment just before needed and we can use its child filter as the\n+ // parent filter.\n+\n+ // Additional NOTE: Before this logic was performed in the setNextReader(...) 
method, but the the assumption\n+ // that aggs instances are constructed in reverse doesn't hold when buckets are constructed lazily during\n+ // aggs execution\n+ Filter parentFilterNotCached = findClosestNestedPath(parentAggregator);\n+ if (parentFilterNotCached == null) {\n+ parentFilterNotCached = NonNestedDocsFilter.INSTANCE;\n+ }\n+ parentFilter = SearchContext.current().fixedBitSetFilterCache().getFixedBitSetFilter(parentFilterNotCached);\n+ parentDocs = parentFilter.getDocIdSet(reader, null);\n+ }\n+\n int prevParentDoc = parentDocs.prevSetBit(parentDoc - 1);\n int childDocId;\n if (childDocs.docID() > prevParentDoc) {", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/nested/NestedAggregator.java", "status": "modified" }, { "diff": "@@ -21,8 +21,10 @@\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchPhaseExecutionException;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.search.aggregations.Aggregator.SubAggCollectionMode;\n+import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.bucket.nested.Nested;\n import org.elasticsearch.search.aggregations.bucket.terms.LongTerms;\n@@ -39,11 +41,13 @@\n import java.util.ArrayList;\n import java.util.List;\n \n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_REPLICAS;\n+import static org.elasticsearch.cluster.metadata.IndexMetaData.SETTING_NUMBER_OF_SHARDS;\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.FilterBuilders.termFilter;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.hamcrest.Matchers.*;\n import static org.hamcrest.core.IsNull.notNullValue;\n \n@@ -347,4 +351,102 @@ public void nestedOnObjectField() throws Exception {\n assertThat(e.getMessage(), containsString(\"[nested] nested path [incorrect] is not nested\"));\n }\n }\n+\n+ @Test\n+ // Test based on: https://github.com/elasticsearch/elasticsearch/issues/9280\n+ public void testParentFilterResolvedCorrectly() throws Exception {\n+ XContentBuilder mapping = jsonBuilder().startObject().startObject(\"provider\").startObject(\"properties\")\n+ .startObject(\"comments\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"cid\").field(\"type\", \"long\").endObject()\n+ .startObject(\"identifier\").field(\"type\", \"string\").field(\"index\", \"not_analyzed\").endObject()\n+ .startObject(\"tags\")\n+ .field(\"type\", \"nested\")\n+ .startObject(\"properties\")\n+ .startObject(\"tid\").field(\"type\", \"long\").endObject()\n+ .startObject(\"name\").field(\"type\", \"string\").field(\"index\", \"not_analyzed\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .startObject(\"dates\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ 
.startObject(\"day\").field(\"type\", \"date\").field(\"format\", \"dateOptionalTime\").endObject()\n+ .startObject(\"month\")\n+ .field(\"type\", \"object\")\n+ .startObject(\"properties\")\n+ .startObject(\"end\").field(\"type\", \"date\").field(\"format\", \"dateOptionalTime\").endObject()\n+ .startObject(\"start\").field(\"type\", \"date\").field(\"format\", \"dateOptionalTime\").endObject()\n+ .startObject(\"label\").field(\"type\", \"string\").field(\"index\", \"not_analyzed\").endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject().endObject();\n+ assertAcked(prepareCreate(\"idx2\")\n+ .setSettings(ImmutableSettings.builder().put(SETTING_NUMBER_OF_SHARDS, 1).put(SETTING_NUMBER_OF_REPLICAS, 0))\n+ .addMapping(\"provider\", mapping));\n+\n+ List<IndexRequestBuilder> indexRequests = new ArrayList<>(2);\n+ indexRequests.add(client().prepareIndex(\"idx2\", \"provider\", \"1\").setSource(\"{\\\"dates\\\": {\\\"month\\\": {\\\"label\\\": \\\"2014-11\\\", \\\"end\\\": \\\"2014-11-30\\\", \\\"start\\\": \\\"2014-11-01\\\"}, \\\"day\\\": \\\"2014-11-30\\\"}, \\\"comments\\\": [{\\\"cid\\\": 3,\\\"identifier\\\": \\\"29111\\\"}, {\\\"cid\\\": 4,\\\"tags\\\": [{\\\"tid\\\" :44,\\\"name\\\": \\\"Roles\\\"}], \\\"identifier\\\": \\\"29101\\\"}]}\"));\n+ indexRequests.add(client().prepareIndex(\"idx2\", \"provider\", \"2\").setSource(\"{\\\"dates\\\": {\\\"month\\\": {\\\"label\\\": \\\"2014-12\\\", \\\"end\\\": \\\"2014-12-31\\\", \\\"start\\\": \\\"2014-12-01\\\"}, \\\"day\\\": \\\"2014-12-03\\\"}, \\\"comments\\\": [{\\\"cid\\\": 1, \\\"identifier\\\": \\\"29111\\\"}, {\\\"cid\\\": 2,\\\"tags\\\": [{\\\"tid\\\" : 22, \\\"name\\\": \\\"DataChannels\\\"}], \\\"identifier\\\": \\\"29101\\\"}]}\"));\n+ indexRandom(true, indexRequests);\n+\n+ SearchResponse response = client().prepareSearch(\"idx2\").setTypes(\"provider\")\n+ .addAggregation(\n+ terms(\"startDate\").field(\"dates.month.start\").subAggregation(\n+ terms(\"endDate\").field(\"dates.month.end\").subAggregation(\n+ terms(\"period\").field(\"dates.month.label\").subAggregation(\n+ nested(\"ctxt_idfier_nested\").path(\"comments\").subAggregation(\n+ filter(\"comment_filter\").filter(termFilter(\"comments.identifier\", \"29111\")).subAggregation(\n+ nested(\"nested_tags\").path(\"comments.tags\").subAggregation(\n+ terms(\"tag\").field(\"comments.tags.name\")\n+ )\n+ )\n+ )\n+ )\n+ )\n+ )\n+ ).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 2);\n+\n+ Terms startDate = response.getAggregations().get(\"startDate\");\n+ assertThat(startDate.getBuckets().size(), equalTo(2));\n+ Terms.Bucket bucket = startDate.getBucketByKey(\"1414800000000\"); // 2014-11-01T00:00:00.000Z\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ Terms endDate = bucket.getAggregations().get(\"endDate\");\n+ bucket = endDate.getBucketByKey(\"1417305600000\"); // 2014-11-30T00:00:00.000Z\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ Terms period = bucket.getAggregations().get(\"period\");\n+ bucket = period.getBucketByKey(\"2014-11\");\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ Nested comments = bucket.getAggregations().get(\"ctxt_idfier_nested\");\n+ assertThat(comments.getDocCount(), equalTo(2l));\n+ Filter filter = comments.getAggregations().get(\"comment_filter\");\n+ assertThat(filter.getDocCount(), equalTo(1l));\n+ Nested nestedTags = filter.getAggregations().get(\"nested_tags\");\n+ assertThat(nestedTags.getDocCount(), equalTo(0l)); // This must be 0\n+ Terms tags = 
nestedTags.getAggregations().get(\"tag\");\n+ assertThat(tags.getBuckets().size(), equalTo(0)); // and this must be empty\n+\n+ bucket = startDate.getBucketByKey(\"1417392000000\"); // 2014-12-01T00:00:00.000Z\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ endDate = bucket.getAggregations().get(\"endDate\");\n+ bucket = endDate.getBucketByKey(\"1419984000000\"); // 2014-12-31T00:00:00.000Z\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ period = bucket.getAggregations().get(\"period\");\n+ bucket = period.getBucketByKey(\"2014-12\");\n+ assertThat(bucket.getDocCount(), equalTo(1l));\n+ comments = bucket.getAggregations().get(\"ctxt_idfier_nested\");\n+ assertThat(comments.getDocCount(), equalTo(2l));\n+ filter = comments.getAggregations().get(\"comment_filter\");\n+ assertThat(filter.getDocCount(), equalTo(1l));\n+ nestedTags = filter.getAggregations().get(\"nested_tags\");\n+ assertThat(nestedTags.getDocCount(), equalTo(0l)); // This must be 0\n+ tags = nestedTags.getAggregations().get(\"tag\");\n+ assertThat(tags.getBuckets().size(), equalTo(0)); // and this must be empty\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/NestedTests.java", "status": "modified" } ] }
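A condensed, hypothetical sketch of the pattern applied in the PR above: remember the segment in setNextReader(...) but defer resolving the parent filter to the first collect(...) call, when the whole aggregator tree (including parents of lazily created buckets) is guaranteed to exist. The interfaces and class below are stand-ins for illustration, not the real Lucene/Elasticsearch types.

```java
// Stand-in types; the real code uses Lucene readers and Elasticsearch filters.
abstract class LazyParentFilterAggregator {
    interface Filter { boolean matches(int doc); }
    interface LeafContext { Filter toBitsetFilter(Filter raw); }

    private Filter parentFilter;   // resolved lazily on first collect
    private LeafContext context;   // remembered per segment

    void setNextReader(LeafContext context) {
        // Only remember the segment here; the closest parent nested aggregator
        // may not have been constructed yet when buckets are created lazily.
        this.context = context;
    }

    void collect(int doc, long bucket) {
        if (parentFilter == null) {
            // Safe now: by collection time the aggregator tree is complete,
            // so the closest parent's filter can be resolved correctly.
            parentFilter = context.toBitsetFilter(findClosestParentFilter());
        }
        if (parentFilter.matches(doc)) {
            collectBucket(doc, bucket);
        }
    }

    abstract Filter findClosestParentFilter();
    abstract void collectBucket(int doc, long bucket);
}
```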
{ "body": "As per documentation (0) all APIs that support multi-index also support expand_wildcards parameter, that is not the case with at least /_cluster/state:\n\n```\ncurl -X PUT localhost:9200/test_index\n{\"acknowledged\":true}\ncurl -X POST localhost:9200/test_index/_close\n{\"acknowledged\":true}\ncurl localhost:9200/*/_settings?expand_wildcards=closed\n{\"test_index\":{\"settings\":{\"index\":{\"uuid\":\"CkMTVYfhS12TmHrWrNSkfQ\",\"number_of_replicas\":\"1\",\"number_of_shards\":\"5\",\"version\":{\"created\":\"2000099\"}}}}}\ncurl localhost:9200/_cluster/state/metadata/*?expand_wildcards=closed\n{\"cluster_name\":\"es_client_test\",\"metadata\":{\"templates\":{},\"indices\":{}}}\n```\n\n0 - http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/multi-index.html\n", "comments": [ { "body": "Since I just looked at IndicesOptions in another issue and start to get to know the rest apis a little bit I can look into this.\n\nSo far I think the ClusterStateRequest has no way to set IndicesOptions other than the default one (here LENIENT_EXPAND_OPEN) which does not expand wildcards. I could add overwritting these parameters with the request parameters like in e.g RestIndicesStatsAction, but for this I would need to introduce setter and internal field for indicesOptions in ClusterStateRequest. \n\nMost of the index-related actions inherit this from BroadcastOperationRequest, but since the ClusterStateRequest extends MasterNodeReadOperationRequest, theres no such setter / internal field here. Wondering where to put it. If the \"expand_wildcard\" parameter is supposed to work for all Cluster-related commands, then I think it should be somewhere higher up in the hierarchy. \n", "created_at": "2015-01-12T17:39:10Z" }, { "body": "I just saw that ClusterSearchShardsRequest is doing the same thing I would do in ClusterStateRequest:\n\n```\nprivate IndicesOptions indicesOptions = IndicesOptions.lenientExpandOpen();\npublic ClusterSearchShardsRequest indicesOptions(IndicesOptions indicesOptions) \n```\n", "created_at": "2015-01-12T18:21:56Z" }, { "body": "Hey @cbuescher what you proposed sounds good to me, let's add the ability to set `IndicesOptions` to `ClusterStateRequest`.\n", "created_at": "2015-01-13T08:57:15Z" }, { "body": "From reading the documentation above on multi-index parameters I guess that also \"ignore_unavailable\" and \"allow_no_indices\" should work here. Do I just add them to the cluster.state.json and write some test for them or does that make no sense here?\n", "created_at": "2015-01-14T09:16:13Z" }, { "body": "@cbuescher yes you got it right, those params need to be added to the spec and some tests for them would be appreciated ;)\n", "created_at": "2015-01-14T09:25:02Z" } ], "number": 5229, "title": " expand_wildcards not accepted by /_cluster/state as advertised" }
{ "body": "Added missing support for the multi-index query parameters 'ignore_unavailable',\n'allow_no_indices', and 'expand_wildcards' to '_cluster/state' API. These\nparameters are supposed to be supported for APIs that refer to index parameter\nsupport execution across multiple indices.\n\nCloses #5229\n", "number": 9295, "review_comments": [ { "body": "we need to add a version check here on the 1.x branch, to make sure that we maintain bw compatibility:\n\n```\nif (in.getVersion().onOrAfter(Version.1_5_0)) {\n indicesOptions = IndicesOptions.readIndicesOptions(in);\n}\n```\n", "created_at": "2015-01-14T11:09:04Z" }, { "body": "we need to add a version check here on the 1.x branch, to make sure that we maintain bw compatibility:\n\n```\nif (out.getVersion().onOrAfter(Version.1_5_0)) {\n indicesOptions = IndicesOptions.writeIndicesOptions(out);\n}\n```\n", "created_at": "2015-01-14T11:09:21Z" }, { "body": "I think this assert doesn't make much sense given that the last call returns an error. `catch: missing` seems enough.\n", "created_at": "2015-01-14T11:10:53Z" }, { "body": "same as below, I would remove this assertion\n", "created_at": "2015-01-14T11:11:59Z" }, { "body": "I was wondering if it makes sense to merge this file with the previous one, since they both test indices options?\n", "created_at": "2015-01-14T11:12:47Z" }, { "body": "for consistency: assertThat(clusterStateResponse.getState().metaData().indices().isEmpty(), is(true));\n", "created_at": "2015-01-14T14:51:24Z" }, { "body": "same as above\n", "created_at": "2015-01-14T14:51:29Z" }, { "body": "I think you want to randomize version of out and in to make sure that we keep bw comp. In master that makes only little sense now but it will be good in the future. I think we can ranzomize versions starting from the minCompatibilityVersion on. That should make the test the same in both master and 1.x\n", "created_at": "2015-01-19T11:17:05Z" }, { "body": "this is the wrong import ;)\n", "created_at": "2015-01-19T11:18:09Z" }, { "body": "why did you use scop SUITE here? and why no client nodes?\n", "created_at": "2015-01-21T11:52:24Z" }, { "body": "Think I got that from NodesStatsBasicBackwardsCompat test, no reason besides that. Both settings seem to be the default in ElasticsearchBackwardsCompatIntegrationTest anyway, so I could just leave them. \nShould I change that, and if yes, whats the prefered setting here?\n", "created_at": "2015-01-21T12:01:14Z" }, { "body": "remove the whole annotation, it inherits from ElasticsearchBackwardsCompatIntegrationTest that has already scope SUITE and numClientNodes set to 0, defaults should be good, no need to override them\n", "created_at": "2015-01-21T14:11:02Z" }, { "body": "this needs to be wrapped into a try finally block, so no explicit `close` is needed. \n", "created_at": "2015-01-21T14:11:57Z" }, { "body": "given the randomized behaviour of client() I am wondering if it makes sense to go and test the call from all the potential nodes. That happens already eventually with a single call that uses client() and no loop, no nodes info call either. Unless I am missing something.\n", "created_at": "2015-01-21T14:12:52Z" }, { "body": "I didn't know about random client(), just saw this. However, if I do this, how can I be sure that I test serialization of request between both potential versions? As far as I understand it so far you can be lucky an just always hit the nodes with the newer versions, or am I mistaken? 
If by \"eventually\" you mean after running this test lots of times, that will be okay I guess. Are these tests run multiple times on the test machines?\n", "created_at": "2015-01-21T14:58:37Z" }, { "body": "nitpick, this method could be static\n", "created_at": "2015-01-22T07:28:46Z" }, { "body": "sorry my previous comment wasn't clear. I meant this:\n\n```\ntry (TransportClient tc = new TransportClient(settings)) {\n tc.addTransportAddress(n.getNode().address());\n tc.admin().cluster().prepareState().clear().execute().actionGet();\n}\n```\n", "created_at": "2015-01-22T07:30:37Z" }, { "body": "you got it, by eventually I mean after running the test some times (I don't think it's going to be a lot of times given that we have only a few nodes in this bw setup). These tests run consitnuosly on our CI. I think we are good if we simplify the test a bit as I said above. Makes sense?\n", "created_at": "2015-01-22T07:33:09Z" }, { "body": "maybe also look at what it returns and run some assertions on the response?\n", "created_at": "2015-01-22T07:33:57Z" }, { "body": "I tried to make some reasonable assertions on the response but found that with the current test setup there is not much interesting to assert. Only made sure that the name and state are set in the response now.\n", "created_at": "2015-01-22T11:28:15Z" } ], "title": "Add support for multi-index query parameters for `_cluster/state`" }
{ "commits": [ { "message": "Rest: Adding support of multi-index query parameters for _cluster/state\n\nAdded missing support for the multi-index query parameters 'ignore_unavailable',\n'allow_no_indices', and 'expand_wildcards' to '_cluster/state' API. These\nparameters are supposed to be supported for APIs that refer to index parameter\nsupport execution across multiple indices.\n\nCloses #5229" }, { "message": "Changes according to review comments" }, { "message": "adding tests for different settings of indices options on ClusterStateRequest" }, { "message": "Added tests for ClusterStateRequest serialization" }, { "message": "Adding serialization test for ClusterStateRequest" }, { "message": "adding randomization to test" }, { "message": "adding ClusterStateRequest backwards compatibility test" }, { "message": "use default settings in ClusterStateBackwardsCompat test\nimproving on backwards test" }, { "message": "corrected comments" }, { "message": "corrected try-with-resource statement, added assertions" }, { "message": "adding to bwcompat test" } ], "files": [ { "diff": "@@ -32,6 +32,18 @@\n \"flat_settings\": {\n \"type\": \"boolean\",\n \"description\": \"Return settings in flat format (default: false)\"\n+ },\n+ \"ignore_unavailable\": {\n+ \"type\" : \"boolean\",\n+ \"description\" : \"Whether specified concrete indices should be ignored when unavailable (missing or closed)\"\n+ },\n+ \"allow_no_indices\": {\n+ \"type\" : \"boolean\",\n+ \"description\" : \"Whether to ignore if a wildcard indices expression resolves into no concrete indices. (This includes `_all` string or when no indices have been specified)\"\n+ },\n+ \"expand_wildcards\":{\n+ \"type\":\"list\",\n+ \"description\":\"Whether wildcard expressions should get expanded to open or closed indices (default: open)\"\n }\n }\n },", "filename": "rest-api-spec/api/cluster.state.json", "status": "modified" }, { "diff": "@@ -0,0 +1,92 @@\n+setup:\n+\n+ - do:\n+ indices.create:\n+ index: test_close_index\n+ body:\n+ settings:\n+ number_of_shards: \"1\"\n+ number_of_replicas: \"0\"\n+\n+ - do:\n+ indices.create:\n+ index: test_open_index\n+ body:\n+ settings:\n+ number_of_shards: \"1\"\n+ number_of_replicas: \"0\"\n+\n+ - do:\n+ cluster.health:\n+ wait_for_status: green\n+\n+# close one index, keep other open for later test\n+\n+ - do:\n+ indices.close:\n+ index: test_close_index\n+\n+---\n+\"Test expand_wildcards parameter on closed, open indices and both\":\n+\n+ - do:\n+ cluster.state:\n+ metric: [ metadata ]\n+ index: test*\n+ expand_wildcards: [ closed ]\n+\n+ - is_false: metadata.indices.test_open_index\n+ - match: {metadata.indices.test_close_index.state: \"close\"}\n+\n+ - do:\n+ cluster.state:\n+ metric: [ metadata ]\n+ index: test*\n+ expand_wildcards: [ open ]\n+\n+ - match: {metadata.indices.test_open_index.state: \"open\"}\n+ - is_false: metadata.indices.test_close_index\n+\n+ - do:\n+ cluster.state:\n+ metric: [ metadata ]\n+ index: test*\n+ expand_wildcards: [ open,closed ]\n+\n+ - match: {metadata.indices.test_open_index.state: \"open\"}\n+ - match: {metadata.indices.test_close_index.state: \"close\"}\n+\n+---\n+\"Test ignore_unavailable parameter\":\n+\n+ - do:\n+ cluster.state:\n+ metric: [ metadata ]\n+ index: foobla\n+ ignore_unavailable: true\n+\n+ - match: {metadata.indices: {}}\n+\n+ - do:\n+ catch: missing\n+ cluster.state:\n+ metric: [ metadata ]\n+ index: foobla\n+ ignore_unavailable: false\n+\n+---\n+\"Test allow_no_indices parameter\":\n+\n+ - do:\n+ cluster.state:\n+ metric: [ metadata ]\n+ index: 
not_there*\n+\n+ - match: {metadata.indices: {}}\n+ \n+ - do:\n+ catch: missing\n+ cluster.state:\n+ metric: [ metadata ]\n+ index: not_there*\n+ allow_no_indices: false", "filename": "rest-api-spec/test/cluster.state/30_expand_wildcards.yaml", "status": "added" }, { "diff": "@@ -40,6 +40,7 @@ public class ClusterStateRequest extends MasterNodeReadOperationRequest<ClusterS\n private boolean metaData = true;\n private boolean blocks = true;\n private String[] indices = Strings.EMPTY_ARRAY;\n+ private IndicesOptions indicesOptions = IndicesOptions.lenientExpandOpen();\n \n public ClusterStateRequest() {\n }\n@@ -57,7 +58,7 @@ public ClusterStateRequest all() {\n indices = Strings.EMPTY_ARRAY;\n return this;\n }\n- \n+\n public ClusterStateRequest clear() {\n routingTable = false;\n nodes = false;\n@@ -116,7 +117,12 @@ public ClusterStateRequest indices(String... indices) {\n \n @Override\n public IndicesOptions indicesOptions() {\n- return IndicesOptions.lenientExpandOpen();\n+ return this.indicesOptions;\n+ }\n+\n+ public final ClusterStateRequest indicesOptions(IndicesOptions indicesOptions) {\n+ this.indicesOptions = indicesOptions;\n+ return this;\n }\n \n @Override\n@@ -127,6 +133,7 @@ public void readFrom(StreamInput in) throws IOException {\n metaData = in.readBoolean();\n blocks = in.readBoolean();\n indices = in.readStringArray();\n+ indicesOptions = IndicesOptions.readIndicesOptions(in);\n }\n \n @Override\n@@ -137,5 +144,6 @@ public void writeTo(StreamOutput out) throws IOException {\n out.writeBoolean(metaData);\n out.writeBoolean(blocks);\n out.writeStringArray(indices);\n+ indicesOptions.writeIndicesOptions(out);\n }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequest.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.action.admin.cluster.state;\n \n import org.elasticsearch.action.ActionListener;\n+import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.action.support.master.MasterNodeReadOperationRequestBuilder;\n import org.elasticsearch.client.ClusterAdminClient;\n \n@@ -89,6 +90,11 @@ public ClusterStateRequestBuilder setIndices(String... 
indices) {\n return this;\n }\n \n+ public ClusterStateRequestBuilder setIndicesOptions(IndicesOptions indicesOptions) {\n+ request.indicesOptions(indicesOptions);\n+ return this;\n+ }\n+\n @Override\n protected void doExecute(ActionListener<ClusterStateResponse> listener) {\n client.state(request, listener);", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestBuilder.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n \n import org.elasticsearch.action.admin.cluster.state.ClusterStateRequest;\n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.client.Requests;\n import org.elasticsearch.cluster.ClusterState;\n@@ -57,6 +58,7 @@ public RestClusterStateAction(Settings settings, RestController controller, Clie\n public void handleRequest(final RestRequest request, final RestChannel channel, final Client client) {\n final ClusterStateRequest clusterStateRequest = Requests.clusterStateRequest();\n clusterStateRequest.listenerThreaded(false);\n+ clusterStateRequest.indicesOptions(IndicesOptions.fromRequest(request, clusterStateRequest.indicesOptions()));\n clusterStateRequest.local(request.paramAsBoolean(\"local\", clusterStateRequest.local()));\n clusterStateRequest.masterNodeTimeout(request.paramAsTime(\"master_timeout\", clusterStateRequest.masterNodeTimeout()));\n ", "filename": "src/main/java/org/elasticsearch/rest/action/admin/cluster/state/RestClusterStateAction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,76 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action.admin.cluster.state;\n+\n+import org.elasticsearch.Version;\n+import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.common.io.stream.BytesStreamInput;\n+import org.elasticsearch.common.io.stream.BytesStreamOutput;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import static org.hamcrest.CoreMatchers.equalTo;\n+\n+/**\n+ * Unit tests for the {@link ClusterStateRequest}.\n+ */\n+public class ClusterStateRequestTest extends ElasticsearchTestCase {\n+\n+ @Test\n+ public void testSerialization() throws Exception {\n+ int iterations = randomIntBetween(5, 20);\n+ for (int i = 0; i < iterations; i++) {\n+\n+ IndicesOptions indicesOptions = IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean());\n+ ClusterStateRequest clusterStateRequest = new ClusterStateRequest().routingTable(randomBoolean()).metaData(randomBoolean())\n+ .nodes(randomBoolean()).blocks(randomBoolean()).indices(\"testindex\", \"testindex2\").indicesOptions(indicesOptions);\n+\n+ Version testVersion = randomVersionBetween(Version.CURRENT.minimumCompatibilityVersion(), Version.CURRENT);\n+ BytesStreamOutput output = new BytesStreamOutput();\n+ output.setVersion(testVersion);\n+ clusterStateRequest.writeTo(output);\n+\n+ BytesStreamInput bytesStreamInput = new BytesStreamInput(output.bytes());\n+ bytesStreamInput.setVersion(testVersion);\n+ ClusterStateRequest deserializedCSRequest = new ClusterStateRequest();\n+ deserializedCSRequest.readFrom(bytesStreamInput);\n+\n+ assertThat(deserializedCSRequest.routingTable(), equalTo(clusterStateRequest.routingTable()));\n+ assertThat(deserializedCSRequest.metaData(), equalTo(clusterStateRequest.metaData()));\n+ assertThat(deserializedCSRequest.nodes(), equalTo(clusterStateRequest.nodes()));\n+ assertThat(deserializedCSRequest.blocks(), equalTo(clusterStateRequest.blocks()));\n+ assertThat(deserializedCSRequest.indices(), equalTo(clusterStateRequest.indices()));\n+\n+ if (testVersion.onOrAfter(Version.V_1_5_0)) {\n+ assertOptionsMatch(deserializedCSRequest.indicesOptions(), clusterStateRequest.indicesOptions());\n+ } else {\n+ // versions before V_1_5_0 use IndicesOptions.lenientExpandOpen()\n+ assertOptionsMatch(deserializedCSRequest.indicesOptions(), IndicesOptions.lenientExpandOpen());\n+ }\n+ }\n+ }\n+\n+ private static void assertOptionsMatch(IndicesOptions in, IndicesOptions out) {\n+ assertThat(in.ignoreUnavailable(), equalTo(out.ignoreUnavailable()));\n+ assertThat(in.expandWildcardsClosed(), equalTo(out.expandWildcardsClosed()));\n+ assertThat(in.expandWildcardsOpen(), equalTo(out.expandWildcardsOpen()));\n+ assertThat(in.allowNoIndices(), equalTo(out.allowNoIndices()));\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestTest.java", "status": "added" }, { "diff": "@@ -0,0 +1,55 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.bwcompat;\n+\n+import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n+import org.elasticsearch.action.admin.cluster.node.info.NodeInfo;\n+import org.elasticsearch.action.admin.cluster.node.info.NodesInfoResponse;\n+import org.elasticsearch.client.transport.TransportClient;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchBackwardsCompatIntegrationTest;\n+import org.junit.Test;\n+import static org.hamcrest.Matchers.*;\n+\n+public class ClusterStateBackwardsCompat extends ElasticsearchBackwardsCompatIntegrationTest {\n+\n+ @Test\n+ public void testClusterState() throws Exception {\n+ createIndex(\"test\");\n+\n+ NodesInfoResponse nodesInfo = client().admin().cluster().prepareNodesInfo().execute().actionGet();\n+ Settings settings = ImmutableSettings.settingsBuilder().put(\"client.transport.ignore_cluster_name\", true)\n+ .put(\"node.name\", \"transport_client_\" + getTestName()).build();\n+\n+ // connect to each node with a custom TransportClient, issue a ClusterStateRequest to test serialization\n+ for (NodeInfo n : nodesInfo.getNodes()) {\n+ try (TransportClient tc = new TransportClient(settings)) {\n+ tc.addTransportAddress(n.getNode().address());\n+ ClusterStateResponse response = tc.admin().cluster().prepareState().execute().actionGet();\n+\n+ assertThat(response.getState().status(), equalTo(ClusterState.ClusterStateStatus.UNKNOWN));\n+ assertNotNull(response.getClusterName());\n+ assertTrue(response.getState().getMetaData().hasIndex(\"test\"));\n+ }\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/bwcompat/ClusterStateBackwardsCompat.java", "status": "added" }, { "diff": "@@ -21,13 +21,16 @@\n \n import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;\n import org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse;\n+import org.elasticsearch.action.support.IndicesOptions;\n import org.elasticsearch.client.Client;\n+import org.elasticsearch.client.Requests;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.unit.ByteSizeValue;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n+import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.hamcrest.CollectionAssertions;\n import org.junit.Before;\n@@ -157,4 +160,51 @@ public void testLargeClusterStatePublishing() throws Exception {\n assertThat(mappingMetadata, equalTo(masterMappingMetaData));\n }\n }\n+\n+ @Test\n+ public void testIndicesOptions() throws Exception {\n+ 
ClusterStateResponse clusterStateResponse = client().admin().cluster().prepareState().clear().setMetaData(true).setIndices(\"f*\")\n+ .get();\n+ assertThat(clusterStateResponse.getState().metaData().indices().size(), is(2));\n+\n+ // close one index\n+ client().admin().indices().close(Requests.closeIndexRequest(\"fuu\")).get();\n+ clusterStateResponse = client().admin().cluster().prepareState().clear().setMetaData(true).setIndices(\"f*\").get();\n+ assertThat(clusterStateResponse.getState().metaData().indices().size(), is(1));\n+ assertThat(clusterStateResponse.getState().metaData().index(\"foo\").state(), equalTo(IndexMetaData.State.OPEN));\n+\n+ // expand_wildcards_closed should toggle return only closed index fuu\n+ IndicesOptions expandCloseOptions = IndicesOptions.fromOptions(false, true, false, true);\n+ clusterStateResponse = client().admin().cluster().prepareState().clear().setMetaData(true).setIndices(\"f*\")\n+ .setIndicesOptions(expandCloseOptions).get();\n+ assertThat(clusterStateResponse.getState().metaData().indices().size(), is(1));\n+ assertThat(clusterStateResponse.getState().metaData().index(\"fuu\").state(), equalTo(IndexMetaData.State.CLOSE));\n+\n+ // ignore_unavailable set to true should not raise exception on fzzbzz\n+ IndicesOptions ignoreUnavailabe = IndicesOptions.fromOptions(true, true, true, false);\n+ clusterStateResponse = client().admin().cluster().prepareState().clear().setMetaData(true).setIndices(\"fzzbzz\")\n+ .setIndicesOptions(ignoreUnavailabe).get();\n+ assertThat(clusterStateResponse.getState().metaData().indices().isEmpty(), is(true));\n+\n+ // empty wildcard expansion result should work when allowNoIndices is\n+ // turned on\n+ IndicesOptions allowNoIndices = IndicesOptions.fromOptions(false, true, true, false);\n+ clusterStateResponse = client().admin().cluster().prepareState().clear().setMetaData(true).setIndices(\"a*\")\n+ .setIndicesOptions(allowNoIndices).get();\n+ assertThat(clusterStateResponse.getState().metaData().indices().isEmpty(), is(true));\n+ }\n+\n+ @Test(expected=IndexMissingException.class)\n+ public void testIndicesOptionsOnAllowNoIndicesFalse() throws Exception {\n+ // empty wildcard expansion throws exception when allowNoIndices is turned off\n+ IndicesOptions allowNoIndices = IndicesOptions.fromOptions(false, false, true, false);\n+ client().admin().cluster().prepareState().clear().setMetaData(true).setIndices(\"a*\").setIndicesOptions(allowNoIndices).get();\n+ }\n+\n+ @Test(expected=IndexMissingException.class)\n+ public void testIndicesIgnoreUnavailableFalse() throws Exception {\n+ // ignore_unavailable set to false throws exception when allowNoIndices is turned off\n+ IndicesOptions allowNoIndices = IndicesOptions.fromOptions(false, true, true, false);\n+ client().admin().cluster().prepareState().clear().setMetaData(true).setIndices(\"fzzbzz\").setIndicesOptions(allowNoIndices).get();\n+ }\n }", "filename": "src/test/java/org/elasticsearch/cluster/SimpleClusterStateTests.java", "status": "modified" } ] }
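For reference, a usage sketch of the indices options support added in the PR above, assembled from the calls that appear in its test diff (IndicesOptions.fromOptions(...), setMetaData(true), setIndicesOptions(...)). The index pattern and the wrapper class are placeholders; the sketch assumes an already connected 1.x Java `Client`.

```java
import org.elasticsearch.action.admin.cluster.state.ClusterStateResponse;
import org.elasticsearch.action.support.IndicesOptions;
import org.elasticsearch.client.Client;

public class ClusterStateIndicesOptionsExample {
    // Fetches cluster state metadata for closed indices only; "logs-*" is a
    // placeholder pattern and the surrounding class exists just for the example.
    static void printClosedIndexCount(Client client) {
        IndicesOptions closedOnly = IndicesOptions.fromOptions(
                false, // ignoreUnavailable: missing concrete indices are an error
                true,  // allowNoIndices: an empty wildcard expansion is fine
                false, // do not expand wildcards to open indices
                true); // expand wildcards to closed indices

        ClusterStateResponse response = client.admin().cluster()
                .prepareState()
                .clear()
                .setMetaData(true)
                .setIndices("logs-*")
                .setIndicesOptions(closedOnly) // builder method added in this PR
                .get();

        System.out.println("closed indices matching logs-*: "
                + response.getState().metaData().indices().size());
    }
}
```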
{ "body": "There appears to be a bug in the `children` aggregation, which misses one matching document.\n\nThere is a workaround for the bug, by adding an outer aggregation at the parent level. However, this clutters the query. Worse, even with the workaround the bug showed up again when I did multiple Children Aggregations (involving one parent type and two distinct children types).\n\n```\nDELETE _all \n\n# Product Catalogue Index\n\nDELETE /prodcatalog\n\nPUT /prodcatalog\n\n# Use parent-child mapping to capture a Master Product and its Variant SKUs.\n\nPUT /prodcatalog/masterprod/_mapping\n{ \n \"masterprod\":{ \n \"properties\":{ \n \"brand\":{ \n \"type\":\"string\",\n \"index\":\"not_analyzed\"\n },\n \"name\":{ \n \"type\":\"string\"\n }\n }\n }\n}\n\nPUT /prodcatalog/variantsku/_mapping\n{ \n \"variantsku\":{ \n \"_parent\":{ \n \"type\":\"masterprod\"\n },\n \"properties\":{ \n \"color\":{ \n \"type\":\"string\",\n \"index\":\"not_analyzed\"\n },\n \"size\":{ \n \"type\":\"string\",\n \"index\":\"not_analyzed\"\n }\n }\n }\n}\n\nGET /prodcatalog/_mapping\n\n\n# Index 2 parents and several children.\n\n\nPUT /prodcatalog/masterprod/1\n{\n \"brand\": \"Levis\",\n \"name\": \"Style 501\",\n \"material\": \"Denim\"\n}\n\nPUT /prodcatalog/variantsku/10001?parent=1\n{\n \"color\" : \"blue\",\n \"size\" : \"32\"\n}\n\n\nPUT /prodcatalog/variantsku/10002?parent=1\n{\n \"color\" : \"blue\",\n \"size\" : \"34\"\n}\n\n\nPUT /prodcatalog/variantsku/10003?parent=1\n{\n \"color\" : \"blue\",\n \"size\" : \"36\"\n}\n\n\nPUT /prodcatalog/variantsku/10004?parent=1\n{\n \"color\" : \"black\",\n \"size\" : \"38\"\n}\n\n\nPUT /prodcatalog/variantsku/10005?parent=1\n{\n \"color\" : \"black\",\n \"size\" : \"40\"\n}\n\n\nPUT /prodcatalog/variantsku/10006?parent=1\n{\n \"color\" : \"gray\",\n \"size\" : \"36\"\n}\n\n\nPUT /prodcatalog/masterprod/2\n{\n \"brand\" : \"Wrangler\",\n \"name\" : \"Regular Cut\",\n \"material\" : \"Leather\"\n}\n\nPUT /prodcatalog/variantsku/20001?parent=2\n{\n \"color\" : \"blue\",\n \"size\" : \"32\"\n}\n\nPUT /prodcatalog/variantsku/20002?parent=2\n{\n \"color\" : \"blue\",\n \"size\" : \"34\"\n}\n\n\nPUT /prodcatalog/variantsku/20003?parent=2\n{\n \"color\" : \"black\",\n \"size\" : \"36\"\n}\n\n\nPUT /prodcatalog/variantsku/20004?parent=2\n{\n \"color\" : \"black\",\n \"size\" : \"38\"\n}\n\n\nPUT /prodcatalog/variantsku/20005?parent=2\n{\n \"color\" : \"black\",\n \"size\" : \"40\"\n}\n\n\nPUT /prodcatalog/variantsku/20006?parent=2\n{\n \"color\" : \"orange\",\n \"size\" : \"36\"\n}\n\n\nPUT /prodcatalog/variantsku/20007?parent=2\n{\n \"color\" : \"green\",\n \"size\" : \"44\"\n}\n\n\n\n\n# The query below should match the 1 masterprod doc with an orange variantsku.\n# The aggregations should return 7 refinements for the Children.\n# BUG: One color Children Aggregation is missing !!\n\nPOST /prodcatalog/masterprod/_search\n{ \n \"query\":{ \n \"has_child\":{ \n \"type\":\"variantsku\",\n \"score_mode\":\"none\",\n \"query\":{ \n \"term\":{ \n \"color\":\"orange\"\n }\n }\n }\n },\n \"aggs\":{ \n \"my-refinements\":{ \n \"children\":{ \n \"type\":\"variantsku\"\n },\n \"aggs\":{ \n \"my-sizes\":{ \n \"terms\":{ \n \"field\":\"variantsku.size\"\n }\n },\n \"my-colors\":{ \n \"terms\":{ \n \"field\":\"variantsku.color\"\n }\n } \n }\n }\n }\n}\n\n\n# Partial results:\n# Note that the green variantsku is missing,\n# and we only have 6 results, though expecting 7.\n\n \"my-colors\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n 
\"key\": \"black\",\n \"doc_count\": 3\n },\n {\n \"key\": \"blue\",\n \"doc_count\": 2\n },\n {\n \"key\": \"orange\",\n \"doc_count\": 1\n }\n ]\n }\n\n\n\n# Here is my Workaround.\n# By adding an aggregation at the parent level (brand),\n# we no longer lose 1 variantsku document.\n\nPOST /prodcatalog/masterprod/_search\n{\n \"query\":{ \n \"has_child\":{ \n \"type\":\"variantsku\",\n \"score_mode\":\"none\",\n \"query\":{ \n \"term\":{ \n \"color\":\"orange\"\n }\n }\n }\n }, \n \"aggs\": {\n \"my-brands\": {\n \"terms\": {\n \"field\": \"brand\"\n },\n \"aggs\": {\n \"my-refinements\": {\n \"children\": {\n \"type\" : \"variantsku\" \n },\n \"aggs\": {\n \"my-colors\": {\n \"terms\": {\n \"field\": \"variantsku.color\"\n }\n },\n \"my-sizes\": {\n \"terms\": {\n \"field\": \"variantsku.size\"\n }\n }\n } \n }\n }\n }\n }\n}\n\n# Partial Results:\n\n \"my-colors\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": \"black\",\n \"doc_count\": 3\n },\n {\n \"key\": \"blue\",\n \"doc_count\": 2\n },\n {\n \"key\": \"green\",\n \"doc_count\": 1\n },\n {\n \"key\": \"orange\",\n \"doc_count\": 1\n }\n ]\n\n\n# Since we are not interested in a breakdown by brand,\n# we can use something generic, like _type\n# which should only return 1 bucket at the parent level.\n# Again, we no longer lose 1 variantsku document.\n\nPOST /prodcatalog/masterprod/_search\n{\n \"query\": {\n \"has_child\": {\n \"type\": \"variantsku\",\n \"score_mode\": \"none\",\n \"query\": {\n \"term\": {\n \"color\": \"orange\"\n }\n }\n }\n },\n \"aggs\": {\n \"my-types\": {\n \"terms\": {\n \"field\": \"_type\"\n },\n \"aggs\": {\n \"my-refinements\": {\n \"children\": {\n \"type\": \"variantsku\"\n },\n \"aggs\": {\n \"my-colors\": {\n \"terms\": {\n \"field\": \"variantsku.color\"\n }\n },\n \"my-sizes\": {\n \"terms\": {\n \"field\": \"variantsku.size\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n", "comments": [ { "body": "This is a bug is caused by the children agg, but has only effect if the `global_ordinals_low_cardinality` execution hint is enabled and this is what happens in the first search request.\n\nI'll open a PR to fix this. A work around in the meantime would be for all terms agg that are wrapped by a children agg to use another execution hint (Either `global_ordinals` or `global_ordinals_hash`). \n", "created_at": "2015-01-14T08:28:49Z" } ], "number": 9271, "title": "`children` aggregation missing documents" }
{ "body": "This PR also includes changes for the GlobalOrdinalsStringTermsAggregator.java file, but that is just indentation fix. \n\nPR for #9271\n", "number": 9291, "review_comments": [], "title": "Post collection the children agg should also invoke that phase on its wrapped child aggs." }
{ "commits": [ { "message": "Fix identation" }, { "message": "Children aggregation: The children aggs' post collection translates the buckets on the parent level to the child level and because of that it needs to invoke the post collection of its nested aggs.\n\nCloses #9271" } ], "files": [ { "diff": "@@ -180,6 +180,10 @@ protected void doPostCollection() throws IOException {\n }\n }\n }\n+ // Need to invoke post collection on all aggs that the children agg is wrapping,\n+ // otherwise any post work that is required, because we started to collect buckets\n+ // in the method will not be performed.\n+ collectableSubAggregators.postCollection();\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/children/ParentToChildrenAggregator.java", "status": "modified" }, { "diff": "@@ -188,18 +188,20 @@ public InternalAggregation buildAggregation(long owningBucketOrdinal) {\n }\n //replay any deferred collections\n runDeferredCollections(survivingBucketOrds);\n- \n+\n //Now build the aggs\n for (int i = 0; i < list.length; i++) {\n- Bucket bucket = list[i];\n- bucket.aggregations = bucket.docCount == 0 ? bucketEmptyAggregations() : bucketAggregations(bucket.bucketOrd);\n- bucket.docCountError = 0;\n+ Bucket bucket = list[i];\n+ bucket.aggregations = bucket.docCount == 0 ? bucketEmptyAggregations() : bucketAggregations(bucket.bucketOrd);\n+ bucket.docCountError = 0;\n }\n \n return new StringTerms(name, order, bucketCountThresholds.getRequiredSize(), bucketCountThresholds.getShardSize(), bucketCountThresholds.getMinDocCount(), Arrays.asList(list), showTermDocCountError, 0, otherDocCount);\n }\n- \n- /** This is used internally only, just for compare using global ordinal instead of term bytes in the PQ */\n+\n+ /**\n+ * This is used internally only, just for compare using global ordinal instead of term bytes in the PQ\n+ */\n static class OrdBucket extends InternalTerms.Bucket {\n long globalOrd;\n \n@@ -210,7 +212,7 @@ static class OrdBucket extends InternalTerms.Bucket {\n \n @Override\n int compareTerm(Terms.Bucket other) {\n- return Long.compare(globalOrd, ((OrdBucket)other).globalOrd);\n+ return Long.compare(globalOrd, ((OrdBucket) other).globalOrd);\n }\n \n @Override\n@@ -282,17 +284,17 @@ public void collect(int doc) throws IOException {\n public void collect(int doc) throws IOException {\n ords.setDocument(doc);\n final int numOrds = ords.cardinality();\n- for (int i = 0; i < numOrds; i++) {\n+ for (int i = 0; i < numOrds; i++) {\n final long globalOrd = ords.ordAt(i);\n- long bucketOrd = bucketOrds.add(globalOrd);\n- if (bucketOrd < 0) {\n- bucketOrd = -1 - bucketOrd;\n- collectExistingBucket(doc, bucketOrd);\n- } else {\n- collectBucket(doc, bucketOrd);\n- }\n- }\n- }\n+ long bucketOrd = bucketOrds.add(globalOrd);\n+ if (bucketOrd < 0) {\n+ bucketOrd = -1 - bucketOrd;\n+ collectExistingBucket(doc, bucketOrd);\n+ } else {\n+ collectBucket(doc, bucketOrd);\n+ }\n+ }\n+ }\n };\n }\n }\n@@ -332,7 +334,7 @@ protected Collector newCollector(final RandomAccessOrds ords) {\n final SortedDocValues singleValues = DocValues.unwrapSingleton(segmentOrds);\n if (singleValues != null) {\n return new Collector() {\n- @Override\n+ @Override\n public void collect(int doc) throws IOException {\n final int ord = singleValues.getOrd(doc);\n segmentDocCounts.increment(ord + 1, 1);\n@@ -343,7 +345,7 @@ public void collect(int doc) throws IOException {\n public void collect(int doc) throws IOException {\n segmentOrds.setDocument(doc);\n final int numOrds = 
segmentOrds.cardinality();\n- for (int i = 0; i < numOrds; i++) {\n+ for (int i = 0; i < numOrds; i++) {\n final long segmentOrd = segmentOrds.ordAt(i);\n segmentDocCounts.increment(segmentOrd + 1, 1);\n }\n@@ -361,7 +363,7 @@ public void setNextReader(AtomicReaderContext reader) {\n globalOrds = valuesSource.globalOrdinalsValues();\n segmentOrds = valuesSource.ordinalsValues();\n collector = newCollector(segmentOrds);\n- }\n+ }\n \n @Override\n protected void doPostCollection() {\n@@ -387,6 +389,8 @@ private void mapSegmentCountsToGlobalCounts() {\n mapping = null;\n }\n for (long i = 1; i < segmentDocCounts.size(); i++) {\n+ // We use set(...) here, because we need to reset the slow to 0.\n+ // segmentDocCounts get reused over the segments and otherwise counts would be too high.\n final int inc = segmentDocCounts.set(i, 0);\n if (inc == 0) {\n continue;\n@@ -437,8 +441,8 @@ public void doSetDocument(int docId) {\n if (accepted.get(ord)) {\n ords[cardinality++] = ord;\n }\n- }\n }\n+ }\n \n @Override\n public int cardinality() {", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/GlobalOrdinalsStringTermsAggregator.java", "status": "modified" }, { "diff": "@@ -32,11 +32,11 @@\n \n import java.util.*;\n \n+import static org.elasticsearch.index.query.QueryBuilders.hasChildQuery;\n import static org.elasticsearch.index.query.QueryBuilders.matchQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.greaterThan;\n import static org.hamcrest.Matchers.is;\n@@ -258,6 +258,66 @@ public void testWithDeletes() throws Exception {\n }\n }\n \n+ @Test\n+ public void testPostCollection() throws Exception {\n+ String indexName = \"prodcatalog\";\n+ String masterType = \"masterprod\";\n+ String childType = \"variantsku\";\n+ assertAcked(\n+ prepareCreate(indexName)\n+ .addMapping(masterType, \"brand\", \"type=string\", \"name\", \"type=string\", \"material\", \"type=string\")\n+ .addMapping(childType, \"_parent\", \"type=masterprod\", \"color\", \"type=string\", \"size\", \"type=string\")\n+ );\n+\n+ List<IndexRequestBuilder> requests = new ArrayList<>();\n+ requests.add(client().prepareIndex(indexName, masterType, \"1\").setSource(\"brand\", \"Levis\", \"name\", \"Style 501\", \"material\", \"Denim\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"0\").setParent(\"1\").setSource(\"color\", \"blue\", \"size\", \"32\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"1\").setParent(\"1\").setSource(\"color\", \"blue\", \"size\", \"34\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"2\").setParent(\"1\").setSource(\"color\", \"blue\", \"size\", \"36\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"3\").setParent(\"1\").setSource(\"color\", \"black\", \"size\", \"38\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"4\").setParent(\"1\").setSource(\"color\", \"black\", \"size\", \"40\"));\n+ requests.add(client().prepareIndex(indexName, childType, 
\"5\").setParent(\"1\").setSource(\"color\", \"gray\", \"size\", \"36\"));\n+\n+ requests.add(client().prepareIndex(indexName, masterType, \"2\").setSource(\"brand\", \"Wrangler\", \"name\", \"Regular Cut\", \"material\", \"Leather\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"6\").setParent(\"2\").setSource(\"color\", \"blue\", \"size\", \"32\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"7\").setParent(\"2\").setSource(\"color\", \"blue\", \"size\", \"34\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"8\").setParent(\"2\").setSource(\"color\", \"black\", \"size\", \"36\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"9\").setParent(\"2\").setSource(\"color\", \"black\", \"size\", \"38\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"10\").setParent(\"2\").setSource(\"color\", \"black\", \"size\", \"40\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"11\").setParent(\"2\").setSource(\"color\", \"orange\", \"size\", \"36\"));\n+ requests.add(client().prepareIndex(indexName, childType, \"12\").setParent(\"2\").setSource(\"color\", \"green\", \"size\", \"44\"));\n+ indexRandom(true, requests);\n+\n+ SearchResponse response = client().prepareSearch(indexName).setTypes(masterType)\n+ .setQuery(hasChildQuery(childType, termQuery(\"color\", \"orange\")))\n+ .addAggregation(children(\"my-refinements\")\n+ .childType(childType)\n+ .subAggregation(terms(\"my-colors\").field(childType + \".color\"))\n+ .subAggregation(terms(\"my-sizes\").field(childType + \".size\"))\n+ ).get();\n+ assertNoFailures(response);\n+ assertHitCount(response, 1);\n+\n+ Children childrenAgg = response.getAggregations().get(\"my-refinements\");\n+ assertThat(childrenAgg.getDocCount(), equalTo(7l));\n+\n+ Terms termsAgg = childrenAgg.getAggregations().get(\"my-colors\");\n+ assertThat(termsAgg.getBuckets().size(), equalTo(4));\n+ assertThat(termsAgg.getBucketByKey(\"black\").getDocCount(), equalTo(3l));\n+ assertThat(termsAgg.getBucketByKey(\"blue\").getDocCount(), equalTo(2l));\n+ assertThat(termsAgg.getBucketByKey(\"green\").getDocCount(), equalTo(1l));\n+ assertThat(termsAgg.getBucketByKey(\"orange\").getDocCount(), equalTo(1l));\n+\n+ termsAgg = childrenAgg.getAggregations().get(\"my-sizes\");\n+ assertThat(termsAgg.getBuckets().size(), equalTo(6));\n+ assertThat(termsAgg.getBucketByKey(\"36\").getDocCount(), equalTo(2l));\n+ assertThat(termsAgg.getBucketByKey(\"32\").getDocCount(), equalTo(1l));\n+ assertThat(termsAgg.getBucketByKey(\"34\").getDocCount(), equalTo(1l));\n+ assertThat(termsAgg.getBucketByKey(\"38\").getDocCount(), equalTo(1l));\n+ assertThat(termsAgg.getBucketByKey(\"40\").getDocCount(), equalTo(1l));\n+ assertThat(termsAgg.getBucketByKey(\"44\").getDocCount(), equalTo(1l));\n+ }\n+\n private static final class Control {\n \n final String category;", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/ChildrenTests.java", "status": "modified" } ] }
{ "body": "One of my nodes is 'low' on space. Down to ~16 GB.\n\nSo I tried to run curator to remove older logs and I get the following message:\n\n```\n[2015-01-12 11:53:50,250][INFO ][cluster.metadata ] [tetrad] [logstash-2014.12.13] deleting index\n[2015-01-12 11:53:50,251][DEBUG][action.admin.indices.delete] [tetrad] [logstash-2014.12.13] failed to delete index\njava.lang.IllegalStateException: Free bytes [4518450191893] cannot be less than 0 or greater than total bytes [4509977353216]\n at org.elasticsearch.cluster.DiskUsage.<init>(DiskUsage.java:36)\n at org.elasticsearch.cluster.routing.allocation.decider.DiskThresholdDecider.canRemain(DiskThresholdDecider.java:439)\n at org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders.canRemain(AllocationDeciders.java:105)\n at org.elasticsearch.cluster.routing.allocation.AllocationService.moveShards(AllocationService.java:257)\n at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:223)\n at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:160)\n at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:146)\n at org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService$2.execute(MetaDataDeleteIndexService.java:130)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:329)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:153)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\n[2015-01-12 11:53:56,767][INFO ][cluster.routing.allocation.decider] [tetrad] low disk watermark [15%] exceeded on [Pc_MAIWOQVe6qKNtKVIYpw][zefram] free: 16.3gb[14.5%], replicas will not be assigned to this node\n```\n\nThe actual error from curator is:\n\n```\nroot@tetrad:~# /root/.virtualenvs/curator/bin/curator --host localhost delete --older-than 10;\n2015-01-12 11:53:50,238 INFO Job starting...\n2015-01-12 11:53:50,241 INFO Deleting indices...\nTraceback (most recent call last):\n File \"/root/.virtualenvs/curator/bin/curator\", line 9, in <module>\n load_entry_point('elasticsearch-curator==2.1.1', 'console_scripts', 'curator')()\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/curator/curator_script.py\", line 364, in main\n arguments.func(client, **argdict)\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/curator/curator.py\", line 1025, in delete\n _op_loop(client, matching_indices, op=delete_index, dry_run=dry_run, **kwargs)\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/curator/curator.py\", line 767, in _op_loop\n skipped = op(client, item, **kwargs)\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/curator/curator.py\", line 610, in delete_index\n client.indices.delete(index=index_name)\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/elasticsearch/client/utils.py\", line 68, in _wrapped\n return func(*args, params=params, **kwargs)\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/elasticsearch/client/indices.py\", line 188, in delete\n params=params)\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/elasticsearch/transport.py\", line 301, in 
perform_request\n status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py\", line 82, in perform_request\n self._raise_error(response.status, raw_data)\n File \"/root/.virtualenvs/curator/local/lib/python2.7/site-packages/elasticsearch/connection/base.py\", line 102, in _raise_error\n raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)\nelasticsearch.exceptions.TransportError: TransportError(500, u'IllegalStateException[Free bytes [4518450191893] cannot be less than 0 or greater than total bytes [4509977353216]]')\nroot@tetrad:~# \n```\n", "comments": [ { "body": "One thing I forgot to mention is that the nodes use ZFS for their storage... Maybe the error about free bytes exceeding total bytes might be somehow related to ZFS compression?\n", "created_at": "2015-01-12T20:02:11Z" }, { "body": "The workaround:\n- stop the node that is low on disk space\n- run curator to delete older indexes\n- start the node that is low on disk space\n- the node will delete its old indexes, freeing up space\n", "created_at": "2015-01-12T20:05:04Z" }, { "body": "Related to #9249, the workaround there will work for this also until it is fixed.\n", "created_at": "2015-01-13T21:47:38Z" }, { "body": "Thanks for the pointer!\n", "created_at": "2015-01-13T22:01:33Z" } ], "number": 9260, "title": "Can't free up space because there's not enough space? ;)" }
{ "body": "Apparently some filesystems such as ZFS and occasionally NTFS can report\nfilesystem usages that are negative, or above the maximum total size of\nthe filesystem. This relaxes the constraints on `DiskUsage` so that an\nexception is not thrown.\n\nIf 0 is passed as the totalBytes, 1 is used to avoid a division-by-zero\nerror.\n\nFixes #9249\nRelates to #9260\n", "number": 9283, "review_comments": [ { "body": "I think this comment is no longer true?\n", "created_at": "2015-01-26T09:11:19Z" }, { "body": "I believe there is no test for this. Might be worth while adding.\n", "created_at": "2015-01-26T09:11:59Z" }, { "body": "maybe add a comment about why we do these weird tests?\n", "created_at": "2015-01-26T09:13:06Z" }, { "body": "maybe add a comment why we interpret totalBytes =0 as all free?\n", "created_at": "2015-01-26T09:13:58Z" } ], "title": "Relax restrictions on filesystem size reporting in DiskUsage" }
{ "commits": [ { "message": "Relax restrictions on filesystem size reporting\n\nApparently some filesystems such as ZFS and occasionally NTFS can report\nfilesystem usages that are negative, or above the maximum total size of\nthe filesystem. This relaxes the constraints on `DiskUsage` so that an\nexception is not thrown.\n\nIf 0 is passed as the totalBytes, `.getFreeDiskAsPercentage()` will\nalways return 100.0% free (to ensure the disk threshold decider fails\nopen)\n\nFixes #9249\nRelates to #9260" } ], "files": [ { "diff": "@@ -31,20 +31,32 @@ public class DiskUsage {\n final long totalBytes;\n final long freeBytes;\n \n+ /**\n+ * Create a new DiskUsage, if {@code totalBytes} is 0, {@get getFreeDiskAsPercentage}\n+ * will always return 100.0% free\n+ */\n public DiskUsage(String nodeId, String nodeName, long totalBytes, long freeBytes) {\n- if ((totalBytes < freeBytes) || (totalBytes < 0)) {\n- throw new IllegalStateException(\"Free bytes [\" + freeBytes +\n- \"] cannot be less than 0 or greater than total bytes [\" + totalBytes + \"]\");\n- }\n this.nodeId = nodeId;\n this.nodeName = nodeName;\n- this.totalBytes = totalBytes;\n this.freeBytes = freeBytes;\n+ this.totalBytes = totalBytes;\n+ }\n+\n+ public String getNodeId() {\n+ return nodeId;\n+ }\n+\n+ public String getNodeName() {\n+ return nodeName;\n }\n \n public double getFreeDiskAsPercentage() {\n- double freePct = 100.0 * ((double)freeBytes / totalBytes);\n- return freePct;\n+ // We return 100.0% in order to fail \"open\", in that if we have invalid\n+ // numbers for the total bytes, it's as if we don't know disk usage.\n+ if (totalBytes == 0) {\n+ return 100.0;\n+ }\n+ return 100.0 * ((double)freeBytes / totalBytes);\n }\n \n public long getFreeBytes() {", "filename": "src/main/java/org/elasticsearch/cluster/DiskUsage.java", "status": "modified" }, { "diff": "@@ -519,6 +519,9 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl\n * @return DiskUsage representing given node using the average disk usage\n */\n public DiskUsage averageUsage(RoutingNode node, Map<String, DiskUsage> usages) {\n+ if (usages.size() == 0) {\n+ return new DiskUsage(node.nodeId(), node.node().name(), 0, 0);\n+ }\n long totalBytes = 0;\n long freeBytes = 0;\n for (DiskUsage du : usages.values()) {\n@@ -537,7 +540,9 @@ public DiskUsage averageUsage(RoutingNode node, Map<String, DiskUsage> usages) {\n */\n public double freeDiskPercentageAfterShardAssigned(DiskUsage usage, Long shardSize) {\n shardSize = (shardSize == null) ? 
0 : shardSize;\n- return 100.0 - (((double)(usage.getUsedBytes() + shardSize) / usage.getTotalBytes()) * 100.0);\n+ DiskUsage newUsage = new DiskUsage(usage.getNodeId(), usage.getNodeName(),\n+ usage.getTotalBytes(), usage.getFreeBytes() - shardSize);\n+ return newUsage.getFreeDiskAsPercentage();\n }\n \n /**", "filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java", "status": "modified" }, { "diff": "@@ -34,6 +34,25 @@ public void diskUsageCalcTest() {\n assertThat(du.getUsedBytes(), equalTo(60L));\n assertThat(du.getTotalBytes(), equalTo(100L));\n \n+ // Test that DiskUsage handles invalid numbers, as reported by some\n+ // filesystems (ZFS & NTFS)\n+ DiskUsage du2 = new DiskUsage(\"node1\", \"n1\", 100, 101);\n+ assertThat(du2.getFreeDiskAsPercentage(), equalTo(101.0));\n+ assertThat(du2.getFreeBytes(), equalTo(101L));\n+ assertThat(du2.getUsedBytes(), equalTo(-1L));\n+ assertThat(du2.getTotalBytes(), equalTo(100L));\n+\n+ DiskUsage du3 = new DiskUsage(\"node1\", \"n1\", -1, -1);\n+ assertThat(du3.getFreeDiskAsPercentage(), equalTo(100.0));\n+ assertThat(du3.getFreeBytes(), equalTo(-1L));\n+ assertThat(du3.getUsedBytes(), equalTo(0L));\n+ assertThat(du3.getTotalBytes(), equalTo(-1L));\n+\n+ DiskUsage du4 = new DiskUsage(\"node1\", \"n1\", 0, 0);\n+ assertThat(du4.getFreeDiskAsPercentage(), equalTo(100.0));\n+ assertThat(du4.getFreeBytes(), equalTo(0L));\n+ assertThat(du4.getUsedBytes(), equalTo(0L));\n+ assertThat(du4.getTotalBytes(), equalTo(0L));\n }\n \n @Test\n@@ -42,18 +61,17 @@ public void randomDiskUsageTest() {\n for (int i = 1; i < iters; i++) {\n long total = between(Integer.MIN_VALUE, Integer.MAX_VALUE);\n long free = between(Integer.MIN_VALUE, Integer.MAX_VALUE);\n- if (free > total || total <= 0) {\n- try {\n- new DiskUsage(\"random\", \"random\", total, free);\n- fail(\"should never reach this\");\n- } catch (IllegalStateException e) {\n- }\n+ DiskUsage du = new DiskUsage(\"random\", \"random\", total, free);\n+ if (total == 0) {\n+ assertThat(du.getFreeBytes(), equalTo(free));\n+ assertThat(du.getTotalBytes(), equalTo(0L));\n+ assertThat(du.getUsedBytes(), equalTo(-free));\n+ assertThat(du.getFreeDiskAsPercentage(), equalTo(100.0));\n } else {\n- DiskUsage du = new DiskUsage(\"random\", \"random\", total, free);\n assertThat(du.getFreeBytes(), equalTo(free));\n assertThat(du.getTotalBytes(), equalTo(total));\n assertThat(du.getUsedBytes(), equalTo(total - free));\n- assertThat(du.getFreeDiskAsPercentage(), equalTo(100.0 * ((double)free / total)));\n+ assertThat(du.getFreeDiskAsPercentage(), equalTo(100.0 * ((double) free / total)));\n }\n }\n }", "filename": "src/test/java/org/elasticsearch/cluster/DiskUsageTests.java", "status": "modified" } ] }
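For the decider side of the same change, here is a hedged stand-in (hypothetical method and class names, primitive arrays instead of the real DiskUsage/RoutingNode types) for the two helpers touched above: `averageUsage` now returns an all-zero usage for an empty map, and `freeDiskPercentageAfterShardAssigned` subtracts the shard size from the free bytes and reuses the same fail-open percentage rule.

```java
// Hypothetical stand-ins for the two DiskThresholdDecider helpers changed above.
import java.util.Arrays;
import java.util.Collections;
import java.util.Map;

public class DeciderSketch {
    // An empty usages map now yields a (0, 0) usage, which the fail-open
    // percentage rule treats as 100% free. Map value = {totalBytes, freeBytes}.
    static long[] averageUsage(Map<String, long[]> usages) {
        if (usages.isEmpty()) {
            return new long[] {0, 0};
        }
        long total = 0, free = 0;
        for (long[] du : usages.values()) {
            total += du[0];
            free += du[1];
        }
        return new long[] {total / usages.size(), free / usages.size()};
    }

    // Subtract the shard size from the free bytes and reuse the same
    // fail-open percentage computation instead of duplicating the math.
    static double freeAfterShardAssigned(long totalBytes, long freeBytes, Long shardSize) {
        long size = (shardSize == null) ? 0 : shardSize;
        long newFree = freeBytes - size;
        return totalBytes == 0 ? 100.0 : 100.0 * ((double) newFree / totalBytes);
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(averageUsage(Collections.<String, long[]>emptyMap()))); // [0, 0]
        System.out.println(freeAfterShardAssigned(100, 60, 30L)); // 30.0 (% free after assignment)
        System.out.println(freeAfterShardAssigned(0, 0, 30L));    // 100.0 -> decider fails open
    }
}
```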
{ "body": "I am using UNC path to store the data eg: \\pc1\\data.\nThis works perfect in version 1.3.4 but after I upgrade it to 1.3.7 or 1.4.2 it started give me error\n\nBelow is the error log\n\n[2015-01-12 16:03:33,087][INFO ][node ] [master1] version[1.3.7], pid[7068], build[3042293/2014-12-16T13:59:32Z]\n[2015-01-12 16:03:33,087][INFO ][node ] [master1] initializing ...\n[2015-01-12 16:03:33,102][INFO ][plugins ] [master1] loaded [], sites []\n[2015-01-12 16:03:36,249][INFO ][node ] [master1] initialized\n[2015-01-12 16:03:36,249][INFO ][node ] [master1] starting ...\n[2015-01-12 16:03:36,514][INFO ][transport ] [master1] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.2.101:9300]}\n[2015-01-12 16:03:36,795][INFO ][discovery ] [master1] local1/xBx3MDWZT--1dKhACi3KSA\n[2015-01-12 16:03:39,844][INFO ][cluster.service ] [master1] new_master [master1][xBx3MDWZT--1dKhACi3KSA][dell13][inet[/192.168.2.101:9300]]{master=true}, reason: zen-disco-join (elected_as_master)\n[2015-01-12 16:03:39,901][INFO ][http ] [master1] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.2.101:9200]}\n[2015-01-12 16:03:39,901][INFO ][node ] [master1] started\n[2015-01-12 16:03:39,964][INFO ][gateway ] [master1] recovered [0] indices into cluster_state\n[2015-01-12 16:04:09,878][DEBUG][action.admin.cluster.node.stats] [master1] failed to execute on node [xBx3MDWZT--1dKhACi3KSA]\njava.lang.IllegalStateException: Free bytes [-1] cannot be less than 0 or greater than total bytes [-1]\nat org.elasticsearch.cluster.DiskUsage.(DiskUsage.java:32)\nat org.elasticsearch.cluster.InternalClusterInfoService$ClusterInfoUpdateJob$1.onResponse(InternalClusterInfoService.java:279)\nat org.elasticsearch.cluster.InternalClusterInfoService$ClusterInfoUpdateJob$1.onResponse(InternalClusterInfoService.java:260)\nat org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.finishHim(TransportNodesOperationAction.java:221)\nat org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.onOperation(TransportNodesOperationAction.java:196)\nat org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction.access$900(TransportNodesOperationAction.java:96)\nat org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:140)\nat java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\nat java.lang.Thread.run(Unknown Source)\n", "comments": [ { "body": "@dobariya thanks for reporting this, can you tell me what kind of filesystem `\\pc1\\data` is? Is it a samba share or local disk?\n", "created_at": "2015-01-12T10:41:03Z" }, { "body": "@dakrone thanks for quick response. 
we have windows environment ,file system is NTFS\n", "created_at": "2015-01-12T10:45:13Z" }, { "body": "@dobariya also, as a workaround in the meantime, if you are _not_ using the [disk-based shard allocation](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html#disk) you can disable this by setting `cluster.routing.allocation.disk.threshold_enabled` to `false` and the disk usages will not be checked.\n", "created_at": "2015-01-12T10:45:33Z" }, { "body": "@dakrone Thanks for the quick response and this resolved my issue.\nI have one more question regarding version upgrade.\nI have indexed 2 million data using version 1.3.4 and now i want to grab the latest elasticsearch version.Can I directly upgrade it by just replacing the existing setup elasticsearch-1.3.4 folder to the latest version or do I have to configure other settings also?\n", "created_at": "2015-01-12T10:52:29Z" }, { "body": "> Can I directly upgrade it by just replacing the existing setup elasticsearch-1.3.4 folder to the latest version or do I have to configure other settings also?\n\nYou should be able to upgrade doing this, however, it would be good to check the release notes for the versions you are skipping to make sure there aren't any breaking changes that will impact you, in particular, if you're upgrading to 1.4.2, you should check the [1.4.0-beta1 release notes](http://www.elasticsearch.org/downloads/1-4-0-beta1/) which lists the breaking changes between 1.3.x and 1.4.x (it is a small list)\n", "created_at": "2015-01-12T10:58:04Z" }, { "body": "Thanks buddy.\ncan you please guide me in setting up my configuration.\nI have 3 nodes\nnode1:master=true and data=true\nnode2:master=true and data=false\nnode3:master=false and data=false\n\nam storing the data on only one single UNC path which i already mentioned the the above comments.All above nodes are pointed to this single UNC path\n\nPlease guide whether above nodes setting, all nodes pointing the one single UNC path is a good practice or not\n", "created_at": "2015-01-12T11:04:18Z" } ], "number": 9249, "title": "Elastic search 1.4.2 data path issue" }
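The workaround quoted in the comments above can also be applied from the Java client. The sketch below is only an assumption-laden illustration: the setting name comes from the comment, but `prepareUpdateSettings`, `setTransientSettings` and `ImmutableSettings` are written from memory of the 1.x client API and should be verified against the client version actually in use; an already-constructed `Client` instance is assumed.

```java
// Hedged example of applying the workaround above via the 1.x Java client.
// ASSUMPTIONS: the builder method names and ImmutableSettings are taken from
// the 1.x API from memory; check them against your client version.
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

public class DisableDiskThresholdWorkaround {
    static void disableDiskThreshold(Client client) {
        client.admin().cluster().prepareUpdateSettings()
                .setTransientSettings(ImmutableSettings.settingsBuilder()
                        .put("cluster.routing.allocation.disk.threshold_enabled", false))
                .get();
    }
}
```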
{ "body": "Apparently some filesystems such as ZFS and occasionally NTFS can report\nfilesystem usages that are negative, or above the maximum total size of\nthe filesystem. This relaxes the constraints on `DiskUsage` so that an\nexception is not thrown.\n\nIf 0 is passed as the totalBytes, 1 is used to avoid a division-by-zero\nerror.\n\nFixes #9249\nRelates to #9260\n", "number": 9283, "review_comments": [ { "body": "I think this comment is no longer true?\n", "created_at": "2015-01-26T09:11:19Z" }, { "body": "I believe there is no test for this. Might be worth while adding.\n", "created_at": "2015-01-26T09:11:59Z" }, { "body": "maybe add a comment about why we do these weird tests?\n", "created_at": "2015-01-26T09:13:06Z" }, { "body": "maybe add a comment why we interpret totalBytes =0 as all free?\n", "created_at": "2015-01-26T09:13:58Z" } ], "title": "Relax restrictions on filesystem size reporting in DiskUsage" }
{ "commits": [ { "message": "Relax restrictions on filesystem size reporting\n\nApparently some filesystems such as ZFS and occasionally NTFS can report\nfilesystem usages that are negative, or above the maximum total size of\nthe filesystem. This relaxes the constraints on `DiskUsage` so that an\nexception is not thrown.\n\nIf 0 is passed as the totalBytes, `.getFreeDiskAsPercentage()` will\nalways return 100.0% free (to ensure the disk threshold decider fails\nopen)\n\nFixes #9249\nRelates to #9260" } ], "files": [ { "diff": "@@ -31,20 +31,32 @@ public class DiskUsage {\n final long totalBytes;\n final long freeBytes;\n \n+ /**\n+ * Create a new DiskUsage, if {@code totalBytes} is 0, {@get getFreeDiskAsPercentage}\n+ * will always return 100.0% free\n+ */\n public DiskUsage(String nodeId, String nodeName, long totalBytes, long freeBytes) {\n- if ((totalBytes < freeBytes) || (totalBytes < 0)) {\n- throw new IllegalStateException(\"Free bytes [\" + freeBytes +\n- \"] cannot be less than 0 or greater than total bytes [\" + totalBytes + \"]\");\n- }\n this.nodeId = nodeId;\n this.nodeName = nodeName;\n- this.totalBytes = totalBytes;\n this.freeBytes = freeBytes;\n+ this.totalBytes = totalBytes;\n+ }\n+\n+ public String getNodeId() {\n+ return nodeId;\n+ }\n+\n+ public String getNodeName() {\n+ return nodeName;\n }\n \n public double getFreeDiskAsPercentage() {\n- double freePct = 100.0 * ((double)freeBytes / totalBytes);\n- return freePct;\n+ // We return 100.0% in order to fail \"open\", in that if we have invalid\n+ // numbers for the total bytes, it's as if we don't know disk usage.\n+ if (totalBytes == 0) {\n+ return 100.0;\n+ }\n+ return 100.0 * ((double)freeBytes / totalBytes);\n }\n \n public long getFreeBytes() {", "filename": "src/main/java/org/elasticsearch/cluster/DiskUsage.java", "status": "modified" }, { "diff": "@@ -519,6 +519,9 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl\n * @return DiskUsage representing given node using the average disk usage\n */\n public DiskUsage averageUsage(RoutingNode node, Map<String, DiskUsage> usages) {\n+ if (usages.size() == 0) {\n+ return new DiskUsage(node.nodeId(), node.node().name(), 0, 0);\n+ }\n long totalBytes = 0;\n long freeBytes = 0;\n for (DiskUsage du : usages.values()) {\n@@ -537,7 +540,9 @@ public DiskUsage averageUsage(RoutingNode node, Map<String, DiskUsage> usages) {\n */\n public double freeDiskPercentageAfterShardAssigned(DiskUsage usage, Long shardSize) {\n shardSize = (shardSize == null) ? 
0 : shardSize;\n- return 100.0 - (((double)(usage.getUsedBytes() + shardSize) / usage.getTotalBytes()) * 100.0);\n+ DiskUsage newUsage = new DiskUsage(usage.getNodeId(), usage.getNodeName(),\n+ usage.getTotalBytes(), usage.getFreeBytes() - shardSize);\n+ return newUsage.getFreeDiskAsPercentage();\n }\n \n /**", "filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java", "status": "modified" }, { "diff": "@@ -34,6 +34,25 @@ public void diskUsageCalcTest() {\n assertThat(du.getUsedBytes(), equalTo(60L));\n assertThat(du.getTotalBytes(), equalTo(100L));\n \n+ // Test that DiskUsage handles invalid numbers, as reported by some\n+ // filesystems (ZFS & NTFS)\n+ DiskUsage du2 = new DiskUsage(\"node1\", \"n1\", 100, 101);\n+ assertThat(du2.getFreeDiskAsPercentage(), equalTo(101.0));\n+ assertThat(du2.getFreeBytes(), equalTo(101L));\n+ assertThat(du2.getUsedBytes(), equalTo(-1L));\n+ assertThat(du2.getTotalBytes(), equalTo(100L));\n+\n+ DiskUsage du3 = new DiskUsage(\"node1\", \"n1\", -1, -1);\n+ assertThat(du3.getFreeDiskAsPercentage(), equalTo(100.0));\n+ assertThat(du3.getFreeBytes(), equalTo(-1L));\n+ assertThat(du3.getUsedBytes(), equalTo(0L));\n+ assertThat(du3.getTotalBytes(), equalTo(-1L));\n+\n+ DiskUsage du4 = new DiskUsage(\"node1\", \"n1\", 0, 0);\n+ assertThat(du4.getFreeDiskAsPercentage(), equalTo(100.0));\n+ assertThat(du4.getFreeBytes(), equalTo(0L));\n+ assertThat(du4.getUsedBytes(), equalTo(0L));\n+ assertThat(du4.getTotalBytes(), equalTo(0L));\n }\n \n @Test\n@@ -42,18 +61,17 @@ public void randomDiskUsageTest() {\n for (int i = 1; i < iters; i++) {\n long total = between(Integer.MIN_VALUE, Integer.MAX_VALUE);\n long free = between(Integer.MIN_VALUE, Integer.MAX_VALUE);\n- if (free > total || total <= 0) {\n- try {\n- new DiskUsage(\"random\", \"random\", total, free);\n- fail(\"should never reach this\");\n- } catch (IllegalStateException e) {\n- }\n+ DiskUsage du = new DiskUsage(\"random\", \"random\", total, free);\n+ if (total == 0) {\n+ assertThat(du.getFreeBytes(), equalTo(free));\n+ assertThat(du.getTotalBytes(), equalTo(0L));\n+ assertThat(du.getUsedBytes(), equalTo(-free));\n+ assertThat(du.getFreeDiskAsPercentage(), equalTo(100.0));\n } else {\n- DiskUsage du = new DiskUsage(\"random\", \"random\", total, free);\n assertThat(du.getFreeBytes(), equalTo(free));\n assertThat(du.getTotalBytes(), equalTo(total));\n assertThat(du.getUsedBytes(), equalTo(total - free));\n- assertThat(du.getFreeDiskAsPercentage(), equalTo(100.0 * ((double)free / total)));\n+ assertThat(du.getFreeDiskAsPercentage(), equalTo(100.0 * ((double) free / total)));\n }\n }\n }", "filename": "src/test/java/org/elasticsearch/cluster/DiskUsageTests.java", "status": "modified" } ] }
{ "body": "Use case below.\n\nhttps://gist.github.com/ppf2/bfa38b8284e09c0740ad\n\nQuery cache is turned on and the query uses search_type=count. If the query is run with just the aggregation part, the query cache is populated. As soon as a range filter is added to the query, it stops populating the query cache. Not sure why that is ...\n", "comments": [ { "body": "I reported this Dec. 19 https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/elasticsearch/N39iejK6c5o/qj1AUqPxweUJ with a gist but got no feedback whatsoever.\n", "created_at": "2015-02-17T16:42:44Z" }, { "body": "@ppf2 - were you able to confirm that the Gist you posted here works with ES 1.4.3? Using 1.4.3, I've been trying to get the query cache populated with a date range filter, but have been unsuccessful so far - not sure if it's something I'm doing wrong, if the bug is still there.\n", "created_at": "2015-02-18T16:54:37Z" }, { "body": "@jpountz Just tested 1.4.4 with the gist in the original filing, it looks like it is still not populating the query cache when there is a range filter.\n", "created_at": "2015-02-26T04:52:30Z" }, { "body": "I am reopening this then... thanks for testing\n", "created_at": "2015-02-26T06:53:11Z" }, { "body": "OK, I just found the issue. There were merge conflicts when I backported to 1.4 and I left a call to SearchContext.nowInMillis. Although it did not break functionality, it disabled the query cache since we do not cache requests when the current timestamp is used. I just pushed a fix and will update the labels. For the record, things were and are still fine on master and 1.x.\n", "created_at": "2015-02-26T08:31:36Z" }, { "body": "It did not work on 1.4 because of a merging issue. This is fixed now and will be available in 1.4.5. Sorry for the annoyance.\n", "created_at": "2015-03-16T17:43:12Z" } ], "number": 9225, "title": "Shard query cache not populated when there is a range filter" }
{ "body": "The query cache has a mechanism that disables it automatically when\nSearchContext.nowInMillis() is used. One issue with that is that the date math\nparser always evaluates the current timestamp when parsing a date, even if it\nis not needed. As a consequence, whenever you use a date expression in your\nqueries, the query cache would not be used.\n\nClose #9225\n", "number": 9269, "review_comments": [], "title": "Queries are never cached when date math expressions are used (including exact dates)" }
{ "commits": [ { "message": "Query cache: Make the query cache usable on time-based data.\n\nThe query cache has a mechanism that disables it automatically when\nSearchContext.nowInMillis() is used. One issue with that is that the date math\nparser always evaluates the current timestamp when parsing a date, even if it\nis not needed. As a consequence, whenever you use a date expression in your\nqueries, the query cache would not be used.\n\nClose #9225" } ], "files": [ { "diff": "@@ -25,6 +25,7 @@\n import org.joda.time.MutableDateTime;\n import org.joda.time.format.DateTimeFormatter;\n \n+import java.util.concurrent.Callable;\n import java.util.concurrent.TimeUnit;\n \n /**\n@@ -46,15 +47,22 @@ public DateMathParser(FormatDateTimeFormatter dateTimeFormatter, TimeUnit timeUn\n this.timeUnit = timeUnit;\n }\n \n- public long parse(String text, long now) {\n+ public long parse(String text, Callable<Long> now) {\n return parse(text, now, false, null);\n }\n \n- public long parse(String text, long now, boolean roundUp, DateTimeZone timeZone) {\n+ // Note: we take a callable here for the timestamp in order to be able to figure out\n+ // if it has been used. For instance, the query cache does not cache queries that make\n+ // use of `now`.\n+ public long parse(String text, Callable<Long> now, boolean roundUp, DateTimeZone timeZone) {\n long time;\n String mathString;\n if (text.startsWith(\"now\")) {\n- time = now;\n+ try {\n+ time = now.call();\n+ } catch (Exception e) {\n+ throw new ElasticsearchParseException(\"Could not read the current timestamp\", e);\n+ }\n mathString = text.substring(\"now\".length());\n } else {\n int index = text.indexOf(\"||\");", "filename": "src/main/java/org/elasticsearch/common/joda/DateMathParser.java", "status": "modified" }, { "diff": "@@ -68,6 +68,7 @@\n import java.util.List;\n import java.util.Locale;\n import java.util.Map;\n+import java.util.concurrent.Callable;\n import java.util.concurrent.TimeUnit;\n \n import static org.elasticsearch.index.mapper.MapperBuilders.dateField;\n@@ -271,9 +272,21 @@ private String convertToString(Object value) {\n return value.toString();\n }\n \n+ private static Callable<Long> now() {\n+ return new Callable<Long>() {\n+ @Override\n+ public Long call() {\n+ final SearchContext context = SearchContext.current();\n+ return context != null\n+ ? context.nowInMillis()\n+ : System.currentTimeMillis();\n+ }\n+ };\n+ }\n+\n @Override\n public Query fuzzyQuery(String value, Fuzziness fuzziness, int prefixLength, int maxExpansions, boolean transpositions) {\n- long iValue = dateMathParser.parse(value, System.currentTimeMillis());\n+ long iValue = dateMathParser.parse(value, now());\n long iSim;\n try {\n iSim = fuzziness.asTimeValue().millis();\n@@ -306,13 +319,11 @@ public long parseToMilliseconds(Object value, boolean inclusive, @Nullable DateT\n }\n \n public long parseToMilliseconds(String value, boolean inclusive, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n- SearchContext sc = SearchContext.current();\n- long now = sc == null ? 
System.currentTimeMillis() : sc.nowInMillis();\n DateMathParser dateParser = dateMathParser;\n if (forcedDateParser != null) {\n dateParser = forcedDateParser;\n }\n- return dateParser.parse(value, now, inclusive, zone);\n+ return dateParser.parse(value, now(), inclusive, zone);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import java.text.NumberFormat;\n import java.text.ParseException;\n import java.util.Locale;\n+import java.util.concurrent.Callable;\n import java.util.concurrent.TimeUnit;\n \n /**\n@@ -92,8 +93,14 @@ public DateMath(DateMathParser parser) {\n }\n \n @Override\n- public long parseLong(String value, SearchContext searchContext) {\n- return parser.parse(value, searchContext.nowInMillis());\n+ public long parseLong(String value, final SearchContext searchContext) {\n+ final Callable<Long> now = new Callable<Long>() {\n+ @Override\n+ public Long call() throws Exception {\n+ return searchContext.nowInMillis();\n+ }\n+ };\n+ return parser.parse(value, now);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/search/aggregations/support/format/ValueParser.java", "status": "modified" }, { "diff": "@@ -24,25 +24,36 @@\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.joda.time.DateTimeZone;\n \n+import java.util.concurrent.Callable;\n import java.util.concurrent.TimeUnit;\n+import java.util.concurrent.atomic.AtomicBoolean;\n \n import static org.hamcrest.Matchers.equalTo;\n \n public class DateMathParserTests extends ElasticsearchTestCase {\n FormatDateTimeFormatter formatter = Joda.forPattern(\"dateOptionalTime\");\n DateMathParser parser = new DateMathParser(formatter, TimeUnit.MILLISECONDS);\n \n+ private static Callable<Long> callable(final long value) {\n+ return new Callable<Long>() {\n+ @Override\n+ public Long call() throws Exception {\n+ return value;\n+ }\n+ };\n+ }\n+\n void assertDateMathEquals(String toTest, String expected) {\n assertDateMathEquals(toTest, expected, 0, false, null);\n }\n \n- void assertDateMathEquals(String toTest, String expected, long now, boolean roundUp, DateTimeZone timeZone) {\n- long gotMillis = parser.parse(toTest, now, roundUp, timeZone);\n+ void assertDateMathEquals(String toTest, String expected, final long now, boolean roundUp, DateTimeZone timeZone) {\n+ long gotMillis = parser.parse(toTest, callable(now), roundUp, timeZone);\n assertDateEquals(gotMillis, toTest, expected);\n }\n \n void assertDateEquals(long gotMillis, String original, String expected) {\n- long expectedMillis = parser.parse(expected, 0);\n+ long expectedMillis = parser.parse(expected, callable(0));\n if (gotMillis != expectedMillis) {\n fail(\"Date math not equal\\n\" +\n \"Original : \" + original + \"\\n\" +\n@@ -119,7 +130,8 @@ public void testMultipleAdjustments() {\n \n \n public void testNow() {\n- long now = parser.parse(\"2014-11-18T14:27:32\", 0, false, null);\n+ final long now = parser.parse(\"2014-11-18T14:27:32\", callable(0), false, null);\n+ \n assertDateMathEquals(\"now\", \"2014-11-18T14:27:32\", now, false, null);\n assertDateMathEquals(\"now+M\", \"2014-12-18T14:27:32\", now, false, null);\n assertDateMathEquals(\"now-2d\", \"2014-11-16T14:27:32\", now, false, null);\n@@ -181,7 +193,7 @@ public void testTimestamps() {\n \n // also check other time units\n DateMathParser parser = new DateMathParser(Joda.forPattern(\"dateOptionalTime\"), TimeUnit.SECONDS);\n- long datetime = parser.parse(\"1418248078\", 
0);\n+ long datetime = parser.parse(\"1418248078\", callable(0));\n assertDateEquals(datetime, \"1418248078\", \"2014-12-10T21:47:58.000\");\n \n // a timestamp before 10000 is a year\n@@ -194,7 +206,7 @@ public void testTimestamps() {\n \n void assertParseException(String msg, String date, String exc) {\n try {\n- parser.parse(date, 0);\n+ parser.parse(date, callable(0));\n fail(\"Date: \" + date + \"\\n\" + msg);\n } catch (ElasticsearchParseException e) {\n assertThat(ExceptionsHelper.detailedMessage(e).contains(exc), equalTo(true));\n@@ -213,4 +225,19 @@ public void testIllegalDateFormat() {\n assertParseException(\"Expected bad timestamp exception\", Long.toString(Long.MAX_VALUE) + \"0\", \"timestamp\");\n assertParseException(\"Expected bad date format exception\", \"123bogus\", \"with format\");\n }\n+\n+ public void testOnlyCallsNowIfNecessary() {\n+ final AtomicBoolean called = new AtomicBoolean();\n+ final Callable<Long> now = new Callable<Long>() {\n+ @Override\n+ public Long call() throws Exception {\n+ called.set(true);\n+ return 42L;\n+ }\n+ };\n+ parser.parse(\"2014-11-18T14:27:32\", now, false, null);\n+ assertFalse(called.get());\n+ parser.parse(\"now/d\", now, false, null);\n+ assertTrue(called.get());\n+ }\n }", "filename": "src/test/java/org/elasticsearch/common/joda/DateMathParserTests.java", "status": "modified" } ] }
{ "body": "The RPM init.d script makes use of `CONF_FILE` to verify that it exists, but then it does not pass that file onto the ES process.\n\nhttps://github.com/elasticsearch/elasticsearch/blob/master/src/rpm/init.d/elasticsearch#L89\n\nThis should pass in the `es.default.config` setting along with the rest [to match the Debian init.d script's behavior](https://github.com/elasticsearch/elasticsearch/blob/master/src/deb/init.d/elasticsearch#L107) and [the Windows service launcher's behavior](https://github.com/elasticsearch/elasticsearch/blob/master/bin/service.bat#L146).\n\n```\n-Des.default.config=$CONF_FILE\n```\n", "comments": [ { "body": "CONF_FILE has been removed\n", "created_at": "2015-12-03T19:08:49Z" } ], "number": 9050, "title": "RPM init.d script does not pass CONF_FILE to ES" }
{ "body": "The current init.d script does not pass CONF_FILE variable to ES.\n\nCloses #9050\n", "number": 9254, "review_comments": [], "title": "RPM: Add -Des.default.config=$CONF_FILE to daemon command" }
{ "commits": [ { "message": "RPM: Add -Des.default.config=$CONF_FILE to daemon command\n\nThe current init.d script does not pass CONF_FILE variable to ES.\n\nCloses #9050" } ], "files": [ { "diff": "@@ -86,7 +86,7 @@ start() {\n fi\n echo -n $\"Starting $prog: \"\n # if not running, start it up here, usually something like \"daemon $exec\"\n- daemon --user $ES_USER --pidfile $pidfile $exec -p $pidfile -d -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.path.work=$WORK_DIR -Des.default.path.conf=$CONF_DIR\n+ daemon --user $ES_USER --pidfile $pidfile $exec -p $pidfile -d -Des.default.config=$CONF_FILE -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.path.work=$WORK_DIR -Des.default.path.conf=$CONF_DIR\n retval=$?\n echo\n [ $retval -eq 0 ] && touch $lockfile", "filename": "src/rpm/init.d/elasticsearch", "status": "modified" } ] }
{ "body": "In trying to figure out whether I could use multiple fields in a `field_value_factor` clause (which is not possible, I think?), I spotted a bug.\n\nPassing in arrays like so:\n\n```\n\"field_value_factor\": {\n \"field\": [\"age\", \"price\"],\n \"modifier\": [ \"log1p\", \"sqrt\"],\n \"factor\": [1.2, 2]\n}\n```\n\nLeads to this being executed:\n\n```\n\"field_value_factor\": {\n \"field\": \"price\",\n \"modifier\": \"sqrt\"\n \"factor\": 2\n}\n```\n\nIt might be better to throw an error if unexpected data types are passed in here.\n\nAs an aside, it would be great to be able to use multiple fields somehow in the `field_value_factor` to tune the score.\n", "comments": [ { "body": "It seems a good one to start contributing. \n", "created_at": "2014-08-23T21:25:40Z" }, { "body": "I tried it and I think the fix is pretty simple.\n\nThe problem is that to test this it takes a newbie like me way more effort than to fix it.\n\nAny ideas?\n", "created_at": "2014-09-02T07:02:56Z" }, { "body": "@cfontes you should be able to take a look at `FieldValueFactorParser` line 67 for where the factor is parsed and add some code to determine whether it is an array or a single value. The rest of the parsing is in this file also. The actual execution is in `FieldValueFactorFunction.score`.\n", "created_at": "2014-09-02T07:44:01Z" }, { "body": "Whoops sorry, I misread your message about the testing being the hard part instead of the feature. You should be able to add a test in `FunctionScoreTests` that just checks that the query is not allowed and throws an exception.\n", "created_at": "2014-09-02T07:52:36Z" }, { "body": "@dakrone thanks, I looked at that before and as I said it looks easy to fix. I ended up on those exact places.\n\nMy problem is with testing this, because as far as I can see from my limited knowledge of ES code\nI would need to create valid instances of `QueryParseContext parseContext`, `XContentParser parser` which in itself is not easy task as far as I could understand ( I tryed it, not just talking the talk.)\n\nIt would be great to have a mocking framework or some easy way to get those kinds of Objects for testing purpose. Is there any already in place that I couldn't find?\n", "created_at": "2014-09-02T07:54:48Z" }, { "body": "@cfontes you should use an integration test for this instead of mocking everything out, see this test: https://github.com/elasticsearch/elasticsearch/blob/master/src/test/java/org/elasticsearch/search/functionscore/FunctionScoreFieldValueTests.java#L41 the bottom section has some try/catches that catch invalid queries.\n", "created_at": "2014-09-02T07:57:41Z" }, { "body": "@dakrone, thanks! I will!\n\nAny thoughts on why that should be tested as part of an integration test? It looks like a simple case of method internal validations that looks good in a Unit test.\n\nI don't want to be an ass or anything (sorry if it sounds like that...) 
just trying to understand the way you guys work.\n", "created_at": "2014-09-02T08:20:53Z" }, { "body": "@cfontes no problem at all :)\n\nUsually it makes sense to integration test these sort of queries because you're testing for results returned instead of an exception being thrown, so having an integration framework where you can index some documents and check query results is useful.\n\nYou can get the parseContext by using the injector to get the IndexService and unit test it if you'd like also, here's an example of getting one: https://github.com/elasticsearch/elasticsearch/blob/master/src/test/java/org/elasticsearch/index/search/FieldDataTermsFilterTests.java#L76-L80 , however, just getting the injector means starting up nodes so an integration test works just as well for this. If you do want to add a unit test for this that would be great, but not required.\n", "created_at": "2014-09-02T08:29:07Z" } ], "number": 7408, "title": "field_value_factor accepts arrays values, does not throw error" }
{ "body": "closes #7408\n", "number": 9246, "review_comments": [ { "body": "Can you add a check for `XContentParser.Token.START_OBJECT` as well here, since it can't be an object either?\n", "created_at": "2015-01-21T11:37:01Z" } ], "title": "Raise an exception on an array of values being sent as the factor for a `field_value_factor` query" }
{ "commits": [ { "message": "Raise an exception on an array of values being sent as the factor for a field_value_factor query\ncloses #7408" } ], "files": [ { "diff": "@@ -70,6 +70,8 @@ public ScoreFunction parse(QueryParseContext parseContext, XContentParser parser\n } else {\n throw new QueryParsingException(parseContext.index(), NAMES[0] + \" query does not support [\" + currentFieldName + \"]\");\n }\n+ } else if(\"factor\".equals(currentFieldName) && (token == XContentParser.Token.START_ARRAY || token == XContentParser.Token.START_OBJECT)) {\n+ throw new QueryParsingException(parseContext.index(), \"[\" + NAMES[0] + \"] field 'factor' does not support lists or objects\");\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/index/query/functionscore/fieldvaluefactor/FieldValueFactorFunctionParser.java", "status": "modified" }, { "diff": "@@ -100,5 +100,34 @@ public void testFieldValueFactor() throws IOException {\n // This is fine, the query will throw an exception if executed\n // locally, instead of just having failures\n }\n+\n+ // don't permit an array of factors\n+ try {\n+ String querySource = \"{\" +\n+ \"\\\"query\\\": {\" +\n+ \" \\\"function_score\\\": {\" +\n+ \" \\\"query\\\": {\" +\n+ \" \\\"match\\\": {\\\"name\\\": \\\"foo\\\"}\" +\n+ \" },\" +\n+ \" \\\"functions\\\": [\" +\n+ \" {\" +\n+ \" \\\"field_value_factor\\\": {\" +\n+ \" \\\"field\\\": \\\"test\\\",\" +\n+ \" \\\"factor\\\": [1.2,2]\" +\n+ \" }\" +\n+ \" }\" +\n+ \" ]\" +\n+ \" }\" +\n+ \" }\" +\n+ \"}\";\n+ response = client().prepareSearch(\"test\")\n+ .setSource(querySource)\n+ .get();\n+ assertFailures(response);\n+ } catch (SearchPhaseExecutionException e) {\n+ // This is fine, the query will throw an exception if executed\n+ // locally, instead of just having failures\n+ }\n+\n }\n }", "filename": "src/test/java/org/elasticsearch/search/functionscore/FunctionScoreFieldValueTests.java", "status": "modified" } ] }
{ "body": "As reported in elasticsearch/elasticsearch-ruby#29, the `indices_boost` specified as the URL parameter seems to be ignored.\n\nRelevant source code:\n\nhttps://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java#L230\n\nI've removed it from the JSON specs in 81cddacffa5e19c5de91ca991a6a9d23a1dec736.\n", "comments": [], "number": 6281, "title": "Remove the `indices_boost` URL param to search as it doesn't work" }
{ "body": "Closes #6281\n", "number": 9244, "review_comments": [], "title": "Remove `indices_boost` URL param" }
{ "commits": [ { "message": "Remove indices_boost URL param, closes #6281" } ], "files": [ { "diff": "@@ -246,27 +246,6 @@ public static SearchSourceBuilder parseSearchSource(RestRequest request) {\n }\n }\n \n- String sIndicesBoost = request.param(\"indices_boost\");\n- if (sIndicesBoost != null) {\n- if (searchSourceBuilder == null) {\n- searchSourceBuilder = new SearchSourceBuilder();\n- }\n- String[] indicesBoost = Strings.splitStringByCommaToArray(sIndicesBoost);\n- for (String indexBoost : indicesBoost) {\n- int divisor = indexBoost.indexOf(',');\n- if (divisor == -1) {\n- throw new ElasticsearchIllegalArgumentException(\"Illegal index boost [\" + indexBoost + \"], no ','\");\n- }\n- String indexName = indexBoost.substring(0, divisor);\n- String sBoost = indexBoost.substring(divisor + 1);\n- try {\n- searchSourceBuilder.indexBoost(indexName, Float.parseFloat(sBoost));\n- } catch (NumberFormatException e) {\n- throw new ElasticsearchIllegalArgumentException(\"Illegal index boost [\" + indexBoost + \"], boost not a float number\");\n- }\n- }\n- }\n-\n String sStats = request.param(\"stats\");\n if (sStats != null) {\n if (searchSourceBuilder == null) {", "filename": "src/main/java/org/elasticsearch/rest/action/search/RestSearchAction.java", "status": "modified" } ] }
{ "body": "The get field mapping api currently checks for data read blocks rather than metadata blocks. Seems like a bug given that the api allows to retrieve mappings and not data. This can be seen by adding a metadata block to an index and then calling the get field mapping api, which works while it should be rejected.\n\nThe problem might stem from the fact that metadata block currently means read and write, causing different issues, which will be solved with #9203. Once that is in we will be able to only check for metadata read blocks instead.\n", "comments": [], "number": 10521, "title": "get field mapping api should check for metadata blocks" }
{ "body": "While looking at #3703, it looks like the usage of 'read_only' is a bit confusing.\n\nThe documentation stipulates:\n\n> index.blocks.read_only\n> Set to true to have the index read only, false to allow writes and metadata changes.\n\nActually, it blocks all write operations (creating, indexing or deleting a document) but also all read/write operations on index metadata. This can lead to confusion because subsequent 'indices exists' requests will return an undocumented 403 Forbidden response code where one could expect a 200 OK (as reported in #3703). Same for 'type exists' and 'alias exists' requests.\n\nSimilar issues have been reported:\n- _status #2833 (deprecated in 1.2.0)\n- _stats #5876\n- /_cat/indices #5855\n\nBut several API calls returns a 403 when at least one index of the cluster has 'index.blocks.read_only' set to true:\n\n```\nDELETE /_all\n\nPOST /library/book/1\n{\n \"title\": \"Elasticsearch: the definitive guide\"\n}\n\nPOST /movies/dvd/1\n{\n \"title\": \"Elasticsearch: the movie\"\n}\n\nPUT /library/_settings\n{\n \"index\": {\n \"blocks.read_only\": true\n }\n}\n```\n\nThe following calls will return 403 / \"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\":\n\n```\nHEAD /library\nHEAD /library/book\nHEAD /_all/dvd\n\nGET /library/_settings\nGET /library/_mappings\nGET /library/_warmers\nGET /library/_aliases\n\nGET /_settings\nGET /_aliases\nGET /_mappings\nGET /_cat/indices\n```\n\nI agree to @imotov's comment (see https://github.com/elasticsearch/elasticsearch/pull/5876#issuecomment-42273841) and I think we should split the METADATA blocks in METADATA_READ & METADATA_WRITE blocks. This way the 'read_only' setting will block all write operations and metadata updates but the metadata will still be available.\n\nThis is a first step before updating all other actions then removing METADATA block completely.\n\nCloses #3703\nCloses #10521\nCloses #10522\nRelated to #8102\n", "number": 9203, "review_comments": [ { "body": "can you remind me what this assertion does? do we really need it here?\n", "created_at": "2015-03-25T10:42:22Z" }, { "body": "Question: do we keep the old `ClusterBlockLevel.METADATA` as constructor argument only for bw comp right?\n", "created_at": "2015-03-25T10:46:04Z" }, { "body": "maybe verify that exists can be called rather than deleting the whole block?\n", "created_at": "2015-03-25T10:47:15Z" }, { "body": "why delete this test?\n", "created_at": "2015-03-25T10:47:45Z" }, { "body": "nevermind, I see you moved it to a more specific test class, that's fine.\n", "created_at": "2015-03-25T10:48:57Z" }, { "body": "I think we can omit catching this and failing when caught, that's default behaviour, what matters if the `finally` I guess\n", "created_at": "2015-03-25T10:50:15Z" }, { "body": "Also, I'm confused here: `INDEX_METADATA_BLOCK` should block metadata read and metadata writes? Should we pass in `METADATA_READ` as well?\n", "created_at": "2015-03-25T10:54:11Z" }, { "body": "It's needed to check the result of the previous 'indices.exists' request.\n\nFrom [here](https://github.com/elastic/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/rest/client/RestResponse.java#L90):\n\n```\n is_true: '' means the response had no body but the client returned true (caused by 200)\n```\n", "created_at": "2015-03-25T13:06:10Z" }, { "body": "Right, there's something confusing here. 
METADATA must be removed and METADATA_READ must be added.\n", "created_at": "2015-03-25T13:15:53Z" }, { "body": "alright thanks :)\n", "created_at": "2015-03-25T13:31:56Z" }, { "body": "sounds good\n", "created_at": "2015-03-25T13:32:03Z" }, { "body": "I would be more precise on the version, cause it's not clear if it is from 1.5.1 or 1.6.0.\n", "created_at": "2015-03-26T08:06:27Z" }, { "body": "this `id` gets sent over the wire in `ClusterBlock#readFrom` and `ClusterBlock#writeTo`. your change makes it backwards compatible only for reads, cause a 1.6 node that gets 1.2 detects and converts it. But what happens if a newer node sends 3 or 4 to an older node? We need to add some logic based on version of nodes.\n\nAlso, I'd try and make this method package private, not sure why it's public it shouldn't IMO.\n", "created_at": "2015-03-26T08:16:16Z" }, { "body": "You're right. I added some logic in the ClusterBlock.writeTo() method, converting 3/4 values into 2 if the target node has a version <= 1.5.0.\n", "created_at": "2015-03-26T13:21:25Z" }, { "body": "aren't delete mapping & delete by query gone though?\n\nAlso, how did you find this issue?\n", "created_at": "2015-04-02T09:06:29Z" }, { "body": "yeah... I need to rebase this branch, but not until your review is terminated :)\n\nYou can skip DeleteMappingAction and associated test.\n", "created_at": "2015-04-02T09:09:11Z" }, { "body": "interesting, so we were previously checking for READ (only data?), which seems like a bug, and now we only look for METADATA_READ blocks?\n", "created_at": "2015-04-02T09:11:00Z" }, { "body": "not related to this change, but I get confused by these calls with empty index. Will they ever return a block? Seems like we should dig here on another PR.\n", "created_at": "2015-04-02T09:12:54Z" }, { "body": "the version here might need to be adjusted depending on the version we get this PR in e.g. if it ends up being 1.6 should be `before(Version.V_1_6_0)`\n", "created_at": "2015-04-02T09:16:49Z" }, { "body": "maybe I would split this in two ifs, the outer one about version, then the inner one around levels.\n", "created_at": "2015-04-02T09:17:40Z" }, { "body": "Yes, I think that's the right thing to do here.\n", "created_at": "2015-04-02T09:17:49Z" }, { "body": "As far as I know they return only global blocks. It looks like this is the way how blocks are checked when working with data that are persisted in cluster state (index templates, snapshot repos)\n", "created_at": "2015-04-02T09:22:19Z" }, { "body": "just a reminder: version might need to be updated here too depending on which branch we backport the PR to. You might have thought about it already, but these are the things that I usually forget about when I push :)\n", "created_at": "2015-04-02T09:26:21Z" }, { "body": "also, s/splitted/split\n", "created_at": "2015-04-02T09:27:08Z" }, { "body": "version reminder here too, and s/splitted/split\n", "created_at": "2015-04-02T09:27:44Z" }, { "body": "Right now we have bw comp logic in two different places (ClusterBlock for writes & ClusterBlockLevel for reads). Does it make sense to isolate this logic in a single place?\n", "created_at": "2015-04-02T09:28:35Z" }, { "body": "I see that some tests use `enableIndexBlock` and some other `setIndexBlocks`. What is the difference? 
Can we unify them and maybe add them as common utility methods?\n", "created_at": "2015-04-02T09:34:01Z" }, { "body": "same as above, this seems the same method as before\n", "created_at": "2015-04-02T09:34:29Z" }, { "body": "Right, will do it.\n", "created_at": "2015-04-02T09:45:22Z" }, { "body": "I also thought of it... Maybe add a ClusterBlockLevel.toId() method?\n", "created_at": "2015-04-02T09:47:11Z" } ], "title": "Add METADATA_READ and METADATA_WRITE blocks" }
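To make the split concrete, here is a hand-rolled illustration of the semantics described in the PR body; the enum and helper below are hypothetical, not the real ClusterBlockLevel or ClusterBlocks API. The composition of the read_only block (writes and metadata writes, but not metadata reads) follows the description above.

```java
// Hand-rolled illustration, not the org.elasticsearch.cluster.block classes.
// Per the PR description, after the split a read_only block still covers
// WRITE and METADATA_WRITE, while read-oriented metadata operations
// (get mappings/settings, exists, ...) check METADATA_READ and pass.
import java.util.EnumSet;

public class BlockLevelSplitSketch {
    enum Level { READ, WRITE, METADATA_READ, METADATA_WRITE }

    static boolean blocked(EnumSet<Level> activeBlockLevels, Level requiredLevel) {
        return activeBlockLevels.contains(requiredLevel);
    }

    public static void main(String[] args) {
        // levels an index.blocks.read_only block is described as covering
        EnumSet<Level> readOnly = EnumSet.of(Level.WRITE, Level.METADATA_WRITE);

        System.out.println("index a document : blocked=" + blocked(readOnly, Level.WRITE));          // true
        System.out.println("put mapping      : blocked=" + blocked(readOnly, Level.METADATA_WRITE)); // true
        System.out.println("get mapping      : blocked=" + blocked(readOnly, Level.METADATA_READ));  // false
        System.out.println("indices exists   : blocked=" + blocked(readOnly, Level.METADATA_READ));  // false
    }
}
```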
{ "commits": [ { "message": "Internal: Add METADATA_READ and METADATA_WRITE blocks\n\nThis commit splits the current ClusterBlockLevel.METADATA into two disctins ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows to make a distinction between\nan operation that modifies the index or cluster metadata and an operation that does not change any metadata.\n\nBefore this commit, many operations where blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when\nthe cluster or the index is read-only.\n\nRelated to #8102, #2833\n\nCloses #3703\nCloses #5855\nCloses #10521\nCloses #10522" } ], "files": [ { "diff": "@@ -0,0 +1,30 @@\n+---\n+\"Test indices.exists on a read only index\":\n+\n+ - do:\n+ indices.create:\n+ index: test_index_ro\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: true\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: false\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''", "filename": "rest-api-spec/test/indices.exists/20_read_only_index.yaml", "status": "added" }, { "diff": "@@ -75,7 +75,8 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(NodesShutdownRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ // Stopping a node impacts the cluster state, so we check for the METADATA_WRITE block here\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/node/shutdown/TransportNodesShutdownAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected DeleteRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected GetRepositoriesResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetRepositoriesRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected PutRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java", "status": "modified" }, { "diff": "@@ -69,7 +69,7 @@ protected VerifyRepositoryResponse 
newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(VerifyRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterRerouteRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request,\n request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && request.transientSettings().get(MetaData.SETTING_READ_ONLY) != null) {\n return null;\n }\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n ", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterSearchShardsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected CreateSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ protected DeleteSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected GetSnapshotsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, 
\"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,13 @@ protected RestoreSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(RestoreSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ // Restoring a snapshot might change the global state and create/change an index,\n+ // so we need to check for METADATA_WRITE and WRITE blocks\n+ ClusterBlockException blockException = state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n+ if (blockException != null) {\n+ return blockException;\n+ }\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java", "status": "modified" }, { "diff": "@@ -70,7 +70,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(SnapshotsStatusRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java", "status": "modified" }, { "diff": "@@ -53,7 +53,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(PendingClusterTasksRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java", "status": "modified" }, { "diff": "@@ -78,7 +78,7 @@ protected ClusterBlockException checkBlock(IndicesAliasesRequest request, Cluste\n indices.add(index);\n }\n }\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, indices.toArray(new String[indices.size()]));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indices.toArray(new String[indices.size()]));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/exists/TransportAliasesExistAction.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, 
state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -175,12 +175,12 @@ protected GroupShardsIterator shards(ClusterState clusterState, ClearIndicesCach\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, ClearIndicesCacheRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, ClearIndicesCacheRequest request, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n \n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/cache/clear/TransportClearIndicesCacheAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(CloseIndexRequest request, ActionListener<CloseIndexRes\n \n @Override\n protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected CreateIndexResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateIndexRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, request.index());\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.index());\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(DeleteIndexRequest request, ActionListener<DeleteIndexR\n \n @Override\n protected ClusterBlockException checkBlock(DeleteIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected IndicesExistsResponse newResponse() {\n protected ClusterBlockException checkBlock(IndicesExistsRequest request, ClusterState state) {\n //make sure through indices options that the concrete indices call never throws IndexMissingException\n IndicesOptions indicesOptions = IndicesOptions.fromOptions(true, 
true, request.indicesOptions().expandWildcardsOpen(), request.indicesOptions().expandWildcardsClosed());\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java", "status": "modified" }, { "diff": "@@ -62,7 +62,7 @@ protected TypesExistsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(TypesExistsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java", "status": "modified" }, { "diff": "@@ -120,11 +120,11 @@ protected GroupShardsIterator shards(ClusterState clusterState, FlushRequest req\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, FlushRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, FlushRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java", "status": "modified" }, { "diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.routing.ShardsIterator;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.inject.Inject;\n@@ -38,10 +40,10 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.IndexService;\n import 
org.elasticsearch.index.mapper.DocumentFieldMappers;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.TypeMissingException;\n@@ -134,6 +136,11 @@ protected GetFieldMappingsResponse newResponse() {\n return new GetFieldMappingsResponse();\n }\n \n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, request.concreteIndex());\n+ }\n+\n private static final ToXContent.Params includeDefaultsParams = new ToXContent.Params() {\n \n final static String INCLUDE_DEFAULTS = \"include_defaults\";", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetMappingsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java", "status": "modified" }, { "diff": "@@ -66,7 +66,7 @@ protected PutMappingResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/put/TransportPutMappingAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(OpenIndexRequest request, ActionListener<OpenIndexRespo\n \n @Override\n protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java", "status": "modified" } ] }
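The hunks above all make the same move: admin actions that only read index metadata now check `ClusterBlockLevel.METADATA_READ`, while actions that mutate it check `METADATA_WRITE`. As a mental model — a minimal, self-contained sketch in plain Java rather than the actual Elasticsearch classes, with the level set for the read-only block inferred from this PR's description — a block carries a set of levels and a request is rejected only when the level it declares is covered by an active block:

```java
// Plain-Java sketch, no Elasticsearch dependency.
import java.util.EnumSet;
import java.util.Set;

public class BlockLevelSketch {

    enum BlockLevel { READ, WRITE, METADATA_READ, METADATA_WRITE }

    static final class Block {
        final String description;
        final Set<BlockLevel> levels;

        Block(String description, Set<BlockLevel> levels) {
            this.description = description;
            this.levels = levels;
        }

        boolean blocks(BlockLevel level) {
            return levels.contains(level);
        }
    }

    // "read_only" as described in this PR: data writes and metadata writes are forbidden,
    // metadata reads are not, so GET _mappings / GET _settings style requests keep working.
    static final Block READ_ONLY = new Block("index read-only (api)",
            EnumSet.of(BlockLevel.WRITE, BlockLevel.METADATA_WRITE));

    public static void main(String[] args) {
        System.out.println(READ_ONLY.blocks(BlockLevel.METADATA_READ));  // false -> request allowed
        System.out.println(READ_ONLY.blocks(BlockLevel.METADATA_WRITE)); // true  -> 403 FORBIDDEN
    }
}
```

Under the old single `METADATA` level, a read-only block necessarily covered metadata reads as well, which is exactly the behaviour the linked issues complain about.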
{ "body": "I was trying to make a full, consistent backup before an upgrade. Snapshots are at a moment of time, which doesn't work if clients are still updating your indexes.\n\nI tried putting the cluster into read_only mode by setting cluster.blocks.read_only: true, but running a snapshot returned this error:\n\n```\n{\"error\":\"ClusterBlockException[blocked by: [FORBIDDEN/6/cluster read-only (api)];]\",\"status\":403}\n```\n\nPlease consider allowing snapshots to provide a consistent backup by running when in read-only mode.\n", "comments": [ { "body": "@webmstr Snapshots are still moment in time while updates are happening. You don't need to lock anything. A snapshot will only backup the state of the index at the point that the backup starts, it won't take any later changes into account.\n", "created_at": "2014-10-16T18:19:02Z" }, { "body": "As I mentioned, snapshots - as currently implemented - are an unreasonable method of performing a consistent backup prior to an upgrade. This enhancement would have allowed that option.\n\nWithout the enhancement, snapshots should not be used before an upgrade, because the indexes may have been changed while the snapshot was running. As such, the upgrade documentation should be changed to not propose the use of snapshots as backups, and a \"full\" backup procedure should be documented in its place.\n", "created_at": "2014-10-16T20:09:17Z" }, { "body": "Out of interest, why don't you just stop writing to your cluster? Reopening for discussion.\n", "created_at": "2014-10-17T05:04:47Z" }, { "body": "@imotov what are your thoughts?\n", "created_at": "2014-10-17T05:05:03Z" }, { "body": "I could turn off logstash, but that's just one potential client. Someone could be curl'ing, or using an ES plugin (like head), etc. If you need a consistent backup, you have to disconnect and lock out the clients from the server side.\n", "created_at": "2014-10-17T06:18:42Z" }, { "body": "@clintongormley see https://github.com/elasticsearch/elasticsearch/pull/5876 I think this one is similar. \n", "created_at": "2014-10-17T13:33:58Z" }, { "body": "@imotov thanks, so setting `index.blocks.write` to `true` on all indices would be a reasonable workaround, at least until #5855 is resolved.\n", "created_at": "2014-10-17T13:39:15Z" }, { "body": "@clintongormley Actually, I discovered that the `index.blocks.write` attribute only prevents writes to **existing** indices. If a client tries to create a new index, that request succeeds, which brings us back to the same problem. My workaround was to shutdown the proxy node though which our clients access our ES cluster.\nI am running into the same issue as @webmstr , but for different reason: I cannot create a consistent backup for a restore to a secondary datacenter because each snapshot takes ~1 hour to complete and we cannot afford to block writes from our clients for such a long period of time. \nI am still trying to root cause why snapshots are taking so long; the time required for snapshot completion increases with each snapshot. However, when i restore the same data to a new cluster, snapshotting that data to a new S3 bucket takes less than a minute. \n\nEDIT: I may have a theory on why the snapshots were taking so long... i was taking a snapshot every two hours, and the s3 bucket has a LOT of snapshots now (49). I'm thinking that the calls the ES aws plugin makes to the S3 endpoint slow down over time as the number of snapshots increase. \n\nOr may be it's just the number of snapshots that's causing the slowness...i.e. 
regardless of whether the backend repository is S3 or fs? I guess I should have an additional cron job that deletes older snaphots. Is there a good rule of thumb on the number of snapshots to retain?\n", "created_at": "2014-11-12T23:26:35Z" }, { "body": "@imotov we discussed this issue but were unclear on what the differences are between the index.blocks.\\* options are and why the snapshot fails with read_only set to false?\n", "created_at": "2015-02-20T10:37:12Z" }, { "body": "@colings86 there is an ongoing effort to resolve this issue in #9203\n", "created_at": "2015-02-20T15:54:26Z" }, { "body": "After discussing this with @tlrx it looks like the best way to address this issue is by moving snapshot and restore cluster state elements from cluster metadata to a custom cluster element where it seems to belong (since information about currently running snapshot and restore hardly qualifies as metadata).\n", "created_at": "2015-05-19T16:35:32Z" } ], "number": 8102, "title": "snapshot should work when cluster is in read_only mode." }
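The workaround that emerges from this thread — block writes per index instead of flipping `cluster.blocks.read_only` — can be scripted. Below is a rough sketch against the 1.x-era Java API; the repository name `my_backup`, snapshot name `pre_upgrade`, cluster name and node address are placeholders, and the calls are written from memory as an illustration, not lifted from this PR:

```java
// Sketch only: 1.x-era Java API from memory; the snapshot repository "my_backup" must already exist.
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class SnapshotWithWriteBlock {
    public static void main(String[] args) {
        Client client = new TransportClient(ImmutableSettings.settingsBuilder()
                .put("cluster.name", "elasticsearch").build())
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        try {
            // 1. Block writes on all existing indices; searches and metadata reads keep working.
            client.admin().indices().prepareUpdateSettings("_all")
                    .setSettings(ImmutableSettings.settingsBuilder()
                            .put("index.blocks.write", true).build())
                    .get();

            // 2. Take the snapshot; with writes blocked it reflects a stable point in time.
            client.admin().cluster().prepareCreateSnapshot("my_backup", "pre_upgrade")
                    .setWaitForCompletion(true)
                    .get();
        } finally {
            // 3. Re-enable writes, whatever happened above.
            client.admin().indices().prepareUpdateSettings("_all")
                    .setSettings(ImmutableSettings.settingsBuilder()
                            .put("index.blocks.write", false).build())
                    .get();
            client.close();
        }
    }
}
```

As noted in the comments, `index.blocks.write` only protects indices that already exist, so a client can still create a brand-new index while the snapshot is running.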
{ "body": "While looking at #3703, it looks like the usage of 'read_only' is a bit confusing.\n\nThe documentation stipulates:\n\n> index.blocks.read_only\n> Set to true to have the index read only, false to allow writes and metadata changes.\n\nActually, it blocks all write operations (creating, indexing or deleting a document) but also all read/write operations on index metadata. This can lead to confusion because subsequent 'indices exists' requests will return an undocumented 403 Forbidden response code where one could expect a 200 OK (as reported in #3703). Same for 'type exists' and 'alias exists' requests.\n\nSimilar issues have been reported:\n- _status #2833 (deprecated in 1.2.0)\n- _stats #5876\n- /_cat/indices #5855\n\nBut several API calls returns a 403 when at least one index of the cluster has 'index.blocks.read_only' set to true:\n\n```\nDELETE /_all\n\nPOST /library/book/1\n{\n \"title\": \"Elasticsearch: the definitive guide\"\n}\n\nPOST /movies/dvd/1\n{\n \"title\": \"Elasticsearch: the movie\"\n}\n\nPUT /library/_settings\n{\n \"index\": {\n \"blocks.read_only\": true\n }\n}\n```\n\nThe following calls will return 403 / \"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\":\n\n```\nHEAD /library\nHEAD /library/book\nHEAD /_all/dvd\n\nGET /library/_settings\nGET /library/_mappings\nGET /library/_warmers\nGET /library/_aliases\n\nGET /_settings\nGET /_aliases\nGET /_mappings\nGET /_cat/indices\n```\n\nI agree to @imotov's comment (see https://github.com/elasticsearch/elasticsearch/pull/5876#issuecomment-42273841) and I think we should split the METADATA blocks in METADATA_READ & METADATA_WRITE blocks. This way the 'read_only' setting will block all write operations and metadata updates but the metadata will still be available.\n\nThis is a first step before updating all other actions then removing METADATA block completely.\n\nCloses #3703\nCloses #10521\nCloses #10522\nRelated to #8102\n", "number": 9203, "review_comments": [ { "body": "can you remind me what this assertion does? do we really need it here?\n", "created_at": "2015-03-25T10:42:22Z" }, { "body": "Question: do we keep the old `ClusterBlockLevel.METADATA` as constructor argument only for bw comp right?\n", "created_at": "2015-03-25T10:46:04Z" }, { "body": "maybe verify that exists can be called rather than deleting the whole block?\n", "created_at": "2015-03-25T10:47:15Z" }, { "body": "why delete this test?\n", "created_at": "2015-03-25T10:47:45Z" }, { "body": "nevermind, I see you moved it to a more specific test class, that's fine.\n", "created_at": "2015-03-25T10:48:57Z" }, { "body": "I think we can omit catching this and failing when caught, that's default behaviour, what matters if the `finally` I guess\n", "created_at": "2015-03-25T10:50:15Z" }, { "body": "Also, I'm confused here: `INDEX_METADATA_BLOCK` should block metadata read and metadata writes? Should we pass in `METADATA_READ` as well?\n", "created_at": "2015-03-25T10:54:11Z" }, { "body": "It's needed to check the result of the previous 'indices.exists' request.\n\nFrom [here](https://github.com/elastic/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/rest/client/RestResponse.java#L90):\n\n```\n is_true: '' means the response had no body but the client returned true (caused by 200)\n```\n", "created_at": "2015-03-25T13:06:10Z" }, { "body": "Right, there's something confusing here. 
METADATA must be removed and METADATA_READ must be added.\n", "created_at": "2015-03-25T13:15:53Z" }, { "body": "alright thanks :)\n", "created_at": "2015-03-25T13:31:56Z" }, { "body": "sounds good\n", "created_at": "2015-03-25T13:32:03Z" }, { "body": "I would be more precise on the version, cause it's not clear if it is from 1.5.1 or 1.6.0.\n", "created_at": "2015-03-26T08:06:27Z" }, { "body": "this `id` gets sent over the wire in `ClusterBlock#readFrom` and `ClusterBlock#writeTo`. your change makes it backwards compatible only for reads, cause a 1.6 node that gets 1.2 detects and converts it. But what happens if a newer node sends 3 or 4 to an older node? We need to add some logic based on version of nodes.\n\nAlso, I'd try and make this method package private, not sure why it's public it shouldn't IMO.\n", "created_at": "2015-03-26T08:16:16Z" }, { "body": "You're right. I added some logic in the ClusterBlock.writeTo() method, converting 3/4 values into 2 if the target node has a version <= 1.5.0.\n", "created_at": "2015-03-26T13:21:25Z" }, { "body": "aren't delete mapping & delete by query gone though?\n\nAlso, how did you find this issue?\n", "created_at": "2015-04-02T09:06:29Z" }, { "body": "yeah... I need to rebase this branch, but not until your review is terminated :)\n\nYou can skip DeleteMappingAction and associated test.\n", "created_at": "2015-04-02T09:09:11Z" }, { "body": "interesting, so we were previously checking for READ (only data?), which seems like a bug, and now we only look for METADATA_READ blocks?\n", "created_at": "2015-04-02T09:11:00Z" }, { "body": "not related to this change, but I get confused by these calls with empty index. Will they ever return a block? Seems like we should dig here on another PR.\n", "created_at": "2015-04-02T09:12:54Z" }, { "body": "the version here might need to be adjusted depending on the version we get this PR in e.g. if it ends up being 1.6 should be `before(Version.V_1_6_0)`\n", "created_at": "2015-04-02T09:16:49Z" }, { "body": "maybe I would split this in two ifs, the outer one about version, then the inner one around levels.\n", "created_at": "2015-04-02T09:17:40Z" }, { "body": "Yes, I think that's the right thing to do here.\n", "created_at": "2015-04-02T09:17:49Z" }, { "body": "As far as I know they return only global blocks. It looks like this is the way how blocks are checked when working with data that are persisted in cluster state (index templates, snapshot repos)\n", "created_at": "2015-04-02T09:22:19Z" }, { "body": "just a reminder: version might need to be updated here too depending on which branch we backport the PR to. You might have thought about it already, but these are the things that I usually forget about when I push :)\n", "created_at": "2015-04-02T09:26:21Z" }, { "body": "also, s/splitted/split\n", "created_at": "2015-04-02T09:27:08Z" }, { "body": "version reminder here too, and s/splitted/split\n", "created_at": "2015-04-02T09:27:44Z" }, { "body": "Right now we have bw comp logic in two different places (ClusterBlock for writes & ClusterBlockLevel for reads). Does it make sense to isolate this logic in a single place?\n", "created_at": "2015-04-02T09:28:35Z" }, { "body": "I see that some tests use `enableIndexBlock` and some other `setIndexBlocks`. What is the difference? 
Can we unify them and maybe add them as common utility methods?\n", "created_at": "2015-04-02T09:34:01Z" }, { "body": "same as above, this seems the same method as before\n", "created_at": "2015-04-02T09:34:29Z" }, { "body": "Right, will do it.\n", "created_at": "2015-04-02T09:45:22Z" }, { "body": "I also thought of it... Maybe add a ClusterBlockLevel.toId() method?\n", "created_at": "2015-04-02T09:47:11Z" } ], "title": "Add METADATA_READ and METADATA_WRITE blocks" }
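The review back-and-forth about block-level ids and older nodes boils down to a small mapping problem: write the two new levels as the old id when talking to a pre-split node, and expand the old id into both new levels when reading from one. A self-contained sketch of that scheme; the ids 2/3/4 and the version cutoff come from the comments above and should be treated as illustrative rather than the exact constants that were finally committed:

```java
// Self-contained sketch of the wire-compatibility idea from the review comments above.
import java.util.EnumSet;

public class BlockLevelWireCompatSketch {

    enum Level {
        READ(0), WRITE(1), METADATA_READ(3), METADATA_WRITE(4);

        static final int LEGACY_METADATA_ID = 2; // the old, unsplit METADATA level

        private final int id;
        Level(int id) { this.id = id; }

        // What goes on the wire: pre-split nodes only understand the legacy id.
        int toId(boolean targetPredatesSplit) {
            if (targetPredatesSplit && (this == METADATA_READ || this == METADATA_WRITE)) {
                return LEGACY_METADATA_ID;
            }
            return id;
        }

        // What we reconstruct from an id received over the wire.
        static EnumSet<Level> fromId(int id) {
            if (id == LEGACY_METADATA_ID) {
                // An old node's METADATA block covers both reads and writes of metadata.
                return EnumSet.of(METADATA_READ, METADATA_WRITE);
            }
            for (Level level : values()) {
                if (level.id == id) {
                    return EnumSet.of(level);
                }
            }
            throw new IllegalArgumentException("unknown block level id [" + id + "]");
        }
    }

    public static void main(String[] args) {
        System.out.println(Level.METADATA_READ.toId(true));   // 2 -> collapsed to the legacy id
        System.out.println(Level.METADATA_READ.toId(false));  // 3
        System.out.println(Level.fromId(2));                  // [METADATA_READ, METADATA_WRITE]
    }
}
```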
{ "commits": [ { "message": "Internal: Add METADATA_READ and METADATA_WRITE blocks\n\nThis commit splits the current ClusterBlockLevel.METADATA into two disctins ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows to make a distinction between\nan operation that modifies the index or cluster metadata and an operation that does not change any metadata.\n\nBefore this commit, many operations where blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when\nthe cluster or the index is read-only.\n\nRelated to #8102, #2833\n\nCloses #3703\nCloses #5855\nCloses #10521\nCloses #10522" } ], "files": [ { "diff": "@@ -0,0 +1,30 @@\n+---\n+\"Test indices.exists on a read only index\":\n+\n+ - do:\n+ indices.create:\n+ index: test_index_ro\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: true\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: false\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''", "filename": "rest-api-spec/test/indices.exists/20_read_only_index.yaml", "status": "added" }, { "diff": "@@ -75,7 +75,8 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(NodesShutdownRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ // Stopping a node impacts the cluster state, so we check for the METADATA_WRITE block here\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/node/shutdown/TransportNodesShutdownAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected DeleteRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected GetRepositoriesResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetRepositoriesRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected PutRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java", "status": "modified" }, { "diff": "@@ -69,7 +69,7 @@ protected VerifyRepositoryResponse 
newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(VerifyRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterRerouteRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request,\n request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && request.transientSettings().get(MetaData.SETTING_READ_ONLY) != null) {\n return null;\n }\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n ", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterSearchShardsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected CreateSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ protected DeleteSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected GetSnapshotsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, 
\"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,13 @@ protected RestoreSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(RestoreSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ // Restoring a snapshot might change the global state and create/change an index,\n+ // so we need to check for METADATA_WRITE and WRITE blocks\n+ ClusterBlockException blockException = state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n+ if (blockException != null) {\n+ return blockException;\n+ }\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java", "status": "modified" }, { "diff": "@@ -70,7 +70,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(SnapshotsStatusRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java", "status": "modified" }, { "diff": "@@ -53,7 +53,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(PendingClusterTasksRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java", "status": "modified" }, { "diff": "@@ -78,7 +78,7 @@ protected ClusterBlockException checkBlock(IndicesAliasesRequest request, Cluste\n indices.add(index);\n }\n }\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, indices.toArray(new String[indices.size()]));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indices.toArray(new String[indices.size()]));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/exists/TransportAliasesExistAction.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, 
state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -175,12 +175,12 @@ protected GroupShardsIterator shards(ClusterState clusterState, ClearIndicesCach\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, ClearIndicesCacheRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, ClearIndicesCacheRequest request, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n \n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/cache/clear/TransportClearIndicesCacheAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(CloseIndexRequest request, ActionListener<CloseIndexRes\n \n @Override\n protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected CreateIndexResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateIndexRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, request.index());\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.index());\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(DeleteIndexRequest request, ActionListener<DeleteIndexR\n \n @Override\n protected ClusterBlockException checkBlock(DeleteIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected IndicesExistsResponse newResponse() {\n protected ClusterBlockException checkBlock(IndicesExistsRequest request, ClusterState state) {\n //make sure through indices options that the concrete indices call never throws IndexMissingException\n IndicesOptions indicesOptions = IndicesOptions.fromOptions(true, 
true, request.indicesOptions().expandWildcardsOpen(), request.indicesOptions().expandWildcardsClosed());\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java", "status": "modified" }, { "diff": "@@ -62,7 +62,7 @@ protected TypesExistsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(TypesExistsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java", "status": "modified" }, { "diff": "@@ -120,11 +120,11 @@ protected GroupShardsIterator shards(ClusterState clusterState, FlushRequest req\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, FlushRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, FlushRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java", "status": "modified" }, { "diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.routing.ShardsIterator;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.inject.Inject;\n@@ -38,10 +40,10 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.IndexService;\n import 
org.elasticsearch.index.mapper.DocumentFieldMappers;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.TypeMissingException;\n@@ -134,6 +136,11 @@ protected GetFieldMappingsResponse newResponse() {\n return new GetFieldMappingsResponse();\n }\n \n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, request.concreteIndex());\n+ }\n+\n private static final ToXContent.Params includeDefaultsParams = new ToXContent.Params() {\n \n final static String INCLUDE_DEFAULTS = \"include_defaults\";", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetMappingsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java", "status": "modified" }, { "diff": "@@ -66,7 +66,7 @@ protected PutMappingResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/put/TransportPutMappingAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(OpenIndexRequest request, ActionListener<OpenIndexRespo\n \n @Override\n protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java", "status": "modified" } ] }
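One detail worth pulling out of the diff above: the restore-snapshot action now consults two levels (`METADATA_WRITE`, then `WRITE`) and returns the first block it finds. A hypothetical helper capturing that pattern — `BlockChecks` and `firstBlocked` are made-up names, not part of the PR; the sketch only reuses the `state.blocks().indexBlockedException(...)` call that the diffs themselves rely on:

```java
// Hypothetical helper, not part of the PR: generalizes the "check several levels,
// first block wins" pattern used by the restore-snapshot action above.
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;

final class BlockChecks {
    private BlockChecks() {}

    /** Returns the first block raised by any of the given levels, or null if none applies. */
    static ClusterBlockException firstBlocked(ClusterState state, String index, ClusterBlockLevel... levels) {
        for (ClusterBlockLevel level : levels) {
            ClusterBlockException blocked = state.blocks().indexBlockedException(level, index);
            if (blocked != null) {
                return blocked;
            }
        }
        return null;
    }
}

// Equivalent to the restore-snapshot hunk above, inside a checkBlock(...) override:
//   return BlockChecks.firstBlocked(state, "", ClusterBlockLevel.METADATA_WRITE, ClusterBlockLevel.WRITE);
```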
{ "body": "Closes #5855\n", "comments": [ { "body": "I am not sure if this is a bug, since it blocks on METADATA operation, which all stats/status/... check on, I don't think we need to change it.\n\nLooked at the issue, they should set `index.blocks.read` to `true`, and then it would allow for METADATA operations as well.\n", "created_at": "2014-05-06T07:22:28Z" }, { "body": "A possible workaround for this issue is to use `\"index.blocks.write\": true` instead of `read_only`. We will need to revisit this by adding more granular `METADATA_READ` and `METADATA_WRITE` blocks and implement this change across other methods such as segments, status, recovery...\n", "created_at": "2014-05-06T17:25:47Z" } ], "number": 5876, "title": "Don't block stats for read-only indices" }
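Taken together with the PR above, the practical effect for an index with `index.blocks.read_only: true` is that metadata reads go through while metadata writes (and document writes) are still rejected. A hedged sketch of that check against the 1.x Java API — the index name `library` and the mapping fields are placeholders, and because a transport client may deliver the rejection wrapped, the catch is kept broad:

```java
// Sketch only (1.x Java API): verifies the split from the client side on a read-only index.
import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

public class ReadOnlyIndexChecks {
    public static void run(Client client) {
        client.admin().indices().prepareUpdateSettings("library")
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("index.blocks.read_only", true).build())
                .get();
        try {
            // Allowed after this change: reading metadata of a read-only index.
            client.admin().indices().prepareGetMappings("library").get();

            // Still rejected: changing metadata of a read-only index.
            try {
                client.admin().indices().preparePutMapping("library")
                        .setType("book")
                        .setSource("title", "type=string")
                        .get();
            } catch (ElasticsearchException expected) {
                // Typically a ClusterBlockException: FORBIDDEN/5/index read-only (api)
            }
        } finally {
            // Clearing the block is itself still permitted (see the REST test added by this PR).
            client.admin().indices().prepareUpdateSettings("library")
                    .setSettings(ImmutableSettings.settingsBuilder()
                            .put("index.blocks.read_only", false).build())
                    .get();
        }
    }
}
```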
{ "body": "While looking at #3703, it looks like the usage of 'read_only' is a bit confusing.\n\nThe documentation stipulates:\n\n> index.blocks.read_only\n> Set to true to have the index read only, false to allow writes and metadata changes.\n\nActually, it blocks all write operations (creating, indexing or deleting a document) but also all read/write operations on index metadata. This can lead to confusion because subsequent 'indices exists' requests will return an undocumented 403 Forbidden response code where one could expect a 200 OK (as reported in #3703). Same for 'type exists' and 'alias exists' requests.\n\nSimilar issues have been reported:\n- _status #2833 (deprecated in 1.2.0)\n- _stats #5876\n- /_cat/indices #5855\n\nBut several API calls returns a 403 when at least one index of the cluster has 'index.blocks.read_only' set to true:\n\n```\nDELETE /_all\n\nPOST /library/book/1\n{\n \"title\": \"Elasticsearch: the definitive guide\"\n}\n\nPOST /movies/dvd/1\n{\n \"title\": \"Elasticsearch: the movie\"\n}\n\nPUT /library/_settings\n{\n \"index\": {\n \"blocks.read_only\": true\n }\n}\n```\n\nThe following calls will return 403 / \"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\":\n\n```\nHEAD /library\nHEAD /library/book\nHEAD /_all/dvd\n\nGET /library/_settings\nGET /library/_mappings\nGET /library/_warmers\nGET /library/_aliases\n\nGET /_settings\nGET /_aliases\nGET /_mappings\nGET /_cat/indices\n```\n\nI agree to @imotov's comment (see https://github.com/elasticsearch/elasticsearch/pull/5876#issuecomment-42273841) and I think we should split the METADATA blocks in METADATA_READ & METADATA_WRITE blocks. This way the 'read_only' setting will block all write operations and metadata updates but the metadata will still be available.\n\nThis is a first step before updating all other actions then removing METADATA block completely.\n\nCloses #3703\nCloses #10521\nCloses #10522\nRelated to #8102\n", "number": 9203, "review_comments": [ { "body": "can you remind me what this assertion does? do we really need it here?\n", "created_at": "2015-03-25T10:42:22Z" }, { "body": "Question: do we keep the old `ClusterBlockLevel.METADATA` as constructor argument only for bw comp right?\n", "created_at": "2015-03-25T10:46:04Z" }, { "body": "maybe verify that exists can be called rather than deleting the whole block?\n", "created_at": "2015-03-25T10:47:15Z" }, { "body": "why delete this test?\n", "created_at": "2015-03-25T10:47:45Z" }, { "body": "nevermind, I see you moved it to a more specific test class, that's fine.\n", "created_at": "2015-03-25T10:48:57Z" }, { "body": "I think we can omit catching this and failing when caught, that's default behaviour, what matters if the `finally` I guess\n", "created_at": "2015-03-25T10:50:15Z" }, { "body": "Also, I'm confused here: `INDEX_METADATA_BLOCK` should block metadata read and metadata writes? Should we pass in `METADATA_READ` as well?\n", "created_at": "2015-03-25T10:54:11Z" }, { "body": "It's needed to check the result of the previous 'indices.exists' request.\n\nFrom [here](https://github.com/elastic/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/rest/client/RestResponse.java#L90):\n\n```\n is_true: '' means the response had no body but the client returned true (caused by 200)\n```\n", "created_at": "2015-03-25T13:06:10Z" }, { "body": "Right, there's something confusing here. 
METADATA must be removed and METADATA_READ must be added.\n", "created_at": "2015-03-25T13:15:53Z" }, { "body": "alright thanks :)\n", "created_at": "2015-03-25T13:31:56Z" }, { "body": "sounds good\n", "created_at": "2015-03-25T13:32:03Z" }, { "body": "I would be more precise on the version, cause it's not clear if it is from 1.5.1 or 1.6.0.\n", "created_at": "2015-03-26T08:06:27Z" }, { "body": "this `id` gets sent over the wire in `ClusterBlock#readFrom` and `ClusterBlock#writeTo`. your change makes it backwards compatible only for reads, cause a 1.6 node that gets 1.2 detects and converts it. But what happens if a newer node sends 3 or 4 to an older node? We need to add some logic based on version of nodes.\n\nAlso, I'd try and make this method package private, not sure why it's public it shouldn't IMO.\n", "created_at": "2015-03-26T08:16:16Z" }, { "body": "You're right. I added some logic in the ClusterBlock.writeTo() method, converting 3/4 values into 2 if the target node has a version <= 1.5.0.\n", "created_at": "2015-03-26T13:21:25Z" }, { "body": "aren't delete mapping & delete by query gone though?\n\nAlso, how did you find this issue?\n", "created_at": "2015-04-02T09:06:29Z" }, { "body": "yeah... I need to rebase this branch, but not until your review is terminated :)\n\nYou can skip DeleteMappingAction and associated test.\n", "created_at": "2015-04-02T09:09:11Z" }, { "body": "interesting, so we were previously checking for READ (only data?), which seems like a bug, and now we only look for METADATA_READ blocks?\n", "created_at": "2015-04-02T09:11:00Z" }, { "body": "not related to this change, but I get confused by these calls with empty index. Will they ever return a block? Seems like we should dig here on another PR.\n", "created_at": "2015-04-02T09:12:54Z" }, { "body": "the version here might need to be adjusted depending on the version we get this PR in e.g. if it ends up being 1.6 should be `before(Version.V_1_6_0)`\n", "created_at": "2015-04-02T09:16:49Z" }, { "body": "maybe I would split this in two ifs, the outer one about version, then the inner one around levels.\n", "created_at": "2015-04-02T09:17:40Z" }, { "body": "Yes, I think that's the right thing to do here.\n", "created_at": "2015-04-02T09:17:49Z" }, { "body": "As far as I know they return only global blocks. It looks like this is the way how blocks are checked when working with data that are persisted in cluster state (index templates, snapshot repos)\n", "created_at": "2015-04-02T09:22:19Z" }, { "body": "just a reminder: version might need to be updated here too depending on which branch we backport the PR to. You might have thought about it already, but these are the things that I usually forget about when I push :)\n", "created_at": "2015-04-02T09:26:21Z" }, { "body": "also, s/splitted/split\n", "created_at": "2015-04-02T09:27:08Z" }, { "body": "version reminder here too, and s/splitted/split\n", "created_at": "2015-04-02T09:27:44Z" }, { "body": "Right now we have bw comp logic in two different places (ClusterBlock for writes & ClusterBlockLevel for reads). Does it make sense to isolate this logic in a single place?\n", "created_at": "2015-04-02T09:28:35Z" }, { "body": "I see that some tests use `enableIndexBlock` and some other `setIndexBlocks`. What is the difference? 
Can we unify them and maybe add them as common utility methods?\n", "created_at": "2015-04-02T09:34:01Z" }, { "body": "same as above, this seems the same method as before\n", "created_at": "2015-04-02T09:34:29Z" }, { "body": "Right, will do it.\n", "created_at": "2015-04-02T09:45:22Z" }, { "body": "I also thought of it... Maybe add a ClusterBlockLevel.toId() method?\n", "created_at": "2015-04-02T09:47:11Z" } ], "title": "Add METADATA_READ and METADATA_WRITE blocks" }
{ "commits": [ { "message": "Internal: Add METADATA_READ and METADATA_WRITE blocks\n\nThis commit splits the current ClusterBlockLevel.METADATA into two disctins ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows to make a distinction between\nan operation that modifies the index or cluster metadata and an operation that does not change any metadata.\n\nBefore this commit, many operations where blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when\nthe cluster or the index is read-only.\n\nRelated to #8102, #2833\n\nCloses #3703\nCloses #5855\nCloses #10521\nCloses #10522" } ], "files": [ { "diff": "@@ -0,0 +1,30 @@\n+---\n+\"Test indices.exists on a read only index\":\n+\n+ - do:\n+ indices.create:\n+ index: test_index_ro\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: true\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: false\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''", "filename": "rest-api-spec/test/indices.exists/20_read_only_index.yaml", "status": "added" }, { "diff": "@@ -75,7 +75,8 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(NodesShutdownRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ // Stopping a node impacts the cluster state, so we check for the METADATA_WRITE block here\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/node/shutdown/TransportNodesShutdownAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected DeleteRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected GetRepositoriesResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetRepositoriesRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected PutRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java", "status": "modified" }, { "diff": "@@ -69,7 +69,7 @@ protected VerifyRepositoryResponse 
newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(VerifyRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterRerouteRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request,\n request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && request.transientSettings().get(MetaData.SETTING_READ_ONLY) != null) {\n return null;\n }\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n ", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterSearchShardsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected CreateSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ protected DeleteSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected GetSnapshotsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, 
\"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,13 @@ protected RestoreSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(RestoreSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ // Restoring a snapshot might change the global state and create/change an index,\n+ // so we need to check for METADATA_WRITE and WRITE blocks\n+ ClusterBlockException blockException = state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n+ if (blockException != null) {\n+ return blockException;\n+ }\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java", "status": "modified" }, { "diff": "@@ -70,7 +70,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(SnapshotsStatusRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java", "status": "modified" }, { "diff": "@@ -53,7 +53,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(PendingClusterTasksRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java", "status": "modified" }, { "diff": "@@ -78,7 +78,7 @@ protected ClusterBlockException checkBlock(IndicesAliasesRequest request, Cluste\n indices.add(index);\n }\n }\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, indices.toArray(new String[indices.size()]));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indices.toArray(new String[indices.size()]));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/exists/TransportAliasesExistAction.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, 
state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -175,12 +175,12 @@ protected GroupShardsIterator shards(ClusterState clusterState, ClearIndicesCach\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, ClearIndicesCacheRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, ClearIndicesCacheRequest request, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n \n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/cache/clear/TransportClearIndicesCacheAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(CloseIndexRequest request, ActionListener<CloseIndexRes\n \n @Override\n protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected CreateIndexResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateIndexRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, request.index());\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.index());\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(DeleteIndexRequest request, ActionListener<DeleteIndexR\n \n @Override\n protected ClusterBlockException checkBlock(DeleteIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected IndicesExistsResponse newResponse() {\n protected ClusterBlockException checkBlock(IndicesExistsRequest request, ClusterState state) {\n //make sure through indices options that the concrete indices call never throws IndexMissingException\n IndicesOptions indicesOptions = IndicesOptions.fromOptions(true, 
true, request.indicesOptions().expandWildcardsOpen(), request.indicesOptions().expandWildcardsClosed());\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java", "status": "modified" }, { "diff": "@@ -62,7 +62,7 @@ protected TypesExistsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(TypesExistsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java", "status": "modified" }, { "diff": "@@ -120,11 +120,11 @@ protected GroupShardsIterator shards(ClusterState clusterState, FlushRequest req\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, FlushRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, FlushRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java", "status": "modified" }, { "diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.routing.ShardsIterator;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.inject.Inject;\n@@ -38,10 +40,10 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.IndexService;\n import 
org.elasticsearch.index.mapper.DocumentFieldMappers;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.TypeMissingException;\n@@ -134,6 +136,11 @@ protected GetFieldMappingsResponse newResponse() {\n return new GetFieldMappingsResponse();\n }\n \n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, request.concreteIndex());\n+ }\n+\n private static final ToXContent.Params includeDefaultsParams = new ToXContent.Params() {\n \n final static String INCLUDE_DEFAULTS = \"include_defaults\";", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetMappingsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java", "status": "modified" }, { "diff": "@@ -66,7 +66,7 @@ protected PutMappingResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/put/TransportPutMappingAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(OpenIndexRequest request, ActionListener<OpenIndexRespo\n \n @Override\n protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java", "status": "modified" } ] }
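The restore-snapshot hunk earlier in this record is the one spot where a single action has to clear two block levels before proceeding. Below is a minimal, self-contained sketch of that "return the first non-null block exception" control flow; the class, enum and method names are stand-ins, not the real Elasticsearch types (a String plays the role of ClusterBlockException).

```java
import java.util.Arrays;
import java.util.List;

// Stand-in types: a String plays ClusterBlockException, a List<Level> plays the blocks
// registered in ClusterState. Only the control flow mirrors the diff above.
public class RestoreBlockCheckSketch {
    enum Level { WRITE, METADATA_READ, METADATA_WRITE }

    static String blockedException(List<Level> activeBlocks, Level level) {
        return activeBlocks.contains(level) ? "FORBIDDEN: blocked at " + level : null;
    }

    // restore may change metadata (create/modify indices) *and* write documents,
    // so both levels must be clear before the request proceeds
    static String checkRestoreBlock(List<Level> activeBlocks) {
        String metadataWrite = blockedException(activeBlocks, Level.METADATA_WRITE);
        if (metadataWrite != null) {
            return metadataWrite;
        }
        return blockedException(activeBlocks, Level.WRITE);
    }

    public static void main(String[] args) {
        System.out.println(checkRestoreBlock(Arrays.asList(Level.METADATA_WRITE))); // blocked
        System.out.println(checkRestoreBlock(Arrays.asList(Level.METADATA_READ)));  // null -> allowed
    }
}
```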
{ "body": "_cat is cool!\n\nWe currently run ES 1.0.1.\n\nWhile attempting to lower the overhead of lots of indices (still havent found a solution) I tried making indices readonly. Not sure yet if it has any positive impact, but I found this.\n\nFirst lets make indices readonly:\n\n```\ncurl -s -XPUT 'localhost:9200/logstash-pro-apache-2014.03.*/_settings?index.blocks.read_only=true'\n```\n\nThen execute:\n\n```\ncurl -s 'localhost:9200/_cat/indices/logstash-pro-apache-2014.03*?v'\n```\n\nAnd the pretty result:\n\n```\n{\n \"error\" : \"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\",\n \"status\" : 403\n}\n```\n", "comments": [ { "body": "The _stats api call also fails:\n\n```\nhttp://127.0.0.1:9200/_stats?clear=true&docs=true&store=true&indexing=true&get=true&search=true\n```\n\nResult:\n\n```\nHTTP Error 403: Forbidden\n```\n\nMaybe my expectation is wrong?\n", "created_at": "2014-04-17T14:48:01Z" }, { "body": "@rtoma stats not working on read-only indices is a bug, but I don't think making indices read-only will save any resources.\n", "created_at": "2014-04-18T18:10:51Z" }, { "body": "We discussed it in PR #5876 and decided that more work is needed. Moving it to v1.3.\n", "created_at": "2014-05-07T15:03:55Z" }, { "body": "@imotov bumping this to 1.4\n", "created_at": "2014-07-11T08:47:53Z" }, { "body": "Quoting @imotov from https://github.com/elasticsearch/elasticsearch/pull/5876#issuecomment-42332444\n\n> We will need to revisit this by adding more granular METADATA_READ and METADATA_WRITE blocks and implement this change across other methods such as segments, status, recovery...\n", "created_at": "2014-11-13T12:37:15Z" }, { "body": "Also related to #8102\n", "created_at": "2014-11-13T12:38:20Z" } ], "number": 5855, "title": "ClusterBlockException on /_cat/indices/ on readonly indices" }
{ "body": "While looking at #3703, it looks like the usage of 'read_only' is a bit confusing.\n\nThe documentation stipulates:\n\n> index.blocks.read_only\n> Set to true to have the index read only, false to allow writes and metadata changes.\n\nActually, it blocks all write operations (creating, indexing or deleting a document) but also all read/write operations on index metadata. This can lead to confusion because subsequent 'indices exists' requests will return an undocumented 403 Forbidden response code where one could expect a 200 OK (as reported in #3703). Same for 'type exists' and 'alias exists' requests.\n\nSimilar issues have been reported:\n- _status #2833 (deprecated in 1.2.0)\n- _stats #5876\n- /_cat/indices #5855\n\nBut several API calls returns a 403 when at least one index of the cluster has 'index.blocks.read_only' set to true:\n\n```\nDELETE /_all\n\nPOST /library/book/1\n{\n \"title\": \"Elasticsearch: the definitive guide\"\n}\n\nPOST /movies/dvd/1\n{\n \"title\": \"Elasticsearch: the movie\"\n}\n\nPUT /library/_settings\n{\n \"index\": {\n \"blocks.read_only\": true\n }\n}\n```\n\nThe following calls will return 403 / \"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\":\n\n```\nHEAD /library\nHEAD /library/book\nHEAD /_all/dvd\n\nGET /library/_settings\nGET /library/_mappings\nGET /library/_warmers\nGET /library/_aliases\n\nGET /_settings\nGET /_aliases\nGET /_mappings\nGET /_cat/indices\n```\n\nI agree to @imotov's comment (see https://github.com/elasticsearch/elasticsearch/pull/5876#issuecomment-42273841) and I think we should split the METADATA blocks in METADATA_READ & METADATA_WRITE blocks. This way the 'read_only' setting will block all write operations and metadata updates but the metadata will still be available.\n\nThis is a first step before updating all other actions then removing METADATA block completely.\n\nCloses #3703\nCloses #10521\nCloses #10522\nRelated to #8102\n", "number": 9203, "review_comments": [ { "body": "can you remind me what this assertion does? do we really need it here?\n", "created_at": "2015-03-25T10:42:22Z" }, { "body": "Question: do we keep the old `ClusterBlockLevel.METADATA` as constructor argument only for bw comp right?\n", "created_at": "2015-03-25T10:46:04Z" }, { "body": "maybe verify that exists can be called rather than deleting the whole block?\n", "created_at": "2015-03-25T10:47:15Z" }, { "body": "why delete this test?\n", "created_at": "2015-03-25T10:47:45Z" }, { "body": "nevermind, I see you moved it to a more specific test class, that's fine.\n", "created_at": "2015-03-25T10:48:57Z" }, { "body": "I think we can omit catching this and failing when caught, that's default behaviour, what matters if the `finally` I guess\n", "created_at": "2015-03-25T10:50:15Z" }, { "body": "Also, I'm confused here: `INDEX_METADATA_BLOCK` should block metadata read and metadata writes? Should we pass in `METADATA_READ` as well?\n", "created_at": "2015-03-25T10:54:11Z" }, { "body": "It's needed to check the result of the previous 'indices.exists' request.\n\nFrom [here](https://github.com/elastic/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/rest/client/RestResponse.java#L90):\n\n```\n is_true: '' means the response had no body but the client returned true (caused by 200)\n```\n", "created_at": "2015-03-25T13:06:10Z" }, { "body": "Right, there's something confusing here. 
METADATA must be removed and METADATA_READ must be added.\n", "created_at": "2015-03-25T13:15:53Z" }, { "body": "alright thanks :)\n", "created_at": "2015-03-25T13:31:56Z" }, { "body": "sounds good\n", "created_at": "2015-03-25T13:32:03Z" }, { "body": "I would be more precise on the version, cause it's not clear if it is from 1.5.1 or 1.6.0.\n", "created_at": "2015-03-26T08:06:27Z" }, { "body": "this `id` gets sent over the wire in `ClusterBlock#readFrom` and `ClusterBlock#writeTo`. your change makes it backwards compatible only for reads, cause a 1.6 node that gets 1.2 detects and converts it. But what happens if a newer node sends 3 or 4 to an older node? We need to add some logic based on version of nodes.\n\nAlso, I'd try and make this method package private, not sure why it's public it shouldn't IMO.\n", "created_at": "2015-03-26T08:16:16Z" }, { "body": "You're right. I added some logic in the ClusterBlock.writeTo() method, converting 3/4 values into 2 if the target node has a version <= 1.5.0.\n", "created_at": "2015-03-26T13:21:25Z" }, { "body": "aren't delete mapping & delete by query gone though?\n\nAlso, how did you find this issue?\n", "created_at": "2015-04-02T09:06:29Z" }, { "body": "yeah... I need to rebase this branch, but not until your review is terminated :)\n\nYou can skip DeleteMappingAction and associated test.\n", "created_at": "2015-04-02T09:09:11Z" }, { "body": "interesting, so we were previously checking for READ (only data?), which seems like a bug, and now we only look for METADATA_READ blocks?\n", "created_at": "2015-04-02T09:11:00Z" }, { "body": "not related to this change, but I get confused by these calls with empty index. Will they ever return a block? Seems like we should dig here on another PR.\n", "created_at": "2015-04-02T09:12:54Z" }, { "body": "the version here might need to be adjusted depending on the version we get this PR in e.g. if it ends up being 1.6 should be `before(Version.V_1_6_0)`\n", "created_at": "2015-04-02T09:16:49Z" }, { "body": "maybe I would split this in two ifs, the outer one about version, then the inner one around levels.\n", "created_at": "2015-04-02T09:17:40Z" }, { "body": "Yes, I think that's the right thing to do here.\n", "created_at": "2015-04-02T09:17:49Z" }, { "body": "As far as I know they return only global blocks. It looks like this is the way how blocks are checked when working with data that are persisted in cluster state (index templates, snapshot repos)\n", "created_at": "2015-04-02T09:22:19Z" }, { "body": "just a reminder: version might need to be updated here too depending on which branch we backport the PR to. You might have thought about it already, but these are the things that I usually forget about when I push :)\n", "created_at": "2015-04-02T09:26:21Z" }, { "body": "also, s/splitted/split\n", "created_at": "2015-04-02T09:27:08Z" }, { "body": "version reminder here too, and s/splitted/split\n", "created_at": "2015-04-02T09:27:44Z" }, { "body": "Right now we have bw comp logic in two different places (ClusterBlock for writes & ClusterBlockLevel for reads). Does it make sense to isolate this logic in a single place?\n", "created_at": "2015-04-02T09:28:35Z" }, { "body": "I see that some tests use `enableIndexBlock` and some other `setIndexBlocks`. What is the difference? 
Can we unify them and maybe add them as common utility methods?\n", "created_at": "2015-04-02T09:34:01Z" }, { "body": "same as above, this seems the same method as before\n", "created_at": "2015-04-02T09:34:29Z" }, { "body": "Right, will do it.\n", "created_at": "2015-04-02T09:45:22Z" }, { "body": "I also thought of it... Maybe add a ClusterBlockLevel.toId() method?\n", "created_at": "2015-04-02T09:47:11Z" } ], "title": "Add METADATA_READ and METADATA_WRITE blocks" }
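One review thread above concerns wire compatibility: the numeric block-level id is serialized in ClusterBlock, so the new levels have to be collapsed when talking to a node that predates the split. A rough sketch of that conversion, using the ids mentioned in the thread (2 for the legacy METADATA, 3 and 4 for the split levels, both assumptions here) and a boolean in place of the real version check:

```java
// Ids follow the values mentioned in the review thread; the target-node version check
// is reduced to a boolean for the sake of the sketch.
public class BlockLevelWireCompatSketch {
    static final int METADATA = 2;
    static final int METADATA_READ = 3;
    static final int METADATA_WRITE = 4;

    static int toWireId(int id, boolean targetNodePredatesSplit) {
        if (targetNodePredatesSplit && (id == METADATA_READ || id == METADATA_WRITE)) {
            // an older node only understands the combined METADATA level
            return METADATA;
        }
        return id;
    }

    public static void main(String[] args) {
        System.out.println(toWireId(METADATA_READ, true));   // 2
        System.out.println(toWireId(METADATA_WRITE, false)); // 4
    }
}
```

Per the follow-up comment, the merged change reportedly places this conversion in ClusterBlock.writeTo(), keyed on the target node version.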
{ "commits": [ { "message": "Internal: Add METADATA_READ and METADATA_WRITE blocks\n\nThis commit splits the current ClusterBlockLevel.METADATA into two disctins ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows to make a distinction between\nan operation that modifies the index or cluster metadata and an operation that does not change any metadata.\n\nBefore this commit, many operations where blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when\nthe cluster or the index is read-only.\n\nRelated to #8102, #2833\n\nCloses #3703\nCloses #5855\nCloses #10521\nCloses #10522" } ], "files": [ { "diff": "@@ -0,0 +1,30 @@\n+---\n+\"Test indices.exists on a read only index\":\n+\n+ - do:\n+ indices.create:\n+ index: test_index_ro\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: true\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: false\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''", "filename": "rest-api-spec/test/indices.exists/20_read_only_index.yaml", "status": "added" }, { "diff": "@@ -75,7 +75,8 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(NodesShutdownRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ // Stopping a node impacts the cluster state, so we check for the METADATA_WRITE block here\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/node/shutdown/TransportNodesShutdownAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected DeleteRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected GetRepositoriesResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetRepositoriesRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected PutRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java", "status": "modified" }, { "diff": "@@ -69,7 +69,7 @@ protected VerifyRepositoryResponse 
newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(VerifyRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterRerouteRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request,\n request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && request.transientSettings().get(MetaData.SETTING_READ_ONLY) != null) {\n return null;\n }\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n ", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterSearchShardsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected CreateSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ protected DeleteSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected GetSnapshotsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, 
\"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,13 @@ protected RestoreSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(RestoreSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ // Restoring a snapshot might change the global state and create/change an index,\n+ // so we need to check for METADATA_WRITE and WRITE blocks\n+ ClusterBlockException blockException = state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n+ if (blockException != null) {\n+ return blockException;\n+ }\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java", "status": "modified" }, { "diff": "@@ -70,7 +70,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(SnapshotsStatusRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java", "status": "modified" }, { "diff": "@@ -53,7 +53,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(PendingClusterTasksRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java", "status": "modified" }, { "diff": "@@ -78,7 +78,7 @@ protected ClusterBlockException checkBlock(IndicesAliasesRequest request, Cluste\n indices.add(index);\n }\n }\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, indices.toArray(new String[indices.size()]));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indices.toArray(new String[indices.size()]));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/exists/TransportAliasesExistAction.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, 
state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -175,12 +175,12 @@ protected GroupShardsIterator shards(ClusterState clusterState, ClearIndicesCach\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, ClearIndicesCacheRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, ClearIndicesCacheRequest request, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n \n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/cache/clear/TransportClearIndicesCacheAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(CloseIndexRequest request, ActionListener<CloseIndexRes\n \n @Override\n protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected CreateIndexResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateIndexRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, request.index());\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.index());\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(DeleteIndexRequest request, ActionListener<DeleteIndexR\n \n @Override\n protected ClusterBlockException checkBlock(DeleteIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected IndicesExistsResponse newResponse() {\n protected ClusterBlockException checkBlock(IndicesExistsRequest request, ClusterState state) {\n //make sure through indices options that the concrete indices call never throws IndexMissingException\n IndicesOptions indicesOptions = IndicesOptions.fromOptions(true, 
true, request.indicesOptions().expandWildcardsOpen(), request.indicesOptions().expandWildcardsClosed());\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java", "status": "modified" }, { "diff": "@@ -62,7 +62,7 @@ protected TypesExistsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(TypesExistsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java", "status": "modified" }, { "diff": "@@ -120,11 +120,11 @@ protected GroupShardsIterator shards(ClusterState clusterState, FlushRequest req\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, FlushRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, FlushRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java", "status": "modified" }, { "diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.routing.ShardsIterator;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.inject.Inject;\n@@ -38,10 +40,10 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.IndexService;\n import 
org.elasticsearch.index.mapper.DocumentFieldMappers;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.TypeMissingException;\n@@ -134,6 +136,11 @@ protected GetFieldMappingsResponse newResponse() {\n return new GetFieldMappingsResponse();\n }\n \n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, request.concreteIndex());\n+ }\n+\n private static final ToXContent.Params includeDefaultsParams = new ToXContent.Params() {\n \n final static String INCLUDE_DEFAULTS = \"include_defaults\";", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetMappingsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java", "status": "modified" }, { "diff": "@@ -66,7 +66,7 @@ protected PutMappingResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/put/TransportPutMappingAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(OpenIndexRequest request, ActionListener<OpenIndexRespo\n \n @Override\n protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java", "status": "modified" } ] }
{ "body": "Elasticsearch head requests return 403 when made to readonly index.\n\ncurl recreation of issue here.\n\nhttps://gist.github.com/paulrblakey/6580888\n\nissue originally opened with [ruflin/elastica](https://github.com/ruflin/Elastica/issues/457)\n", "comments": [ { "body": "I've noticed this behavior as well using `POST` for search. It seems like it only allows `GET` requests on readonly indexes.\n", "created_at": "2013-09-18T15:24:13Z" }, { "body": "I'm currently having this issue, it can causes some weird situations.\n\nIf I want to check for the existence of an index, e.g `curl -i -X HEAD localhost:9200/test` you will get a `403` if the index is set to `read_only : true`. So now you have to set `read_only : false` on your index before you know if it exists (and catch the potential `404`)\n", "created_at": "2014-08-14T17:08:22Z" }, { "body": "Setting `read_only: true` on an index blocks:\n- any write operation such as indexing/deleting/updating a document\n- any read & write operation on index metadata such as checking if the index exists, checking if a type exists, and also reading or updating the settings/aliases/warmers/mappings (and many more) of the index.\n\nAs a workaround, you can set `index.block.write: true`. This will block all write operation on the index but you will still be able to check for existance or update the settings/mappings etc.\n\nRelated to #2833, #5855, #5876, #8102\n", "created_at": "2015-01-08T13:57:55Z" }, { "body": "@tlrx Will this workaround to set \"index.block.write: true\" not fulfill the need to make the data read only?\n\nFor logging data, protecting the json documents is what is needed and stopping writes (and hopefully updates and deletes as well) fulfills that need.\n\nTo protect whole indexes, for example for debugging of the health of the index or whatever \"elasticsearch\" related need comes up, by making the whole index read only.\n", "created_at": "2015-01-15T13:12:05Z" } ], "number": 3703, "title": "Elasticsearch head requests return 403 when made to readonly index." }
{ "body": "While looking at #3703, it looks like the usage of 'read_only' is a bit confusing.\n\nThe documentation stipulates:\n\n> index.blocks.read_only\n> Set to true to have the index read only, false to allow writes and metadata changes.\n\nActually, it blocks all write operations (creating, indexing or deleting a document) but also all read/write operations on index metadata. This can lead to confusion because subsequent 'indices exists' requests will return an undocumented 403 Forbidden response code where one could expect a 200 OK (as reported in #3703). Same for 'type exists' and 'alias exists' requests.\n\nSimilar issues have been reported:\n- _status #2833 (deprecated in 1.2.0)\n- _stats #5876\n- /_cat/indices #5855\n\nBut several API calls returns a 403 when at least one index of the cluster has 'index.blocks.read_only' set to true:\n\n```\nDELETE /_all\n\nPOST /library/book/1\n{\n \"title\": \"Elasticsearch: the definitive guide\"\n}\n\nPOST /movies/dvd/1\n{\n \"title\": \"Elasticsearch: the movie\"\n}\n\nPUT /library/_settings\n{\n \"index\": {\n \"blocks.read_only\": true\n }\n}\n```\n\nThe following calls will return 403 / \"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\":\n\n```\nHEAD /library\nHEAD /library/book\nHEAD /_all/dvd\n\nGET /library/_settings\nGET /library/_mappings\nGET /library/_warmers\nGET /library/_aliases\n\nGET /_settings\nGET /_aliases\nGET /_mappings\nGET /_cat/indices\n```\n\nI agree to @imotov's comment (see https://github.com/elasticsearch/elasticsearch/pull/5876#issuecomment-42273841) and I think we should split the METADATA blocks in METADATA_READ & METADATA_WRITE blocks. This way the 'read_only' setting will block all write operations and metadata updates but the metadata will still be available.\n\nThis is a first step before updating all other actions then removing METADATA block completely.\n\nCloses #3703\nCloses #10521\nCloses #10522\nRelated to #8102\n", "number": 9203, "review_comments": [ { "body": "can you remind me what this assertion does? do we really need it here?\n", "created_at": "2015-03-25T10:42:22Z" }, { "body": "Question: do we keep the old `ClusterBlockLevel.METADATA` as constructor argument only for bw comp right?\n", "created_at": "2015-03-25T10:46:04Z" }, { "body": "maybe verify that exists can be called rather than deleting the whole block?\n", "created_at": "2015-03-25T10:47:15Z" }, { "body": "why delete this test?\n", "created_at": "2015-03-25T10:47:45Z" }, { "body": "nevermind, I see you moved it to a more specific test class, that's fine.\n", "created_at": "2015-03-25T10:48:57Z" }, { "body": "I think we can omit catching this and failing when caught, that's default behaviour, what matters if the `finally` I guess\n", "created_at": "2015-03-25T10:50:15Z" }, { "body": "Also, I'm confused here: `INDEX_METADATA_BLOCK` should block metadata read and metadata writes? Should we pass in `METADATA_READ` as well?\n", "created_at": "2015-03-25T10:54:11Z" }, { "body": "It's needed to check the result of the previous 'indices.exists' request.\n\nFrom [here](https://github.com/elastic/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/rest/client/RestResponse.java#L90):\n\n```\n is_true: '' means the response had no body but the client returned true (caused by 200)\n```\n", "created_at": "2015-03-25T13:06:10Z" }, { "body": "Right, there's something confusing here. 
METADATA must be removed and METADATA_READ must be added.\n", "created_at": "2015-03-25T13:15:53Z" }, { "body": "alright thanks :)\n", "created_at": "2015-03-25T13:31:56Z" }, { "body": "sounds good\n", "created_at": "2015-03-25T13:32:03Z" }, { "body": "I would be more precise on the version, cause it's not clear if it is from 1.5.1 or 1.6.0.\n", "created_at": "2015-03-26T08:06:27Z" }, { "body": "this `id` gets sent over the wire in `ClusterBlock#readFrom` and `ClusterBlock#writeTo`. your change makes it backwards compatible only for reads, cause a 1.6 node that gets 1.2 detects and converts it. But what happens if a newer node sends 3 or 4 to an older node? We need to add some logic based on version of nodes.\n\nAlso, I'd try and make this method package private, not sure why it's public it shouldn't IMO.\n", "created_at": "2015-03-26T08:16:16Z" }, { "body": "You're right. I added some logic in the ClusterBlock.writeTo() method, converting 3/4 values into 2 if the target node has a version <= 1.5.0.\n", "created_at": "2015-03-26T13:21:25Z" }, { "body": "aren't delete mapping & delete by query gone though?\n\nAlso, how did you find this issue?\n", "created_at": "2015-04-02T09:06:29Z" }, { "body": "yeah... I need to rebase this branch, but not until your review is terminated :)\n\nYou can skip DeleteMappingAction and associated test.\n", "created_at": "2015-04-02T09:09:11Z" }, { "body": "interesting, so we were previously checking for READ (only data?), which seems like a bug, and now we only look for METADATA_READ blocks?\n", "created_at": "2015-04-02T09:11:00Z" }, { "body": "not related to this change, but I get confused by these calls with empty index. Will they ever return a block? Seems like we should dig here on another PR.\n", "created_at": "2015-04-02T09:12:54Z" }, { "body": "the version here might need to be adjusted depending on the version we get this PR in e.g. if it ends up being 1.6 should be `before(Version.V_1_6_0)`\n", "created_at": "2015-04-02T09:16:49Z" }, { "body": "maybe I would split this in two ifs, the outer one about version, then the inner one around levels.\n", "created_at": "2015-04-02T09:17:40Z" }, { "body": "Yes, I think that's the right thing to do here.\n", "created_at": "2015-04-02T09:17:49Z" }, { "body": "As far as I know they return only global blocks. It looks like this is the way how blocks are checked when working with data that are persisted in cluster state (index templates, snapshot repos)\n", "created_at": "2015-04-02T09:22:19Z" }, { "body": "just a reminder: version might need to be updated here too depending on which branch we backport the PR to. You might have thought about it already, but these are the things that I usually forget about when I push :)\n", "created_at": "2015-04-02T09:26:21Z" }, { "body": "also, s/splitted/split\n", "created_at": "2015-04-02T09:27:08Z" }, { "body": "version reminder here too, and s/splitted/split\n", "created_at": "2015-04-02T09:27:44Z" }, { "body": "Right now we have bw comp logic in two different places (ClusterBlock for writes & ClusterBlockLevel for reads). Does it make sense to isolate this logic in a single place?\n", "created_at": "2015-04-02T09:28:35Z" }, { "body": "I see that some tests use `enableIndexBlock` and some other `setIndexBlocks`. What is the difference? 
Can we unify them and maybe add them as common utility methods?\n", "created_at": "2015-04-02T09:34:01Z" }, { "body": "same as above, this seems the same method as before\n", "created_at": "2015-04-02T09:34:29Z" }, { "body": "Right, will do it.\n", "created_at": "2015-04-02T09:45:22Z" }, { "body": "I also thought of it... Maybe add a ClusterBlockLevel.toId() method?\n", "created_at": "2015-04-02T09:47:11Z" } ], "title": "Add METADATA_READ and METADATA_WRITE blocks" }
{ "commits": [ { "message": "Internal: Add METADATA_READ and METADATA_WRITE blocks\n\nThis commit splits the current ClusterBlockLevel.METADATA into two disctins ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows to make a distinction between\nan operation that modifies the index or cluster metadata and an operation that does not change any metadata.\n\nBefore this commit, many operations where blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when\nthe cluster or the index is read-only.\n\nRelated to #8102, #2833\n\nCloses #3703\nCloses #5855\nCloses #10521\nCloses #10522" } ], "files": [ { "diff": "@@ -0,0 +1,30 @@\n+---\n+\"Test indices.exists on a read only index\":\n+\n+ - do:\n+ indices.create:\n+ index: test_index_ro\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: true\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: false\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''", "filename": "rest-api-spec/test/indices.exists/20_read_only_index.yaml", "status": "added" }, { "diff": "@@ -75,7 +75,8 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(NodesShutdownRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ // Stopping a node impacts the cluster state, so we check for the METADATA_WRITE block here\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/node/shutdown/TransportNodesShutdownAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected DeleteRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected GetRepositoriesResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetRepositoriesRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected PutRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java", "status": "modified" }, { "diff": "@@ -69,7 +69,7 @@ protected VerifyRepositoryResponse 
newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(VerifyRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterRerouteRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request,\n request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && request.transientSettings().get(MetaData.SETTING_READ_ONLY) != null) {\n return null;\n }\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n ", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterSearchShardsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected CreateSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ protected DeleteSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected GetSnapshotsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, 
\"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,13 @@ protected RestoreSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(RestoreSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ // Restoring a snapshot might change the global state and create/change an index,\n+ // so we need to check for METADATA_WRITE and WRITE blocks\n+ ClusterBlockException blockException = state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n+ if (blockException != null) {\n+ return blockException;\n+ }\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java", "status": "modified" }, { "diff": "@@ -70,7 +70,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(SnapshotsStatusRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java", "status": "modified" }, { "diff": "@@ -53,7 +53,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(PendingClusterTasksRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java", "status": "modified" }, { "diff": "@@ -78,7 +78,7 @@ protected ClusterBlockException checkBlock(IndicesAliasesRequest request, Cluste\n indices.add(index);\n }\n }\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, indices.toArray(new String[indices.size()]));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indices.toArray(new String[indices.size()]));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/exists/TransportAliasesExistAction.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, 
state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -175,12 +175,12 @@ protected GroupShardsIterator shards(ClusterState clusterState, ClearIndicesCach\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, ClearIndicesCacheRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, ClearIndicesCacheRequest request, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n \n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/cache/clear/TransportClearIndicesCacheAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(CloseIndexRequest request, ActionListener<CloseIndexRes\n \n @Override\n protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected CreateIndexResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateIndexRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, request.index());\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.index());\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(DeleteIndexRequest request, ActionListener<DeleteIndexR\n \n @Override\n protected ClusterBlockException checkBlock(DeleteIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected IndicesExistsResponse newResponse() {\n protected ClusterBlockException checkBlock(IndicesExistsRequest request, ClusterState state) {\n //make sure through indices options that the concrete indices call never throws IndexMissingException\n IndicesOptions indicesOptions = IndicesOptions.fromOptions(true, 
true, request.indicesOptions().expandWildcardsOpen(), request.indicesOptions().expandWildcardsClosed());\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java", "status": "modified" }, { "diff": "@@ -62,7 +62,7 @@ protected TypesExistsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(TypesExistsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java", "status": "modified" }, { "diff": "@@ -120,11 +120,11 @@ protected GroupShardsIterator shards(ClusterState clusterState, FlushRequest req\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, FlushRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, FlushRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java", "status": "modified" }, { "diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.routing.ShardsIterator;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.inject.Inject;\n@@ -38,10 +40,10 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.IndexService;\n import 
org.elasticsearch.index.mapper.DocumentFieldMappers;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.TypeMissingException;\n@@ -134,6 +136,11 @@ protected GetFieldMappingsResponse newResponse() {\n return new GetFieldMappingsResponse();\n }\n \n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, request.concreteIndex());\n+ }\n+\n private static final ToXContent.Params includeDefaultsParams = new ToXContent.Params() {\n \n final static String INCLUDE_DEFAULTS = \"include_defaults\";", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetMappingsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java", "status": "modified" }, { "diff": "@@ -66,7 +66,7 @@ protected PutMappingResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/put/TransportPutMappingAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(OpenIndexRequest request, ActionListener<OpenIndexRespo\n \n @Override\n protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java", "status": "modified" } ] }
{ "body": "In trying to protect completed daily indices from future changes, we tried setting index.blocks.read_only : true, but found that we were then unable to query _status, and head plugin could no longer display shard information. I think this best illustrates the behavior we're seeing, but not expecting:\n\ncurl -XGET 'http://localhost:9200/messages_20130301/_status'\n{\"error\":\"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\",\"status\":403}\n\nAlso, as an aside: what's the difference between index.blocks.read_only : true and index.blocks.write : true (which lets me query status and prevents delete?)\n", "comments": [ { "body": "+1 on that. \nany plans on fixing this soon? even though i'm not really using this setting, i'm developing an admin tool that allows changing index settings and this just breaks everything.\n", "created_at": "2013-08-15T00:09:39Z" }, { "body": "just as a side note, blocks.metadata also prevents _status from working correctly.\n", "created_at": "2013-08-15T00:16:44Z" }, { "body": "No fix for this till now ?\n", "created_at": "2014-07-27T14:45:47Z" }, { "body": "Related #8102, #5855, #5876\n", "created_at": "2014-11-29T14:20:08Z" }, { "body": "Closed in af79a2ae49073cd18d415d3c7bc4a798c6ffd9f1\n", "created_at": "2015-04-23T13:23:43Z" } ], "number": 2833, "title": "index.blocks.read_only prevents _status" }
{ "body": "While looking at #3703, it looks like the usage of 'read_only' is a bit confusing.\n\nThe documentation stipulates:\n\n> index.blocks.read_only\n> Set to true to have the index read only, false to allow writes and metadata changes.\n\nActually, it blocks all write operations (creating, indexing or deleting a document) but also all read/write operations on index metadata. This can lead to confusion because subsequent 'indices exists' requests will return an undocumented 403 Forbidden response code where one could expect a 200 OK (as reported in #3703). Same for 'type exists' and 'alias exists' requests.\n\nSimilar issues have been reported:\n- _status #2833 (deprecated in 1.2.0)\n- _stats #5876\n- /_cat/indices #5855\n\nBut several API calls returns a 403 when at least one index of the cluster has 'index.blocks.read_only' set to true:\n\n```\nDELETE /_all\n\nPOST /library/book/1\n{\n \"title\": \"Elasticsearch: the definitive guide\"\n}\n\nPOST /movies/dvd/1\n{\n \"title\": \"Elasticsearch: the movie\"\n}\n\nPUT /library/_settings\n{\n \"index\": {\n \"blocks.read_only\": true\n }\n}\n```\n\nThe following calls will return 403 / \"ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]\":\n\n```\nHEAD /library\nHEAD /library/book\nHEAD /_all/dvd\n\nGET /library/_settings\nGET /library/_mappings\nGET /library/_warmers\nGET /library/_aliases\n\nGET /_settings\nGET /_aliases\nGET /_mappings\nGET /_cat/indices\n```\n\nI agree to @imotov's comment (see https://github.com/elasticsearch/elasticsearch/pull/5876#issuecomment-42273841) and I think we should split the METADATA blocks in METADATA_READ & METADATA_WRITE blocks. This way the 'read_only' setting will block all write operations and metadata updates but the metadata will still be available.\n\nThis is a first step before updating all other actions then removing METADATA block completely.\n\nCloses #3703\nCloses #10521\nCloses #10522\nRelated to #8102\n", "number": 9203, "review_comments": [ { "body": "can you remind me what this assertion does? do we really need it here?\n", "created_at": "2015-03-25T10:42:22Z" }, { "body": "Question: do we keep the old `ClusterBlockLevel.METADATA` as constructor argument only for bw comp right?\n", "created_at": "2015-03-25T10:46:04Z" }, { "body": "maybe verify that exists can be called rather than deleting the whole block?\n", "created_at": "2015-03-25T10:47:15Z" }, { "body": "why delete this test?\n", "created_at": "2015-03-25T10:47:45Z" }, { "body": "nevermind, I see you moved it to a more specific test class, that's fine.\n", "created_at": "2015-03-25T10:48:57Z" }, { "body": "I think we can omit catching this and failing when caught, that's default behaviour, what matters if the `finally` I guess\n", "created_at": "2015-03-25T10:50:15Z" }, { "body": "Also, I'm confused here: `INDEX_METADATA_BLOCK` should block metadata read and metadata writes? Should we pass in `METADATA_READ` as well?\n", "created_at": "2015-03-25T10:54:11Z" }, { "body": "It's needed to check the result of the previous 'indices.exists' request.\n\nFrom [here](https://github.com/elastic/elasticsearch/blob/master/src/test/java/org/elasticsearch/test/rest/client/RestResponse.java#L90):\n\n```\n is_true: '' means the response had no body but the client returned true (caused by 200)\n```\n", "created_at": "2015-03-25T13:06:10Z" }, { "body": "Right, there's something confusing here. 
METADATA must be removed and METADATA_READ must be added.\n", "created_at": "2015-03-25T13:15:53Z" }, { "body": "alright thanks :)\n", "created_at": "2015-03-25T13:31:56Z" }, { "body": "sounds good\n", "created_at": "2015-03-25T13:32:03Z" }, { "body": "I would be more precise on the version, cause it's not clear if it is from 1.5.1 or 1.6.0.\n", "created_at": "2015-03-26T08:06:27Z" }, { "body": "this `id` gets sent over the wire in `ClusterBlock#readFrom` and `ClusterBlock#writeTo`. your change makes it backwards compatible only for reads, cause a 1.6 node that gets 1.2 detects and converts it. But what happens if a newer node sends 3 or 4 to an older node? We need to add some logic based on version of nodes.\n\nAlso, I'd try and make this method package private, not sure why it's public it shouldn't IMO.\n", "created_at": "2015-03-26T08:16:16Z" }, { "body": "You're right. I added some logic in the ClusterBlock.writeTo() method, converting 3/4 values into 2 if the target node has a version <= 1.5.0.\n", "created_at": "2015-03-26T13:21:25Z" }, { "body": "aren't delete mapping & delete by query gone though?\n\nAlso, how did you find this issue?\n", "created_at": "2015-04-02T09:06:29Z" }, { "body": "yeah... I need to rebase this branch, but not until your review is terminated :)\n\nYou can skip DeleteMappingAction and associated test.\n", "created_at": "2015-04-02T09:09:11Z" }, { "body": "interesting, so we were previously checking for READ (only data?), which seems like a bug, and now we only look for METADATA_READ blocks?\n", "created_at": "2015-04-02T09:11:00Z" }, { "body": "not related to this change, but I get confused by these calls with empty index. Will they ever return a block? Seems like we should dig here on another PR.\n", "created_at": "2015-04-02T09:12:54Z" }, { "body": "the version here might need to be adjusted depending on the version we get this PR in e.g. if it ends up being 1.6 should be `before(Version.V_1_6_0)`\n", "created_at": "2015-04-02T09:16:49Z" }, { "body": "maybe I would split this in two ifs, the outer one about version, then the inner one around levels.\n", "created_at": "2015-04-02T09:17:40Z" }, { "body": "Yes, I think that's the right thing to do here.\n", "created_at": "2015-04-02T09:17:49Z" }, { "body": "As far as I know they return only global blocks. It looks like this is the way how blocks are checked when working with data that are persisted in cluster state (index templates, snapshot repos)\n", "created_at": "2015-04-02T09:22:19Z" }, { "body": "just a reminder: version might need to be updated here too depending on which branch we backport the PR to. You might have thought about it already, but these are the things that I usually forget about when I push :)\n", "created_at": "2015-04-02T09:26:21Z" }, { "body": "also, s/splitted/split\n", "created_at": "2015-04-02T09:27:08Z" }, { "body": "version reminder here too, and s/splitted/split\n", "created_at": "2015-04-02T09:27:44Z" }, { "body": "Right now we have bw comp logic in two different places (ClusterBlock for writes & ClusterBlockLevel for reads). Does it make sense to isolate this logic in a single place?\n", "created_at": "2015-04-02T09:28:35Z" }, { "body": "I see that some tests use `enableIndexBlock` and some other `setIndexBlocks`. What is the difference? 
Can we unify them and maybe add them as common utility methods?\n", "created_at": "2015-04-02T09:34:01Z" }, { "body": "same as above, this seems the same method as before\n", "created_at": "2015-04-02T09:34:29Z" }, { "body": "Right, will do it.\n", "created_at": "2015-04-02T09:45:22Z" }, { "body": "I also thought of it... Maybe add a ClusterBlockLevel.toId() method?\n", "created_at": "2015-04-02T09:47:11Z" } ], "title": "Add METADATA_READ and METADATA_WRITE blocks" }
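The review thread above converges on handling wire compatibility inside the block classes themselves: the new METADATA_READ/METADATA_WRITE levels keep their own ids (3 and 4), serializing to a node from before the split falls back to the legacy METADATA id (2), and a legacy id read from an old node expands to both new levels. Below is a minimal standalone sketch of that idea, assuming ids 0/1 for READ/WRITE and modelling the version check as a plain boolean; the real logic lives in `ClusterBlock#writeTo`/`readFrom` and differs in detail.

```java
// Standalone sketch, not the real ClusterBlock/ClusterBlockLevel classes.
// Ids 3/4 and the legacy id 2 come from the review discussion above; READ=0/WRITE=1 are assumed.
import java.util.EnumSet;

public class BlockLevelWireFormat {

    enum Level {
        READ(0), WRITE(1), METADATA_READ(3), METADATA_WRITE(4);

        final int id;

        Level(int id) {
            this.id = id;
        }
    }

    static final int LEGACY_METADATA_ID = 2;

    /** Pick the id to serialize for a target node; {@code targetIsPreSplit} models "version <= 1.5.0". */
    static int toWireId(Level level, boolean targetIsPreSplit) {
        if (targetIsPreSplit && (level == Level.METADATA_READ || level == Level.METADATA_WRITE)) {
            return LEGACY_METADATA_ID; // old nodes only know the combined METADATA level
        }
        return level.id;
    }

    /** Reverse mapping on the read side: a legacy METADATA id expands to both new levels. */
    static EnumSet<Level> fromWireId(int id) {
        if (id == LEGACY_METADATA_ID) {
            return EnumSet.of(Level.METADATA_READ, Level.METADATA_WRITE);
        }
        for (Level level : Level.values()) {
            if (level.id == id) {
                return EnumSet.of(level);
            }
        }
        throw new IllegalArgumentException("unknown block level id [" + id + "]");
    }

    public static void main(String[] args) {
        System.out.println(toWireId(Level.METADATA_WRITE, true));  // 2 for a pre-split node
        System.out.println(toWireId(Level.METADATA_WRITE, false)); // 4 for a current node
        System.out.println(fromWireId(2));                         // [METADATA_READ, METADATA_WRITE]
    }
}
```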
{ "commits": [ { "message": "Internal: Add METADATA_READ and METADATA_WRITE blocks\n\nThis commit splits the current ClusterBlockLevel.METADATA into two disctins ClusterBlockLevel.METADATA_READ and ClusterBlockLevel.METADATA_WRITE blocks. It allows to make a distinction between\nan operation that modifies the index or cluster metadata and an operation that does not change any metadata.\n\nBefore this commit, many operations where blocked when the cluster was read-only: Cluster Stats, Get Mappings, Get Snapshot, Get Index Settings, etc. Now those operations are allowed even when\nthe cluster or the index is read-only.\n\nRelated to #8102, #2833\n\nCloses #3703\nCloses #5855\nCloses #10521\nCloses #10522" } ], "files": [ { "diff": "@@ -0,0 +1,30 @@\n+---\n+\"Test indices.exists on a read only index\":\n+\n+ - do:\n+ indices.create:\n+ index: test_index_ro\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: true\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''\n+\n+ - do:\n+ indices.put_settings:\n+ index: test_index_ro\n+ body:\n+ index.blocks.read_only: false\n+\n+ - do:\n+ indices.exists:\n+ index: test_index_ro\n+\n+ - is_true: ''", "filename": "rest-api-spec/test/indices.exists/20_read_only_index.yaml", "status": "added" }, { "diff": "@@ -75,7 +75,8 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(NodesShutdownRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ // Stopping a node impacts the cluster state, so we check for the METADATA_WRITE block here\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/node/shutdown/TransportNodesShutdownAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected DeleteRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/delete/TransportDeleteRepositoryAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected GetRepositoriesResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetRepositoriesRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/TransportGetRepositoriesAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected PutRepositoryResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/TransportPutRepositoryAction.java", "status": "modified" }, { "diff": "@@ -69,7 +69,7 @@ protected VerifyRepositoryResponse 
newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(VerifyRepositoryRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/TransportVerifyRepositoryAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterRerouteRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/reroute/TransportClusterRerouteAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request,\n request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && request.transientSettings().get(MetaData.SETTING_READ_ONLY) != null) {\n return null;\n }\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n ", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -58,7 +58,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(ClusterSearchShardsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected CreateSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/TransportCreateSnapshotAction.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ protected DeleteSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(DeleteSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/delete/TransportDeleteSnapshotAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected GetSnapshotsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(GetSnapshotsRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, 
\"\");\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,13 @@ protected RestoreSnapshotResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(RestoreSnapshotRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, \"\");\n+ // Restoring a snapshot might change the global state and create/change an index,\n+ // so we need to check for METADATA_WRITE and WRITE blocks\n+ ClusterBlockException blockException = state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, \"\");\n+ if (blockException != null) {\n+ return blockException;\n+ }\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.WRITE, \"\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/TransportRestoreSnapshotAction.java", "status": "modified" }, { "diff": "@@ -70,7 +70,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(SnapshotsStatusRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java", "status": "modified" }, { "diff": "@@ -53,7 +53,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(PendingClusterTasksRequest request, ClusterState state) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_READ);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java", "status": "modified" }, { "diff": "@@ -78,7 +78,7 @@ protected ClusterBlockException checkBlock(IndicesAliasesRequest request, Cluste\n indices.add(index);\n }\n }\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, indices.toArray(new String[indices.size()]));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indices.toArray(new String[indices.size()]));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java", "status": "modified" }, { "diff": "@@ -49,7 +49,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/exists/TransportAliasesExistAction.java", "status": "modified" }, { "diff": "@@ -52,7 +52,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetAliasesRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, 
state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/get/TransportGetAliasesAction.java", "status": "modified" }, { "diff": "@@ -175,12 +175,12 @@ protected GroupShardsIterator shards(ClusterState clusterState, ClearIndicesCach\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, ClearIndicesCacheRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, ClearIndicesCacheRequest request, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n \n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/cache/clear/TransportClearIndicesCacheAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(CloseIndexRequest request, ActionListener<CloseIndexRes\n \n @Override\n protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java", "status": "modified" }, { "diff": "@@ -67,7 +67,7 @@ protected CreateIndexResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(CreateIndexRequest request, ClusterState state) {\n- return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA, request.index());\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_WRITE, request.index());\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(DeleteIndexRequest request, ActionListener<DeleteIndexR\n \n @Override\n protected ClusterBlockException checkBlock(DeleteIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java", "status": "modified" }, { "diff": "@@ -65,7 +65,7 @@ protected IndicesExistsResponse newResponse() {\n protected ClusterBlockException checkBlock(IndicesExistsRequest request, ClusterState state) {\n //make sure through indices options that the concrete indices call never throws IndexMissingException\n IndicesOptions indicesOptions = IndicesOptions.fromOptions(true, 
true, request.indicesOptions().expandWildcardsOpen(), request.indicesOptions().expandWildcardsClosed());\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, clusterService.state().metaData().concreteIndices(indicesOptions, request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/indices/TransportIndicesExistsAction.java", "status": "modified" }, { "diff": "@@ -62,7 +62,7 @@ protected TypesExistsResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(TypesExistsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/exists/types/TransportTypesExistsAction.java", "status": "modified" }, { "diff": "@@ -120,11 +120,11 @@ protected GroupShardsIterator shards(ClusterState clusterState, FlushRequest req\n \n @Override\n protected ClusterBlockException checkGlobalBlock(ClusterState state, FlushRequest request) {\n- return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA);\n+ return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);\n }\n \n @Override\n protected ClusterBlockException checkRequestBlock(ClusterState state, FlushRequest countRequest, String[] concreteIndices) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, concreteIndices);\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, concreteIndices);\n }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/flush/TransportFlushAction.java", "status": "modified" }, { "diff": "@@ -60,7 +60,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/get/TransportGetIndexAction.java", "status": "modified" }, { "diff": "@@ -29,6 +29,8 @@\n import org.elasticsearch.action.support.single.custom.TransportSingleCustomOperationAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.block.ClusterBlockException;\n+import org.elasticsearch.cluster.block.ClusterBlockLevel;\n import org.elasticsearch.cluster.routing.ShardsIterator;\n import org.elasticsearch.common.collect.MapBuilder;\n import org.elasticsearch.common.inject.Inject;\n@@ -38,10 +40,10 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentType;\n+import org.elasticsearch.index.IndexService;\n import 
org.elasticsearch.index.mapper.DocumentFieldMappers;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMapper;\n-import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.indices.IndicesService;\n import org.elasticsearch.indices.TypeMissingException;\n@@ -134,6 +136,11 @@ protected GetFieldMappingsResponse newResponse() {\n return new GetFieldMappingsResponse();\n }\n \n+ @Override\n+ protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) {\n+ return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, request.concreteIndex());\n+ }\n+\n private static final ToXContent.Params includeDefaultsParams = new ToXContent.Params() {\n \n final static String INCLUDE_DEFAULTS = \"include_defaults\";", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java", "status": "modified" }, { "diff": "@@ -51,7 +51,7 @@ protected String executor() {\n \n @Override\n protected ClusterBlockException checkBlock(GetMappingsRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_READ, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetMappingsAction.java", "status": "modified" }, { "diff": "@@ -66,7 +66,7 @@ protected PutMappingResponse newResponse() {\n \n @Override\n protected ClusterBlockException checkBlock(PutMappingRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/put/TransportPutMappingAction.java", "status": "modified" }, { "diff": "@@ -76,7 +76,7 @@ protected void doExecute(OpenIndexRequest request, ActionListener<OpenIndexRespo\n \n @Override\n protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterState state) {\n- return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n+ return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, state.metaData().concreteIndices(request.indicesOptions(), request.indices()));\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java", "status": "modified" } ] }
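Most of the `checkBlock` changes above map an action to a single level, but `TransportRestoreSnapshotAction` now checks two levels and returns the first non-null exception, since a restore writes both metadata and data. Here is a small self-contained model of that "first non-null block wins" pattern, using hypothetical `Block`/`Level` types rather than the real `ClusterState`/`ClusterBlocks` API.

```java
import java.util.EnumSet;
import java.util.List;

// Illustrative model only: the real actions call state.blocks().indexBlockedException(level, index)
// per level and return the first non-null ClusterBlockException, as in the restore-snapshot diff above.
public class CompositeBlockCheck {

    enum Level { READ, WRITE, METADATA_READ, METADATA_WRITE }

    record Block(String description, EnumSet<Level> levels) { }

    /** Returns a message for the first block that denies one of the requested levels, or null if none do. */
    static String checkBlocks(List<Block> activeBlocks, Level... requiredLevels) {
        for (Level level : requiredLevels) {           // levels are checked in the order the action lists them
            for (Block block : activeBlocks) {
                if (block.levels().contains(level)) {
                    return "blocked by: [" + block.description() + "] at level " + level;
                }
            }
        }
        return null; // no block applies, the action may proceed
    }

    public static void main(String[] args) {
        // A read-only index blocks writes and metadata writes, but not metadata reads.
        List<Block> readOnlyIndex = List.of(
                new Block("index read-only (api)", EnumSet.of(Level.WRITE, Level.METADATA_WRITE)));

        // Restore touches both metadata and data, so it asks for both levels and is denied.
        System.out.println(checkBlocks(readOnlyIndex, Level.METADATA_WRITE, Level.WRITE));
        // A get-mappings style action only needs METADATA_READ and passes (prints null).
        System.out.println(checkBlocks(readOnlyIndex, Level.METADATA_READ));
    }
}
```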
{ "body": "We have 6 servers and 14 shards in cluster, the index size 26GB, we have 1 replica so total size is 52GB, and ES v1.4.0, java version \"1.7.0_65\"\nWe use servers with RAM of 14GB (m3.xlarge), and heap is set to 7GB\n\nAfter update from 0.90 we started facing next issue:\nrandom cluster servers around once a day/two hits the heap size limit (java.lang.OutOfMemoryError: Java heap space) in log, and cluster falls - becomes red or yellow\n\nWe tried to add more servers to cluster - even 8, but than it's a matter of time when we'll hit the problem, so looks like there is no matter how many servers are in cluster - it still hits the limit after some time.\nBefore we started facing the problem we were running smoothly with 3 servers\nAlso we tried to set indices.fielddata.cache.size: 40% but it didnt helped\nAlso, there are possible workarounds to flush heap:\n1) reboot some server - than heap becomes under 70% and for some time cluster is ok\nor\n2) decrease number of replicas to 0, and than back to 1\n\nUpgrade to 1.4.1 hasn't solved an issue. \n\nFinally <b>found the query causing the cluster crashes</b>. After I commented code doing this - the claster is ok for few days. Before it was crashing once a day in average.\n\nthe query looks like:\n\n```\n {\n \"sort\": [\n {\n \"user_last_contacted.ct\": {\n \"nested_filter\": {\n \"term\": {\n \"user_last_contacted.owner_id\": \"542b2b7fb0bc2244056fd90f\"\n }\n },\n \"order\": \"desc\",\n \"missing\": \"_last\"\n }\n }\n ],\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"term\": {\n \"company_id\": \"52c0e0b7e0534664db9dfb9a\"\n }\n },\n \"query\": {\n \"match_all\": {}\n }\n }\n },\n \"explain\": false,\n \"from\": 0,\n \"size\": 100\n }\n```\n\nthe mapping looks like:\n\n```\n \"contact\": {\n \"_all\": {\n \"type\": \"string\",\n \"enabled\": true,\n \"analyzer\": \"default_full\",\n \"index\": \"analyzed\"\n },\n \"_routing\": {\n \"path\": \"company_id\",\n \"required\": true\n },\n \"_source\": {\n \"enabled\": false\n },\n \"include_in_all\": true,\n \"dynamic\": false,\n \"properties\": {\n \"user_last_contacted\": {\n \"include_in_all\": false,\n \"dynamic\": false,\n \"type\": \"nested\",\n \"properties\": {\n \"ct\": {\n \"include_in_all\": false,\n \"index\": \"not_analyzed\",\n \"type\": \"date\"\n },\n \"owner_id\": {\n \"type\": \"string\"\n }\n }\n }...\n```\n\nuser_last_contacted is an array field with nested objects. The size of the array can be 100+ items.\n", "comments": [ { "body": "@martijnvg could you take a look at this one please?\n", "created_at": "2014-12-09T12:46:17Z" }, { "body": "@serj-p This looks related to #8394, wrong cache behaviour (not the filter cache, but for in the fixed bitset cache for nested object fields) is causing higher heap usage that is causing OOM. Can you try to upgrade to version 1.4.1 this should resolve the OOM.\n", "created_at": "2014-12-16T17:21:18Z" }, { "body": "As I mentioned, upgrade to 1.4.1 hasn't solved the issue. \n", "created_at": "2014-12-16T17:32:41Z" }, { "body": "Sorry, I read your description too quickly... I see why an OOM can occur with nested sorting, the fix for the fixed bitset cache that was added in 1.4.1 (#8440) missed to do the change for nested sorting.\n\nAre you using using the `nested` query/filter or `nested` aggregator in another search request by any chance? 
If so, can you confirm that this is working without eventually going OOM?\n", "created_at": "2014-12-16T20:51:13Z" }, { "body": "I'm not using the `nested` aggregator, but I am using the `nested` query/filter, and I can confirm that the cluster has been running smoothly for two weeks since I disabled `nested` sorting.\n", "created_at": "2014-12-16T21:10:06Z" }, { "body": "@serj-p Thanks for confirming this. I'll fix this issue with nested sorting.\n", "created_at": "2014-12-16T21:47:23Z" } ], "number": 8810, "title": "java.lang.OutOfMemoryError: Java heap space after upgrade from 0.90" }
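The comments above place the blame on the fixed bitset cache rather than the regular filter cache: with nested sorting, the per-request `nested_filter` (here a `term` filter on `owner_id`, so potentially one distinct filter per user) was still being cached as a `FixedBitSet` per segment, alongside the parent filter. Since a `FixedBitSet` costs roughly one bit per document in its segment, a back-of-the-envelope estimate shows how that grows; every number below is hypothetical and only illustrates the failure mode, not the reporter's actual heap usage.

```java
// Back-of-the-envelope heap estimate for cached FixedBitSets (every input below is hypothetical).
// A FixedBitSet over maxDoc documents is backed by a long[] of roughly maxDoc / 8 bytes.
public class BitsetCacheEstimate {

    static long bitsetBytes(long maxDoc) {
        return (maxDoc + 7) / 8; // one bit per document, rounded up to whole bytes
    }

    public static void main(String[] args) {
        long docsPerSegment = 2_000_000;        // hypothetical average segment size
        int segmentsPerShard = 30;              // hypothetical
        int shardsPerNode = 5;                  // hypothetical
        long distinctNestedFilters = 5_000;     // e.g. one term filter per owner_id used as a nested_filter

        // Parent (root docs) filter: one bitset per segment, shared by every request.
        long parentBytes = bitsetBytes(docsPerSegment) * segmentsPerShard * shardsPerNode;

        // Child nested_filter before the fix: one bitset per distinct filter per segment.
        long childBytes = parentBytes * distinctNestedFilters;

        System.out.printf("parent filter bitsets: %,d MB%n", parentBytes / (1024 * 1024));
        System.out.printf("child filter bitsets : %,d GB%n", childBytes / (1024L * 1024 * 1024));
    }
}
```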
{ "body": "By executing in docid order the nested sorting doesn't need two random access bitsets, but just one (the parent bitset). In practise for mappings with only single level nested fields this means that only one bitset needs to be put in the bitset cache.\n\nBecause FieldComparator can't tell the IndexSearcher to not execute out of order like a Collector, this PR adds a setting the SearchContext that enforces in order docid execution, which the sort parsing enables if it encounters nested sorting. This was the cleanest way I could find in order for nested sorting to not use many bitsets which get cached by the bitset cache.\n\nPR for #8810\n", "number": 9199, "review_comments": [ { "body": "Maybe this should just be:\n\n``` java\n @Override\n public void setRequireDocsCollectedInOrder() {\n this.executeDocsInOrder = true;\n }\n```\n\nThis way we would be sure that collection happens in order if any of the features that are used require in-order scoring? (Otherwise, you could override a previously set value.)\n", "created_at": "2015-01-24T12:06:40Z" }, { "body": "we could call nextDoc here instead of advance(doc + 1) I think?\n", "created_at": "2015-01-24T12:12:17Z" }, { "body": "Maybe we should also assert here that `rootDoc >= lastSeenRootDoc`?\n", "created_at": "2015-01-24T12:15:00Z" }, { "body": "I think you should only advance if the current doc is not already beyond prevRootDoc?\n", "created_at": "2015-01-24T12:17:06Z" }, { "body": "and I don't think we need to check that `doc > prevRootDoc`, this should be guaranteed since we advanced the iterator before this loop?\n", "created_at": "2015-01-24T12:17:53Z" }, { "body": "this ternary statement + assignement is a bit long, maybe a regular if/else would make it easier to read?\n", "created_at": "2015-01-24T12:25:03Z" }, { "body": "makes sense, will change this\n", "created_at": "2015-01-26T21:13:09Z" } ], "title": "Nested sorting: only use random access bitset for parent" }
{ "commits": [ { "message": "Don't use the fixed bitset filter cache for child nested level filters, but the regular filter cache instead.\n\nRandom access based bitsets are not required for the child level nested level filters.\n\nCloses #8810" }, { "message": "applied feedback" } ], "files": [ { "diff": "@@ -129,9 +129,11 @@ public abstract class XFieldComparatorSource extends FieldComparatorSource {\n * parent + 1, or 0 if there is no previous parent, and R (excluded).\n */\n public static class Nested {\n- private final FixedBitSetFilter rootFilter, innerFilter;\n \n- public Nested(FixedBitSetFilter rootFilter, FixedBitSetFilter innerFilter) {\n+ private final FixedBitSetFilter rootFilter;\n+ private final Filter innerFilter;\n+\n+ public Nested(FixedBitSetFilter rootFilter, Filter innerFilter) {\n this.rootFilter = rootFilter;\n this.innerFilter = innerFilter;\n }\n@@ -146,7 +148,7 @@ public FixedBitSet rootDocs(AtomicReaderContext ctx) throws IOException {\n /**\n * Get a {@link FixedBitSet} that matches the inner documents.\n */\n- public FixedBitSet innerDocs(AtomicReaderContext ctx) throws IOException {\n+ public DocIdSet innerDocs(AtomicReaderContext ctx) throws IOException {\n return innerFilter.getDocIdSet(ctx, null);\n }\n }", "filename": "src/main/java/org/elasticsearch/index/fielddata/IndexFieldData.java", "status": "modified" }, { "diff": "@@ -23,6 +23,7 @@\n import org.apache.lucene.index.BinaryDocValues;\n import org.apache.lucene.index.RandomAccessOrds;\n import org.apache.lucene.index.SortedDocValues;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.Scorer;\n import org.apache.lucene.search.SortField;\n@@ -81,7 +82,7 @@ protected SortedDocValues getSortedDocValues(AtomicReaderContext context, String\n selectedValues = sortMode.select(values);\n } else {\n final FixedBitSet rootDocs = nested.rootDocs(context);\n- final FixedBitSet innerDocs = nested.innerDocs(context);\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, rootDocs, innerDocs);\n }\n if (sortMissingFirst(missingValue) || sortMissingLast(missingValue)) {\n@@ -132,7 +133,7 @@ protected BinaryDocValues getBinaryDocValues(AtomicReaderContext context, String\n selectedValues = sortMode.select(values, nonNullMissingBytes);\n } else {\n final FixedBitSet rootDocs = nested.rootDocs(context);\n- final FixedBitSet innerDocs = nested.innerDocs(context);\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, nonNullMissingBytes, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return selectedValues;", "filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.fielddata.fieldcomparator;\n \n import org.apache.lucene.index.AtomicReaderContext;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldCache.Doubles;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.Scorer;\n@@ -78,7 +79,7 @@ protected Doubles getDoubleValues(AtomicReaderContext context, String field) thr\n selectedValues = sortMode.select(values, dMissingValue);\n } else {\n final FixedBitSet rootDocs = nested.rootDocs(context);\n- final FixedBitSet innerDocs = nested.innerDocs(context);\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = 
sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return new Doubles() {", "filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/DoubleValuesComparatorSource.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n package org.elasticsearch.index.fielddata.fieldcomparator;\n \n import org.apache.lucene.index.AtomicReaderContext;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldCache.Floats;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.SortField;\n@@ -70,7 +71,7 @@ protected Floats getFloatValues(AtomicReaderContext context, String field) throw\n selectedValues = sortMode.select(values, dMissingValue);\n } else {\n final FixedBitSet rootDocs = nested.rootDocs(context);\n- final FixedBitSet innerDocs = nested.innerDocs(context);\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return new Floats() {", "filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/FloatValuesComparatorSource.java", "status": "modified" }, { "diff": "@@ -21,6 +21,7 @@\n import org.apache.lucene.index.AtomicReaderContext;\n import org.apache.lucene.index.NumericDocValues;\n import org.apache.lucene.index.SortedNumericDocValues;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldCache.Longs;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.SortField;\n@@ -70,7 +71,7 @@ protected Longs getLongValues(AtomicReaderContext context, String field) throws\n selectedValues = sortMode.select(values, dMissingValue);\n } else {\n final FixedBitSet rootDocs = nested.rootDocs(context);\n- final FixedBitSet innerDocs = nested.innerDocs(context);\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = sortMode.select(values, dMissingValue, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return new Longs() {", "filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/LongValuesComparatorSource.java", "status": "modified" }, { "diff": "@@ -112,6 +112,7 @@ public class PercolateContext extends SearchContext {\n private SearchContextAggregations aggregations;\n private QuerySearchResult querySearchResult;\n private Sort sort;\n+ private boolean executeDocsInOrder;\n \n public PercolateContext(PercolateShardRequest request, SearchShardTarget searchShardTarget, IndexShard indexShard,\n IndexService indexService, CacheRecycler cacheRecycler, PageCacheRecycler pageCacheRecycler,\n@@ -720,4 +721,14 @@ public SearchContext useSlowScroll(boolean useSlowScroll) {\n public Counter timeEstimateCounter() {\n throw new UnsupportedOperationException();\n }\n+\n+ @Override\n+ public boolean requireDocsCollectedInOrder() {\n+ return executeDocsInOrder;\n+ }\n+\n+ @Override\n+ public void setRequireDocsCollectedInOrder() {\n+ this.executeDocsInOrder = true;\n+ }\n }", "filename": "src/main/java/org/elasticsearch/percolator/PercolateContext.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n package org.elasticsearch.search;\n \n import org.apache.lucene.index.*;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.BytesRefBuilder;\n@@ -31,6 +33,7 @@\n import 
org.elasticsearch.index.fielddata.SortedBinaryDocValues;\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n \n+import java.io.IOException;\n import java.util.Locale;\n \n /**\n@@ -438,38 +441,65 @@ public long get(int docID) {\n *\n * NOTE: Calling the returned instance on docs that are not root docs is illegal\n */\n- public NumericDocValues select(final SortedNumericDocValues values, final long missingValue, final FixedBitSet rootDocs, final FixedBitSet innerDocs, int maxDoc) {\n- if (rootDocs == null || innerDocs == null) {\n+ public NumericDocValues select(final SortedNumericDocValues values, final long missingValue, final FixedBitSet rootDocs, final DocIdSet innerDocSet, int maxDoc) throws IOException {\n+ if (rootDocs == null || innerDocSet == null) {\n return select(DocValues.emptySortedNumeric(maxDoc), missingValue);\n }\n+ final DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs == null) {\n+ return select(DocValues.emptySortedNumeric(maxDoc), missingValue);\n+ }\n+\n return new NumericDocValues() {\n \n+ int lastSeenRootDoc = -1;\n+ long lastEmittedValue;\n+\n @Override\n public long get(int rootDoc) {\n assert rootDocs.get(rootDoc) : \"can only sort root documents\";\n+ assert rootDoc >= lastSeenRootDoc : \"can only evaluate current and upcoming root docs\";\n if (rootDoc == 0) {\n return missingValue;\n }\n \n- final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n- final int firstNestedDoc = innerDocs.nextSetBit(prevRootDoc + 1);\n+ // If via compareBottom this method has previously invoked for the same rootDoc then we need to use the\n+ // last seen value as innerDocs can't re-iterate over nested child docs it already emitted,\n+ // because DocIdSetIterator can only advance forwards.\n+ if (rootDoc == lastSeenRootDoc) {\n+ return lastEmittedValue;\n+ }\n+ try {\n+ final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n+ final int firstNestedDoc;\n+ if (innerDocs.docID() > prevRootDoc) {\n+ firstNestedDoc = innerDocs.docID();\n+ } else {\n+ firstNestedDoc = innerDocs.advance(prevRootDoc + 1);\n+ }\n \n- long accumulated = startLong();\n- int numValues = 0;\n+ long accumulated = startLong();\n+ int numValues = 0;\n \n- for (int doc = firstNestedDoc; doc != -1 && doc < rootDoc; doc = innerDocs.nextSetBit(doc + 1)) {\n- values.setDocument(doc);\n- final int count = values.count();\n- for (int i = 0; i < count; ++i) {\n- final long value = values.valueAt(i);\n- accumulated = apply(accumulated, value);\n+ for (int doc = firstNestedDoc; doc < rootDoc; doc = innerDocs.nextDoc()) {\n+ values.setDocument(doc);\n+ final int count = values.count();\n+ for (int i = 0; i < count; ++i) {\n+ final long value = values.valueAt(i);\n+ accumulated = apply(accumulated, value);\n+ }\n+ numValues += count;\n+ }\n+ lastSeenRootDoc = rootDoc;\n+ if (numValues == 0) {\n+ lastEmittedValue = missingValue;\n+ } else {\n+ lastEmittedValue = reduce(accumulated, numValues);\n }\n- numValues += count;\n+ return lastEmittedValue;\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n }\n-\n- return numValues == 0\n- ? 
missingValue\n- : reduce(accumulated, numValues);\n }\n };\n }\n@@ -530,38 +560,64 @@ public double get(int docID) {\n *\n * NOTE: Calling the returned instance on docs that are not root docs is illegal\n */\n- public NumericDoubleValues select(final SortedNumericDoubleValues values, final double missingValue, final FixedBitSet rootDocs, final FixedBitSet innerDocs, int maxDoc) {\n- if (rootDocs == null || innerDocs == null) {\n+ public NumericDoubleValues select(final SortedNumericDoubleValues values, final double missingValue, final FixedBitSet rootDocs, DocIdSet innerDocSet, int maxDoc) throws IOException {\n+ if (rootDocs == null || innerDocSet == null) {\n return select(FieldData.emptySortedNumericDoubles(maxDoc), missingValue);\n }\n+\n+ final DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs == null) {\n+ return select(FieldData.emptySortedNumericDoubles(maxDoc), missingValue);\n+ }\n+\n return new NumericDoubleValues() {\n \n+ int lastSeenRootDoc = -1;\n+ double lastEmittedValue;\n+\n @Override\n public double get(int rootDoc) {\n assert rootDocs.get(rootDoc) : \"can only sort root documents\";\n+ assert rootDoc >= lastSeenRootDoc : \"can only evaluate current and upcoming root docs\";\n if (rootDoc == 0) {\n return missingValue;\n }\n \n- final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n- final int firstNestedDoc = innerDocs.nextSetBit(prevRootDoc + 1);\n+ if (rootDoc == lastSeenRootDoc) {\n+ return lastEmittedValue;\n+ }\n+ try {\n+ final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n+ final int firstNestedDoc;\n+ if (innerDocs.docID() > prevRootDoc) {\n+ firstNestedDoc = innerDocs.docID();\n+ } else {\n+ firstNestedDoc = innerDocs.advance(prevRootDoc + 1);\n+ }\n+\n+ double accumulated = startDouble();\n+ int numValues = 0;\n \n- double accumulated = startDouble();\n- int numValues = 0;\n+ for (int doc = firstNestedDoc; doc > prevRootDoc && doc < rootDoc; doc = innerDocs.nextDoc()) {\n+ values.setDocument(doc);\n+ final int count = values.count();\n+ for (int i = 0; i < count; ++i) {\n+ final double value = values.valueAt(i);\n+ accumulated = apply(accumulated, value);\n+ }\n+ numValues += count;\n+ }\n \n- for (int doc = firstNestedDoc; doc != -1 && doc < rootDoc; doc = innerDocs.nextSetBit(doc + 1)) {\n- values.setDocument(doc);\n- final int count = values.count();\n- for (int i = 0; i < count; ++i) {\n- final double value = values.valueAt(i);\n- accumulated = apply(accumulated, value);\n+ lastSeenRootDoc = rootDoc;\n+ if (numValues == 0) {\n+ lastEmittedValue = missingValue;\n+ } else {\n+ lastEmittedValue = reduce(accumulated, numValues);\n }\n- numValues += count;\n+ return lastEmittedValue;\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n }\n-\n- return numValues == 0\n- ? 
missingValue\n- : reduce(accumulated, numValues);\n }\n };\n }\n@@ -613,10 +669,16 @@ public BytesRef get(int docID) {\n *\n * NOTE: Calling the returned instance on docs that are not root docs is illegal\n */\n- public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef missingValue, final FixedBitSet rootDocs, final FixedBitSet innerDocs, int maxDoc) {\n- if (rootDocs == null || innerDocs == null) {\n+ public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef missingValue, final FixedBitSet rootDocs, DocIdSet innerDocSet, int maxDoc) throws IOException {\n+ if (rootDocs == null || innerDocSet == null) {\n+ return select(FieldData.emptySortedBinary(maxDoc), missingValue);\n+ }\n+\n+ final DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs == null) {\n return select(FieldData.emptySortedBinary(maxDoc), missingValue);\n }\n+\n final BinaryDocValues selectedValues = select(values, new BytesRef());\n final Bits docsWithValue;\n if (FieldData.unwrapSingleton(values) != null) {\n@@ -628,35 +690,58 @@ public BinaryDocValues select(final SortedBinaryDocValues values, final BytesRef\n \n final BytesRefBuilder spare = new BytesRefBuilder();\n \n+ int lastSeenRootDoc = -1;\n+ BytesRef lastEmittedValue;\n+\n @Override\n public BytesRef get(int rootDoc) {\n assert rootDocs.get(rootDoc) : \"can only sort root documents\";\n+ assert rootDoc >= lastSeenRootDoc : \"can only evaluate current and upcoming root docs\";\n if (rootDoc == 0) {\n return missingValue;\n }\n \n- final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n- final int firstNestedDoc = innerDocs.nextSetBit(prevRootDoc + 1);\n-\n- BytesRefBuilder accumulated = null;\n-\n- for (int doc = firstNestedDoc; doc != -1 && doc < rootDoc; doc = innerDocs.nextSetBit(doc + 1)) {\n- values.setDocument(doc);\n- final BytesRef innerValue = selectedValues.get(doc);\n- if (innerValue.length > 0 || docsWithValue == null || docsWithValue.get(doc)) {\n- if (accumulated == null) {\n- spare.copyBytes(innerValue);\n- accumulated = spare;\n- } else {\n- final BytesRef applied = apply(accumulated.get(), innerValue);\n- if (applied == innerValue) {\n- accumulated.copyBytes(innerValue);\n+ if (rootDoc == lastSeenRootDoc) {\n+ return lastEmittedValue;\n+ }\n+\n+ try {\n+ final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n+ final int firstNestedDoc;\n+ if (innerDocs.docID() > prevRootDoc) {\n+ firstNestedDoc = innerDocs.docID();\n+ } else {\n+ firstNestedDoc = innerDocs.advance(prevRootDoc + 1);\n+ }\n+\n+ BytesRefBuilder accumulated = null;\n+\n+ for (int doc = firstNestedDoc; doc > prevRootDoc && doc < rootDoc; doc = innerDocs.nextDoc()) {\n+ values.setDocument(doc);\n+ final BytesRef innerValue = selectedValues.get(doc);\n+ if (innerValue.length > 0 || docsWithValue == null || docsWithValue.get(doc)) {\n+ if (accumulated == null) {\n+ spare.copyBytes(innerValue);\n+ accumulated = spare;\n+ } else {\n+ final BytesRef applied = apply(accumulated.get(), innerValue);\n+ if (applied == innerValue) {\n+ accumulated.copyBytes(innerValue);\n+ }\n }\n }\n }\n- }\n \n- return accumulated == null ? 
missingValue : accumulated.get();\n+ lastSeenRootDoc = rootDoc;\n+ if (accumulated == null) {\n+ lastEmittedValue = missingValue;\n+ } else {\n+ lastEmittedValue = accumulated.get();\n+ }\n+ return lastEmittedValue;\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n+ }\n }\n };\n }\n@@ -706,13 +791,22 @@ public int getValueCount() {\n *\n * NOTE: Calling the returned instance on docs that are not root docs is illegal\n */\n- public SortedDocValues select(final RandomAccessOrds values, final FixedBitSet rootDocs, final FixedBitSet innerDocs) {\n- if (rootDocs == null || innerDocs == null) {\n- return select((RandomAccessOrds) DocValues.emptySortedSet());\n+ public SortedDocValues select(final RandomAccessOrds values, final FixedBitSet rootDocs, DocIdSet innerDocSet) throws IOException {\n+ if (rootDocs == null || innerDocSet == null) {\n+ return select(DocValues.emptySortedSet());\n }\n+\n+ final DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs == null) {\n+ return select(DocValues.emptySortedSet());\n+ }\n+\n final SortedDocValues selectedValues = select(values);\n return new SortedDocValues() {\n \n+ int lastSeenRootDoc = -1;\n+ int lastEmittedOrd;\n+\n @Override\n public BytesRef lookupOrd(int ord) {\n return selectedValues.lookupOrd(ord);\n@@ -726,26 +820,41 @@ public int getValueCount() {\n @Override\n public int getOrd(int rootDoc) {\n assert rootDocs.get(rootDoc) : \"can only sort root documents\";\n+ assert rootDoc >= lastSeenRootDoc : \"can only evaluate current and upcoming root docs\";\n if (rootDoc == 0) {\n return -1;\n }\n \n- final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n- final int firstNestedDoc = innerDocs.nextSetBit(prevRootDoc + 1);\n- int ord = -1;\n-\n- for (int doc = firstNestedDoc; doc != -1 && doc < rootDoc; doc = innerDocs.nextSetBit(doc + 1)) {\n- final int innerOrd = selectedValues.getOrd(doc);\n- if (innerOrd != -1) {\n- if (ord == -1) {\n- ord = innerOrd;\n- } else {\n- ord = applyOrd(ord, innerOrd);\n+ if (rootDoc == lastSeenRootDoc) {\n+ return lastEmittedOrd;\n+ }\n+\n+ try {\n+ final int prevRootDoc = rootDocs.prevSetBit(rootDoc - 1);\n+ final int firstNestedDoc;\n+ if (innerDocs.docID() > prevRootDoc) {\n+ firstNestedDoc = innerDocs.docID();\n+ } else {\n+ firstNestedDoc = innerDocs.advance(prevRootDoc + 1);\n+ }\n+ int ord = -1;\n+\n+ for (int doc = firstNestedDoc; doc > prevRootDoc && doc < rootDoc; doc = innerDocs.nextDoc()) {\n+ final int innerOrd = selectedValues.getOrd(doc);\n+ if (innerOrd != -1) {\n+ if (ord == -1) {\n+ ord = innerOrd;\n+ } else {\n+ ord = applyOrd(ord, innerOrd);\n+ }\n }\n }\n- }\n \n- return ord;\n+ lastSeenRootDoc = rootDoc;\n+ return lastEmittedOrd = ord;\n+ } catch (IOException e) {\n+ throw new RuntimeException(e);\n+ }\n }\n };\n }", "filename": "src/main/java/org/elasticsearch/search/MultiValueMode.java", "status": "modified" }, { "diff": "@@ -638,4 +638,14 @@ public SearchContext useSlowScroll(boolean useSlowScroll) {\n public Counter timeEstimateCounter() {\n throw new UnsupportedOperationException(\"Not supported\");\n }\n+\n+ @Override\n+ public boolean requireDocsCollectedInOrder() {\n+ return context.requireDocsCollectedInOrder();\n+ }\n+\n+ @Override\n+ public void setRequireDocsCollectedInOrder() {\n+ throw new UnsupportedOperationException(\"Not supported\");\n+ }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/metrics/tophits/TopHitsContext.java", "status": "modified" }, { "diff": "@@ -170,6 +170,10 @@ public void 
search(List<AtomicReaderContext> leaves, Weight weight, Collector co\n if (searchContext.minimumScore() != null) {\n collector = new MinimumScoreCollector(collector, searchContext.minimumScore());\n }\n+\n+ if (searchContext.requireDocsCollectedInOrder() && collector.acceptsDocsOutOfOrder()) {\n+ collector = enforceDocsInOrder(collector);\n+ }\n }\n \n // we only compute the doc id set once since within a context, we execute the same query always...\n@@ -221,4 +225,28 @@ public Explanation explain(Query query, int doc) throws IOException {\n searchContext.clearReleasables(Lifetime.COLLECTION);\n }\n }\n+\n+ private static Collector enforceDocsInOrder(final Collector collector) {\n+ return new Collector() {\n+ @Override\n+ public void setScorer(Scorer scorer) throws IOException {\n+ collector.setScorer(scorer);\n+ }\n+\n+ @Override\n+ public void collect(int doc) throws IOException {\n+ collector.collect(doc);\n+ }\n+\n+ @Override\n+ public void setNextReader(AtomicReaderContext context) throws IOException {\n+ collector.setNextReader(context);\n+ }\n+\n+ @Override\n+ public boolean acceptsDocsOutOfOrder() {\n+ return false;\n+ }\n+ };\n+ }\n }", "filename": "src/main/java/org/elasticsearch/search/internal/ContextIndexSearcher.java", "status": "modified" }, { "diff": "@@ -185,6 +185,8 @@ public class DefaultSearchContext extends SearchContext {\n \n private volatile boolean useSlowScroll;\n \n+ private boolean executeDocsInOrder;\n+\n public DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget,\n Engine.Searcher engineSearcher, IndexService indexService, IndexShard indexShard,\n ScriptService scriptService, CacheRecycler cacheRecycler, PageCacheRecycler pageCacheRecycler,\n@@ -738,4 +740,14 @@ public DefaultSearchContext useSlowScroll(boolean useSlowScroll) {\n public Counter timeEstimateCounter() {\n return timeEstimateCounter;\n }\n+\n+ @Override\n+ public boolean requireDocsCollectedInOrder() {\n+ return executeDocsInOrder;\n+ }\n+\n+ @Override\n+ public void setRequireDocsCollectedInOrder() {\n+ this.executeDocsInOrder = true;\n+ }\n }", "filename": "src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java", "status": "modified" }, { "diff": "@@ -368,6 +368,10 @@ public void clearReleasables(Lifetime lifetime) {\n \n public abstract Counter timeEstimateCounter();\n \n+ public abstract boolean requireDocsCollectedInOrder();\n+\n+ public abstract void setRequireDocsCollectedInOrder();\n+\n /**\n * The life time of an object that is used during search execution.\n */", "filename": "src/main/java/org/elasticsearch/search/internal/SearchContext.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.sort;\n \n import org.apache.lucene.index.AtomicReaderContext;\n+import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.FieldCache.Doubles;\n import org.apache.lucene.search.FieldComparator;\n import org.apache.lucene.search.Filter;\n@@ -157,11 +158,11 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n final Nested nested;\n if (objectMapper != null && objectMapper.nested().isNested()) {\n FixedBitSetFilter rootDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE);\n- FixedBitSetFilter innerDocumentsFilter;\n+ Filter innerDocumentsFilter;\n if (nestedFilter != null) {\n- innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(nestedFilter);\n+ innerDocumentsFilter = 
context.filterCache().cache(nestedFilter);\n } else {\n- innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter());\n+ innerDocumentsFilter = context.filterCache().cache(objectMapper.nestedTypeFilter());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {\n@@ -187,7 +188,7 @@ protected Doubles getDoubleValues(AtomicReaderContext context, String field) thr\n selectedValues = finalSortMode.select(distanceValues, Double.MAX_VALUE);\n } else {\n final FixedBitSet rootDocs = nested.rootDocs(context);\n- final FixedBitSet innerDocs = nested.innerDocs(context);\n+ final DocIdSet innerDocs = nested.innerDocs(context);\n selectedValues = finalSortMode.select(distanceValues, Double.MAX_VALUE, rootDocs, innerDocs, context.reader().maxDoc());\n }\n return new Doubles() {", "filename": "src/main/java/org/elasticsearch/search/sort/GeoDistanceSortParser.java", "status": "modified" }, { "diff": "@@ -138,11 +138,11 @@ public SortField parse(XContentParser parser, SearchContext context) throws Exce\n }\n \n FixedBitSetFilter rootDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE);\n- FixedBitSetFilter innerDocumentsFilter;\n+ Filter innerDocumentsFilter;\n if (nestedFilter != null) {\n- innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(nestedFilter);\n+ innerDocumentsFilter = context.filterCache().cache(nestedFilter);\n } else {\n- innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter());\n+ innerDocumentsFilter = context.filterCache().cache(objectMapper.nestedTypeFilter());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {", "filename": "src/main/java/org/elasticsearch/search/sort/ScriptSortParser.java", "status": "modified" }, { "diff": "@@ -248,12 +248,13 @@ private void addSortField(SearchContext context, List<SortField> sortFields, Str\n }\n final Nested nested;\n if (objectMapper != null && objectMapper.nested().isNested()) {\n+ context.setRequireDocsCollectedInOrder();\n FixedBitSetFilter rootDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(NonNestedDocsFilter.INSTANCE);\n- FixedBitSetFilter innerDocumentsFilter;\n+ Filter innerDocumentsFilter;\n if (nestedFilter != null) {\n- innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(nestedFilter);\n+ innerDocumentsFilter = context.filterCache().cache(nestedFilter);\n } else {\n- innerDocumentsFilter = context.fixedBitSetFilterCache().getFixedBitSetFilter(objectMapper.nestedTypeFilter());\n+ innerDocumentsFilter = context.filterCache().cache(objectMapper.nestedTypeFilter());\n }\n nested = new Nested(rootDocumentsFilter, innerDocumentsFilter);\n } else {", "filename": "src/main/java/org/elasticsearch/search/sort/SortParseElement.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n \n import com.carrotsearch.randomizedtesting.generators.RandomStrings;\n import org.apache.lucene.index.*;\n+import org.apache.lucene.search.DocIdSet;\n+import org.apache.lucene.search.DocIdSetIterator;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.FixedBitSet;\n import org.elasticsearch.index.fielddata.FieldData;\n@@ -29,6 +31,7 @@\n import org.elasticsearch.index.fielddata.SortedNumericDoubleValues;\n import org.elasticsearch.test.ElasticsearchTestCase;\n \n+import java.io.IOException;\n import java.util.Arrays;\n \n public class MultiValueModeTests extends 
ElasticsearchTestCase {\n@@ -55,7 +58,7 @@ private static FixedBitSet randomInnerDocs(FixedBitSet rootDocs) {\n return innerDocs;\n }\n \n- public void testSingleValuedLongs() {\n+ public void testSingleValuedLongs() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final long[] array = new long[numDocs];\n final FixedBitSet docsWithValue = randomBoolean() ? null : new FixedBitSet(numDocs);\n@@ -82,7 +85,7 @@ public long get(int docID) {\n verify(multiValues, numDocs, rootDocs, innerDocs);\n }\n \n- public void testMultiValuedLongs() {\n+ public void testMultiValuedLongs() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final long[][] array = new long[numDocs][];\n for (int i = 0; i < numDocs; ++i) {\n@@ -142,20 +145,24 @@ private void verify(SortedNumericDocValues values, int maxDoc) {\n }\n }\n \n- private void verify(SortedNumericDocValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) {\n+ private void verify(SortedNumericDocValues values, int maxDoc, FixedBitSet rootDocs, DocIdSet innerDocSet) throws IOException {\n for (long missingValue : new long[] { 0, randomLong() }) {\n for (MultiValueMode mode : MultiValueMode.values()) {\n- final NumericDocValues selected = mode.select(values, missingValue, rootDocs, innerDocs, maxDoc);\n+ final NumericDocValues selected = mode.select(values, missingValue, rootDocs, innerDocSet, maxDoc);\n int prevRoot = -1;\n for (int root = rootDocs.nextSetBit(0); root != -1; root = root + 1 < maxDoc ? rootDocs.nextSetBit(root + 1) : -1) {\n final long actual = selected.get(root);\n long expected = mode.startLong();\n int numValues = 0;\n- for (int child = innerDocs.nextSetBit(prevRoot + 1); child != -1 && child < root; child = innerDocs.nextSetBit(child + 1)) {\n- values.setDocument(child);\n- for (int j = 0; j < values.count(); ++j) {\n- expected = mode.apply(expected, values.valueAt(j));\n- ++numValues;\n+\n+ DocIdSetIterator innerDocs = innerDocSet.iterator();\n+ if (innerDocs != null) {\n+ for (int child = innerDocs.advance(prevRoot + 1); child != -1 && child < root; child = innerDocs.advance(child + 1)) {\n+ values.setDocument(child);\n+ for (int j = 0; j < values.count(); ++j) {\n+ expected = mode.apply(expected, values.valueAt(j));\n+ ++numValues;\n+ }\n }\n }\n if (numValues == 0) {\n@@ -172,7 +179,7 @@ private void verify(SortedNumericDocValues values, int maxDoc, FixedBitSet rootD\n }\n }\n \n- public void testSingleValuedDoubles() {\n+ public void testSingleValuedDoubles() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final double[] array = new double[numDocs];\n final FixedBitSet docsWithValue = randomBoolean() ? 
null : new FixedBitSet(numDocs);\n@@ -199,7 +206,7 @@ public double get(int docID) {\n verify(multiValues, numDocs, rootDocs, innerDocs);\n }\n \n- public void testMultiValuedDoubles() {\n+ public void testMultiValuedDoubles() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final double[][] array = new double[numDocs][];\n for (int i = 0; i < numDocs; ++i) {\n@@ -259,7 +266,7 @@ private void verify(SortedNumericDoubleValues values, int maxDoc) {\n }\n }\n \n- private void verify(SortedNumericDoubleValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) {\n+ private void verify(SortedNumericDoubleValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException {\n for (long missingValue : new long[] { 0, randomLong() }) {\n for (MultiValueMode mode : MultiValueMode.values()) {\n final NumericDoubleValues selected = mode.select(values, missingValue, rootDocs, innerDocs, maxDoc);\n@@ -289,7 +296,7 @@ private void verify(SortedNumericDoubleValues values, int maxDoc, FixedBitSet ro\n }\n }\n \n- public void testSingleValuedStrings() {\n+ public void testSingleValuedStrings() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final BytesRef[] array = new BytesRef[numDocs];\n final FixedBitSet docsWithValue = randomBoolean() ? null : new FixedBitSet(numDocs);\n@@ -319,7 +326,7 @@ public BytesRef get(int docID) {\n verify(multiValues, numDocs, rootDocs, innerDocs);\n }\n \n- public void testMultiValuedStrings() {\n+ public void testMultiValuedStrings() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final BytesRef[][] array = new BytesRef[numDocs][];\n for (int i = 0; i < numDocs; ++i) {\n@@ -384,7 +391,7 @@ private void verify(SortedBinaryDocValues values, int maxDoc) {\n }\n }\n \n- private void verify(SortedBinaryDocValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) {\n+ private void verify(SortedBinaryDocValues values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException {\n for (BytesRef missingValue : new BytesRef[] { new BytesRef(), new BytesRef(RandomStrings.randomAsciiOfLength(getRandom(), 8)) }) {\n for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX}) {\n final BinaryDocValues selected = mode.select(values, missingValue, rootDocs, innerDocs, maxDoc);\n@@ -416,7 +423,7 @@ private void verify(SortedBinaryDocValues values, int maxDoc, FixedBitSet rootDo\n }\n \n \n- public void testSingleValuedOrds() {\n+ public void testSingleValuedOrds() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final int[] array = new int[numDocs];\n for (int i = 0; i < array.length; ++i) {\n@@ -449,7 +456,7 @@ public int getValueCount() {\n verify(multiValues, numDocs, rootDocs, innerDocs);\n }\n \n- public void testMultiValuedOrds() {\n+ public void testMultiValuedOrds() throws Exception {\n final int numDocs = scaledRandomIntBetween(1, 100);\n final long[][] array = new long[numDocs][];\n for (int i = 0; i < numDocs; ++i) {\n@@ -518,7 +525,7 @@ private void verify(RandomAccessOrds values, int maxDoc) {\n }\n }\n \n- private void verify(RandomAccessOrds values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) {\n+ private void verify(RandomAccessOrds values, int maxDoc, FixedBitSet rootDocs, FixedBitSet innerDocs) throws IOException {\n for (MultiValueMode mode : new MultiValueMode[] {MultiValueMode.MIN, MultiValueMode.MAX}) {\n final SortedDocValues selected = mode.select(values, 
rootDocs, innerDocs);\n int prevRoot = -1;", "filename": "src/test/java/org/elasticsearch/search/MultiValueModeTests.java", "status": "modified" }, { "diff": "@@ -629,4 +629,14 @@ public SearchContext useSlowScroll(boolean useSlowScroll) {\n public Counter timeEstimateCounter() {\n throw new UnsupportedOperationException();\n }\n+\n+ @Override\n+ public boolean requireDocsCollectedInOrder() {\n+ return false;\n+ }\n+\n+ @Override\n+ public void setRequireDocsCollectedInOrder() {\n+ throw new UnsupportedOperationException();\n+ }\n }", "filename": "src/test/java/org/elasticsearch/test/TestSearchContext.java", "status": "modified" } ] }
{ "body": "Using Elasticsearch 1.1.1. I'm seeing geo_polygon behave oddly when the input polygon cross the date line. From a quick look at GeoPolygonFilter.GeoPolygonDocSet.pointInPolygon it doesn't seem that this is explicitly handled. \n\nThe reproduce this create an index/mapping as:\n\nPOST /geo\n\n``` json\n{ \"mappings\": { \"docs\": { \"properties\": { \"p\": { \"type\": \"geo_point\" } } } } }\n```\n\nUpload a document:\n\nPUT /geo/docs/1\n\n``` json\n{ \"p\": { \"lat\": 40, \"lon\": 179 } }\n```\n\nSearch with a polygon that's a box around the uploaded point and that crosses the date line:\n\nPOST /geo/docs/_search\n\n``` json\n{\n \"filter\": { \"geo_polygon\": { \"p\": { \"points\": [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -179 },\n { \"lat\": 42, \"lon\": 178 }\n ] } } }\n}\n```\n\nES returns 0 results. If I use a polygon that stays to the west of the date line I do get results:\n\n``` json\n{\n \"filter\": { \"geo_polygon\": { \"p\": { \"points\": [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 179.5 },\n { \"lat\": 42, \"lon\": 179.5 },\n { \"lat\": 42, \"lon\": 178 }\n ] } } }\n}\n```\n\nAlso, if I use a bounding box query with the same coordinates as the initial polygon, it does work:\n\n``` json\n{\n \"filter\": { \"geo_bounding_box\": { \"p\": \n { \"top_left\": { \"lat\": 42, \"lon\": 178 },\n \"bottom_right\": { \"lat\": 39, \"lon\": -179 }\n }\n } }\n}\n```\n\nIt seems that this code needs to either split the check into east and west checks or normalize the input values. Am I missing something?\n", "comments": [ { "body": "This is actually working as expected. Your first query will resolve as shown in the following GeoJSON gist which will not contain your document:\nhttps://gist.github.com/anonymous/82b50b74a7b6d170bfc6\n\nTo create the desired results you specified you would need to split the polygon in to two polygons, one to the left of the date line and the other to the right. This can be done with the following query:\n\n```\ncurl -XPOST 'localhost:9200/geo/_search?pretty' -d '{\n \"query\" : {\n \"filtered\" : {\n \"query\" : {\n \"match_all\" : {}\n },\n \"filter\" : {\n \"or\" : [\n {\n \"geo_polygon\" : {\n \"p\" : {\n \"points\" : [\n { \"lat\": 42, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 178 },\n { \"lat\": 39, \"lon\": 180 },\n { \"lat\": 42, \"lon\": 180 },\n { \"lat\": 42, \"lon\": 178 }\n ]\n }\n }\n },\n {\n \"geo_polygon\" : {\n \"p\" : {\n \"points\" : [\n { \"lat\": 42, \"lon\": -180 },\n { \"lat\": 39, \"lon\": -180 },\n { \"lat\": 39, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -179 },\n { \"lat\": 42, \"lon\": -180 }\n ]\n }\n }\n }\n ]\n }\n }\n }\n}'\n```\n\nThe bounding box is a little different since by specifying which coordinate is top_left and which is top_right you are fixing the box to overlap the date line.\n", "created_at": "2014-05-27T15:58:46Z" }, { "body": "For me, I'd expect the line segment between two points to lie in the same direction as the great circle arc. Splitting an arbitrary query into sub-polygons isn't entirely straightforward, because it could intersect longitude 180 multiple times.\n\nI have a rough draft of a commit that fixes this issue by shifting the polygon in GeoPolygonFilter (and all points passed in to pointInPolygon) so that it lies completely on one side of longitude 180. It is low-overhead and has basically no effect on the normal case. 
The only constraint is that the polygon can't span more than 360 degrees in longitude.\n\nDoes this sound reasonable, and is it worth submitting a PR?\n", "created_at": "2014-09-17T21:44:43Z" }, { "body": "@colings86 could this be solved with a `left/right` parameter or something similar?\n", "created_at": "2014-09-25T18:21:19Z" }, { "body": "@nknize is this fixed by https://github.com/elasticsearch/elasticsearch/pull/8521 ?\n", "created_at": "2014-11-28T09:49:36Z" }, { "body": "It does not. This is the infamous \"ambiguous polygon\" problem that occurs when treating a spherical coordinate system as a cartesian plane. I opened a discussion and feature branch to address this in #8672 \n\ntldr: GeoJSON doesn't specify order, but OGC does.\n\nFeature fix: Default behavior = For GeoJSON poly's specified in OGC order (shell: ccw, holes: cw) ES Ring logic will correctly transform and split polys across the dateline (e.g., see https://gist.github.com/nknize/d122b243dc63dcba8474). For GeoJSON poly's provided in the opposite order original behavior will occur (e.g., @colings86 example https://gist.github.com/anonymous/82b50b74a7b6d170bfc6). \n\nAdditionally, I like @clintongormley suggestion of adding an optional left/right parameter. Its an easy fix letting user's clearly specify intent.\n", "created_at": "2014-12-01T14:32:21Z" }, { "body": "This is now addressed in PR #8762\n", "created_at": "2014-12-03T14:00:29Z" }, { "body": "Optional left/right parameter added in PR #8978 \n", "created_at": "2014-12-16T18:41:49Z" }, { "body": "Merged in edd33c0\n", "created_at": "2014-12-29T22:11:30Z" }, { "body": "I tried @pablocastro's example on trunk, and unfortunately the issue is still there. There might've been some confusion -- the original example refers to the geo_polygon filter, whereas @nknize's fix is for the polygon geo_shape type.\n\nShould I create a new ticket for the geo_polygon filter, or should we re-open this one?\n", "created_at": "2015-01-03T22:17:20Z" }, { "body": "Good catch @jtibshirani! We'll go ahead and reopen this ticket since its a separate issue.\n", "created_at": "2015-01-04T04:41:55Z" }, { "body": "Reopening due to #9462 \n", "created_at": "2015-01-28T15:06:48Z" }, { "body": "The search hits of really huge polygon (elasticsearch 1.4.3)\n\n``` javascript\n{\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [\n -70.4873046875,\n 79.9818262344106\n ],\n [\n -70.4873046875,\n -28.07230647927298\n ],\n [\n -103.3583984375,\n -28.07230647927298\n ],\n [\n -103.3583984375,\n 79.9818262344106\n ],\n [\n -70.4873046875,\n 79.9818262344106\n ]\n ]\n ],\n \"orientation\": \"ccw\"\n }\n }\n }\n }\n }\n }\n}\n```\n\ndoesn't include any points inside that polygon even with orientation option.\nIs it related with this issue?\n", "created_at": "2015-02-12T17:36:56Z" }, { "body": "Its related. For now, if you have an ambiguous poly that crosses the pole you'll need to manually split it into 2 separate explicit polys and put inside a `MultiPolygon` Depending on the complexity of the poly computing the pole intersections can be non-trivial. The in-work patch will do this for you.\n\nA separate issue is related to the distance_error_pct parameter. If not specified, larger filters will have reduced accuracy. 
Though this seems unrelated to your GeoJSON\n", "created_at": "2015-02-12T22:49:53Z" }, { "body": "relates to #26286", "created_at": "2018-03-26T16:59:07Z" }, { "body": "closing in favor of #26286 since its an old issue", "created_at": "2018-03-26T17:00:11Z" } ], "number": 5968, "title": "geo_polygon not handling polygons that cross the date line properly" }
{ "body": "By default GeoPolygonFilter normalizes GeoPoints to a [-180:180] lon, [-90:90] lat coordinate boundary. This requires the GeoPolygonFilter be split into east/west, north/south regions to properly return points collection that wrap the dateline and poles. To keep queries fast, this simple fix converts the GeoPoints for both the target polygon and candidate points to a [0:360] lon, [0:180] lat coordinate range. This way the user (or the filter logic) doesn't have to split the filter in two parts.\n\ncloses #5968\n", "number": 9171, "review_comments": [ { "body": "nit: space between `<` and `0`\n", "created_at": "2015-01-07T21:34:49Z" }, { "body": "While it doesn't look like the `normalize` setting is currently documented for geo poly filter, it probably should be? Also, changing this here means the default for the setting would be different for geo poly vs geo point...that seems like it might be confusing for users?\n", "created_at": "2015-01-07T21:37:02Z" }, { "body": "Instead of reapplying the logic for the previous element on every iteration, you could initialize `lon0` and `lat0` to points[0] before the loop, and then add `lon0 = lon1; lat0 = lat1;` at the end of the loop?\n", "created_at": "2015-01-07T21:42:49Z" }, { "body": "While I realize this is the same logic that existed before, it would be nice to add a short comment explaining the flip/flop logic here for determining whether the point is ultimately in the polygon (it took me a bit to think through).\n", "created_at": "2015-01-07T22:05:49Z" }, { "body": "Actually thinking about this more, I am not confident of my understanding of how this is working (especially given the translation of the non-quadrant 1 points, so that the polygon is no longer enclosed). So a comment here would definitely be appreciated.\n", "created_at": "2015-01-07T23:28:51Z" }, { "body": "I agree. `normalize` should be documented. The reason behind changing defaults to `false` is this commit changes `pointInPolygon()` logic to translate the vertex coordinates of the search poly and the candidate point to a great-circle/sphere coordinate system. `normalize()` translates to standard lat/lon coordinate system. So we have a situation for this filter where a call to `normalize()` induces unnecessary computation since everything will be translated back to great-circle anyway.\n", "created_at": "2015-01-08T18:13:54Z" } ], "title": "GeoPolygonFilter not properly handling dateline and pole crossing" }
{ "commits": [ { "message": "[GEO] GeoPolygonFilter not properly handling dateline and pole crossing\n\nBy default GeoPolygonFilter normalizes GeoPoints to a [-180:180] lon, [-90:90] lat coordinate boundary. This requires the GeoPolygonFilter be split into east/west, north/south regions to properly return points collection that wrap the dateline and poles. To keep queries fast, this simple fix converts the GeoPoints for both the target polygon and candidate points to a [0:360] lon, [0:180] lat coordinate range. This way the user (or the filter logic) doesn't have to split the filter in two parts.\n\ncloses #5968" }, { "message": "Updating code formatting." }, { "message": "Updating variable assignments and adding comments to pointInPolygon." } ], "files": [ { "diff": "@@ -76,8 +76,8 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n \n List<GeoPoint> shell = Lists.newArrayList();\n \n- boolean normalizeLon = true;\n- boolean normalizeLat = true;\n+ boolean normalizeLon = false;\n+ boolean normalizeLat = false;\n \n String filterName = null;\n String currentFieldName = null;", "filename": "src/main/java/org/elasticsearch/index/query/GeoPolygonFilterParser.java", "status": "modified" }, { "diff": "@@ -93,15 +93,32 @@ protected boolean matchDoc(int doc) {\n \n private static boolean pointInPolygon(GeoPoint[] points, double lat, double lon) {\n boolean inPoly = false;\n+ double lon0 = (points[0].lon() < 0) ? points[0].lon() + 360.0 : points[0].lon();\n+ double lat0 = (points[0].lat() < 0) ? points[0].lat() + 180.0 : points[0].lat();\n+ double lat1, lon1;\n+ if (lon < 0) {\n+ lon += 360;\n+ }\n+\n+ if (lat < 0) {\n+ lat += 180;\n+ }\n \n+ // simple even-odd PIP computation\n+ // 1. Determine if point is contained in the longitudinal range\n+ // 2. Determine whether point crosses the edge by computing the latitudinal delta\n+ // between the end-point of a parallel vector (originating at the point) and the\n+ // y-component of the edge sink\n for (int i = 1; i < points.length; i++) {\n- if (points[i].lon() < lon && points[i-1].lon() >= lon\n- || points[i-1].lon() < lon && points[i].lon() >= lon) {\n- if (points[i].lat() + (lon - points[i].lon()) /\n- (points[i-1].lon() - points[i].lon()) * (points[i-1].lat() - points[i].lat()) < lat) {\n+ lon1 = (points[i].lon() < 0) ? points[i].lon() + 360.0 : points[i].lon();\n+ lat1 = (points[i].lat() < 0) ? points[i].lat() + 180.0 : points[i].lat();\n+ if (lon1 < lon && lon0 >= lon || lon0 < lon && lon1 >= lon) {\n+ if (lat1 + (lon - lon1) / (lon0 - lon1) * (lat0 - lat1) < lat) {\n inPoly = !inPoly;\n }\n }\n+ lon0 = lon1;\n+ lat0 = lat1;\n }\n return inPoly;\n }", "filename": "src/main/java/org/elasticsearch/index/search/geo/GeoPolygonFilter.java", "status": "modified" } ] }
{ "body": "I have a document in an index such that the following request returns a match, with the `matched_queries` element containing `query1`:\n\n```\ncurl -XPOST 'http://localhost:9200/index/_search' –d '\n{\n \"query\": {\n \"match\": {\n \"stuff\": {\n \"_name\": \"query1\",\n \"query\": \"blah\"\n }\n }\n }\n}\n'\n```\n\nHowever, if that query is wrapped in a wrapper query, then the name does not appear in the list of `matched_queries`:\n\n```\ncurl -XPOST 'http://localhost:9200/index/_search' –d '\n{\n \"query\": {\n \"wrapper\": {\n \"query\": \"eyJtYXRjaCI6IHsic3R1ZmYiOiB7Il9uYW1lIjogInF1ZXJ5MSIsICJxdWVyeSI6ICJibGFoIn19fQ==\"\n }\n }\n}\n'\n```\n\nI am using the java api and the `WrapperQueryBuilder`, but the effect is the same whether using the java api or the rest interface.\n\n(gist with set-up steps in case it's useful: https://gist.github.com/tstibbs/645e01c5dcdfa9d2a193)\n", "comments": [], "number": 6871, "title": "\"matched_queries\" does not include queries within a wrapper query" }
{ "body": "PR for #6871\n", "number": 9166, "review_comments": [], "title": "Make sure that named filters/ queries defined in a wrapped query/filters aren't lost" }
{ "commits": [ { "message": "Made sure that named filters and queries defined in a wrapped query and filter are not lost.\n\nCloses #6871" } ], "files": [ { "diff": "@@ -250,6 +250,10 @@ public ImmutableMap<String, Filter> copyNamedFilters() {\n return ImmutableMap.copyOf(namedFilters);\n }\n \n+ public void combineNamedFilters(QueryParseContext context) {\n+ namedFilters.putAll(context.namedFilters);\n+ }\n+\n public void addInnerHits(String name, InnerHitsContext.BaseInnerHits context) {\n SearchContext sc = SearchContext.current();\n InnerHitsContext innerHitsContext;", "filename": "src/main/java/org/elasticsearch/index/query/QueryParseContext.java", "status": "modified" }, { "diff": "@@ -62,6 +62,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n context.reset(qSourceParser);\n Filter result = context.parseInnerFilter();\n parser.nextToken();\n+ parseContext.combineNamedFilters(context);\n return result;\n }\n }", "filename": "src/main/java/org/elasticsearch/index/query/WrapperFilterParser.java", "status": "modified" }, { "diff": "@@ -62,6 +62,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n context.reset(qSourceParser);\n Query result = context.parseInnerQuery();\n parser.nextToken();\n+ parseContext.combineNamedFilters(context);\n return result;\n }\n }", "filename": "src/main/java/org/elasticsearch/index/query/WrapperQueryParser.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.search.matchedqueries;\n \n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.index.query.QueryBuilder;\n import org.elasticsearch.search.SearchHit;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.junit.Test;\n@@ -243,4 +244,25 @@ public void testMatchedWithShould() throws Exception {\n }\n }\n }\n+\n+ @Test\n+ public void testMatchedWithWrapperQuery() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen();\n+\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"content\", \"Lorem ipsum dolor sit amet\").get();\n+ refresh();\n+\n+ QueryBuilder[] queries = new QueryBuilder[]{\n+ wrapperQuery(matchQuery(\"content\", \"amet\").queryName(\"abc\").buildAsBytes().toUtf8()),\n+ constantScoreQuery(wrapperFilter(termFilter(\"content\", \"amet\").filterName(\"abc\").buildAsBytes().toUtf8()))\n+ };\n+ for (QueryBuilder query : queries) {\n+ SearchResponse searchResponse = client().prepareSearch()\n+ .setQuery(query)\n+ .get();\n+ assertHitCount(searchResponse, 1l);\n+ assertThat(searchResponse.getHits().getAt(0).getMatchedQueries()[0], equalTo(\"abc\"));\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/matchedqueries/MatchedQueriesTests.java", "status": "modified" } ] }
{ "body": "Hi,\n\ndoing an ASC sorted scroll search on a field that only exists in the mapping but not in the document yields to a non terminating scroll. It always returns the documents that don't have the field on which we sorted.\n\n```\nPOST test/test/1\n{\n \"data\":\"test\"\n}\n\nPUT test/test/_mapping\n{\n \"test\": {\n \"properties\": {\n \"data\": {\n \"type\": \"string\"\n },\n \"bad\": {\n \"type\": \"string\"\n }\n }\n }\n}\n\nPOST test/test/_search?scroll=1m\n{\n \"sort\": [\n {\n \"bad\": {\n \"missing\" : \"_last\",\n \"order\": \"asc\"\n }\n }\n ]\n}\n\nGET _search/scroll?scroll=1m&scroll_id=FILL_IN_SCROLL_ID\nGET _search/scroll?scroll=1m&scroll_id=FILL_IN_SCROLL_ID\n```\n\nBoth _search/scroll requests should not return documents because the initial search already returned the document with id 1, but they do return the document.\n\nHowever, using either a DESC sort order or \"missing\":\"_first\" makes the scroll work/terminate as expected.\n", "comments": [ { "body": "Nice catch! thanks @gokl \n", "created_at": "2015-01-05T13:55:32Z" }, { "body": "@martijnvg could you have a look at this please?\n", "created_at": "2015-01-05T13:58:32Z" }, { "body": "This is related to #9155. When sorting missing values last on a string field, we artificially replace `null` with an hypothetic maximum term so that the coordinating node knows how to compare this document to documents coming from other shards. But we forgot to do the opposite operation in `setTopValue` which is used for paging. So the comparator is feeded with this maximum term, which compares lower than a document without a value given that this maximum term IS a value (we should provide the comparator with `null`). Here is an untested patch that should fix the issue:\n\n``` diff\ndiff --git a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java\nindex 7e64720..66fa306 100644\n--- a/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java\n+++ b/src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java\n@@ -97,11 +97,21 @@ public class BytesRefFieldComparatorSource extends IndexFieldData.XFieldComparat\n // TopDocs.merge deal with it (it knows how to)\n BytesRef value = super.value(slot);\n if (value == null) {\n+ assert sortMissingFirst(missingValue) || sortMissingLast(missingValue);\n value = missingBytes;\n }\n return value;\n }\n\n+ public void setTopValue(BytesRef topValue) {\n+ // symetric of value(int): if we need to feed the comparator with <tt>null</tt>\n+ // if we overrode the value with MAX_TERM in value(int)\n+ if (topValue == missingBytes && (sortMissingFirst(missingValue) || sortMissingLast(missingValue))) {\n+ topValue = null;\n+ }\n+ super.setTopValue(topValue);\n+ }\n+ \n };\n }\n```\n\nBut I hope we can come with a cleaner solution on 2.x by doing #9155 \n", "created_at": "2015-01-06T09:19:21Z" }, { "body": "@jpountz This proposed fix fixes the reported bug, so +1 for adding this initially to master, 1.x and 1.4 branches and then work on a better fix in master.\n", "created_at": "2015-01-06T10:21:27Z" } ], "number": 9136, "title": "Scroll search with ASC sort order on missing fields does not terminate" }
{ "body": "For the comparator to work correctly, we need to give it the same value in\n`setTopValue` as the value that it returned in `value`.\n\nClose #9136\n", "number": 9157, "review_comments": [ { "body": "This is not needed but was just added for consitency\n", "created_at": "2015-01-06T10:47:39Z" } ], "title": "Fix paging on strings sorted in ascending order." }
{ "commits": [ { "message": "Search: Fix paging on strings sorted in ascending order.\n\nFor the comparator to work correctly, we need to give it the same value in\n`setTopValue` as the value that it gave back in `value`.\n\nClose #9136" } ], "files": [ { "diff": "@@ -91,17 +91,32 @@ protected SortedDocValues getSortedDocValues(LeafReaderContext context, String f\n }\n }\n \n+ @Override\n+ public void setScorer(Scorer scorer) {\n+ BytesRefFieldComparatorSource.this.setScorer(scorer);\n+ }\n+\n public BytesRef value(int slot) {\n // TODO: When serializing the response to the coordinating node, we lose the information about\n // whether the comparator sorts missing docs first or last. We should fix it and let\n // TopDocs.merge deal with it (it knows how to)\n BytesRef value = super.value(slot);\n if (value == null) {\n+ assert sortMissingFirst(missingValue) || sortMissingLast(missingValue);\n value = missingBytes;\n }\n return value;\n }\n \n+ public void setTopValue(BytesRef topValue) {\n+ // symetric of value(int): if we need to feed the comparator with <tt>null</tt>\n+ // if we overrode the value with MAX_TERM in value(int)\n+ if (topValue == missingBytes && (sortMissingFirst(missingValue) || sortMissingLast(missingValue))) {\n+ topValue = null;\n+ }\n+ super.setTopValue(topValue);\n+ }\n+\n };\n }\n ", "filename": "src/main/java/org/elasticsearch/index/fielddata/fieldcomparator/BytesRefFieldComparatorSource.java", "status": "modified" }, { "diff": "@@ -24,13 +24,15 @@\n import org.elasticsearch.action.search.SearchRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.common.Priority;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.common.util.concurrent.UncategorizedExecutionException;\n import org.elasticsearch.index.query.QueryBuilders;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.search.SearchHit;\n+import org.elasticsearch.search.sort.FieldSortBuilder;\n import org.elasticsearch.search.sort.SortOrder;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.elasticsearch.test.hamcrest.ElasticsearchAssertions;\n@@ -39,9 +41,15 @@\n import java.util.Map;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static org.elasticsearch.index.query.QueryBuilders.*;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertThrows;\n-import static org.hamcrest.Matchers.*;\n+import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.instanceOf;\n+import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.notNullValue;\n \n /**\n *\n@@ -448,4 +456,38 @@ public void testThatNonExistingScrollIdReturnsCorrectException() throws Exceptio\n \n assertThrows(internalCluster().transportClient().prepareSearchScroll(searchResponse.getScrollId()), RestStatus.NOT_FOUND);\n }\n+\n+ @Test\n+ public void testStringSortMissingAscTerminates() throws Exception {\n+ 
assertAcked(prepareCreate(\"test\")\n+ .setSettings(ImmutableSettings.settingsBuilder().put(IndexMetaData.SETTING_NUMBER_OF_SHARDS, 1).put(IndexMetaData.SETTING_NUMBER_OF_REPLICAS, 0))\n+ .addMapping(\"test\", \"no_field\", \"type=string\", \"some_field\", \"type=string\"));\n+ client().prepareIndex(\"test\", \"test\", \"1\").setSource(\"some_field\", \"test\").get();\n+ refresh();\n+\n+ SearchResponse response = client().prepareSearch(\"test\")\n+ .setTypes(\"test\")\n+ .addSort(new FieldSortBuilder(\"no_field\").order(SortOrder.ASC).missing(\"_last\"))\n+ .setScroll(\"1m\")\n+ .get();\n+ assertHitCount(response, 1);\n+ assertSearchHits(response, \"1\");\n+\n+ response = client().prepareSearchScroll(response.getScrollId()).get();\n+ assertSearchResponse(response);\n+ assertHitCount(response, 1);\n+ assertNoSearchHits(response);\n+\n+ response = client().prepareSearch(\"test\")\n+ .setTypes(\"test\")\n+ .addSort(new FieldSortBuilder(\"no_field\").order(SortOrder.ASC).missing(\"_first\"))\n+ .setScroll(\"1m\")\n+ .get();\n+ assertHitCount(response, 1);\n+ assertSearchHits(response, \"1\");\n+\n+ response = client().prepareSearchScroll(response.getScrollId()).get();\n+ assertHitCount(response, 1);\n+ assertThat(response.getHits().getHits().length, equalTo(0));\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/scroll/SearchScrollTests.java", "status": "modified" }, { "diff": "@@ -145,6 +145,10 @@ public static void assertHitCount(SearchResponse searchResponse, long expectedHi\n assertVersionSerializable(searchResponse);\n }\n \n+ public static void assertNoSearchHits(SearchResponse searchResponse) {\n+ assertEquals(0, searchResponse.getHits().getHits().length);\n+ }\n+\n public static void assertSearchHits(SearchResponse searchResponse, String... ids) {\n String shardStatus = formatShardStatus(searchResponse);\n ", "filename": "src/test/java/org/elasticsearch/test/hamcrest/ElasticsearchAssertions.java", "status": "modified" } ] }
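The core of this fix is the symmetry between `value(int)` and `setTopValue(...)`: the placeholder handed out for a missing value when shard results are serialized has to be mapped back to `null` before it is fed to the comparator for the next scroll page. A minimal, self-contained illustration of that round trip (a hypothetical class, not the actual comparator source) could look like this:

```java
import org.apache.lucene.util.BytesRef;

/**
 * Minimal illustration (not the actual Elasticsearch comparator) of the invariant
 * behind the fix: the placeholder that value(slot) returns for "missing, sorts last"
 * must be mapped back to the comparator's internal representation (null here) when
 * it comes back in through setTopValue, otherwise paging treats the placeholder as
 * a real, very large term and keeps returning the document without a value.
 */
public class MissingValueRoundTrip {

    // stand-in for the artificial "maximum term" used when missing values sort last
    static final BytesRef MISSING_MARKER = new BytesRef(new byte[] { (byte) 0xFF });

    private BytesRef stored; // null means "this slot had no value"

    public BytesRef value() {
        // outgoing: substitute the marker so merged shard results stay comparable
        return stored == null ? MISSING_MARKER : stored;
    }

    public void setTopValue(BytesRef topValue) {
        // incoming (scroll / paging): undo the substitution before comparing,
        // mirroring the identity check against missingBytes in the patch above
        this.stored = (topValue == MISSING_MARKER) ? null : topValue;
    }

    public static void main(String[] args) {
        MissingValueRoundTrip slot = new MissingValueRoundTrip();
        BytesRef serialized = slot.value();      // marker goes out for a missing value
        slot.setTopValue(serialized);            // ... and maps back to "missing" on the way in
        System.out.println(slot.stored == null); // prints true
    }
}
```

Without the mapping in `setTopValue`, the marker would compare as a real maximum term, the document without a value would keep qualifying as coming after it, and the ascending-sort scroll from the issue would never terminate.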
{ "body": "I'm using a 4 node Elasticsearch cluster to ingest tweets from one of the public Twitter streams. After upgrading my from 1.3.2 to 1.4.2 I'm noticing an odd behavior with respect to how shards are distributed across the nodes. Whereas with 1.3.2 the 5 primary shards and 1 replica set of shards were evenly distributed across all 4 nodes, now I'm seeing this behavior:\n1. All the primary shards are saved to one node\n2. The replicas are split between two other nodes\n3. The fourth node receives no shards at all\n\nI'm using default values for these settings:\n\ncluster.routing.allocation.balance.shard\ncluster.routing.allocation.balance.index\ncluster.routing.allocation.balance.primary\ncluster.routing.allocation.balance.threshold\n\nHowever I've set cluster.routing.allocation.balance.shard to 0.6f and cluster.routing.allocation.balance.primary to 0.06f in the hope that these settings will coerce Elasticsearch to distribute the shards more evenly. \n\nIs this issue a bug?\n", "comments": [ { "body": "Hi @vichargrave \n\nWhat other settings do you have?\n", "created_at": "2014-12-22T13:10:00Z" }, { "body": "Hi Clinton.\n\nI've enclosed the 3 config files I'm using. Note that I observed the\nproblem with the default values of the following fields:\n\ncluster.routing.allocation.balance.shard: 1.0f\ncluster.routing.allocation.balance.primary: 0.50f\n\nHowever, the problem persists after applying these settings.\n\nOn Mon, Dec 22, 2014 at 5:10 AM, Clinton Gormley notifications@github.com\nwrote:\n\n> Hi @vichargrave https://github.com/vichargrave\n> \n> What other settings do you have?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9023#issuecomment-67835189\n> .\n", "created_at": "2014-12-22T16:20:27Z" }, { "body": "Hi @vichargrave \n\nI assume that you attached them to a reply email? They don't show up on github. While you're about it, please can you also provide:\n\n```\nGET /_cluster/settings\nGET /_settings\n```\n\nthanks\n", "created_at": "2014-12-22T17:33:19Z" }, { "body": "Sorry about that. OK here are the contents of my 3 configuration files and the results of the commands you requested:\n\n<pre>\n==== /etc/elasticsearch/elasticsearch.yml\n######################### Elasticsearch Configuration #########################\n# This file contains an overview of various configuration settings,\n# targeted at operations staff. Application developers should\n# consult the guide at <http://elasticsearch.org/guide>.\n#\n# The installation procedure is covered at\n# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html>.\n#\n# ElasticSearch comes with reasonable defaults for most settings,\n# so you can try it out without bothering with configuration.\n#\n# Most of the time, these defaults are just fine for running a production\n# cluster. If you're fine-tuning your cluster, or wondering about the\n# effect of certain configuration option, please _do ask_ on the\n# mailing list or IRC channel [http://elasticsearch.org/community].\n# Any element in the configuration can be replaced with environment variables\n# by placing them in ${...} notation. 
For example:\n#\n# node.rack: ${RACK_ENV_VAR}\n# For information on supported formats and syntax for the config file, see\n# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html>\n################################### Cluster ###################################\n# Cluster name identifies your cluster for auto-discovery. If you're running\n# multiple clusters on the same network, make sure you're using unique names.\n#\ncluster.name: twitter_filter\n#################################### Node #####################################\n# Node names are generated dynamically on startup, so you're relieved\n# from configuring them manually. You can tie this node to a specific name:\n#\nnode.name: \"r5-9-37\"\n# Every node can be configured to allow or deny being eligible as the master,\n# and to allow or deny to store the data.\n#\n# Allow this node to be eligible as a master node (enabled by default):\n#\n# node.master: true\n#\n# Allow this node to store data (enabled by default):\n#\n# node.data: true\n# You can exploit these settings to design advanced cluster topologies.\n#\n# 1. You want this node to never become a master node, only to hold data.\n# This will be the \"workhorse\" of your cluster.\n#\n# node.master: false\n# node.data: true\n#\n# 2. You want this node to only serve as a master: to not store any data and\n# to have free resources. This will be the \"coordinator\" of your cluster.\n#\n# node.master: true\n# node.data: false\n#\n# 3. You want this node to be neither master nor data node, but\n# to act as a \"search load balancer\" (fetching data from nodes,\n# aggregating results, etc.)\n#\n# node.master: false\n# node.data: false\n# Use the Cluster Health API [http://localhost:9200/_cluster/health], the\n# Node Info API [http://localhost:9200/_cluster/nodes] or GUI tools\n# such as <http://github.com/lukas-vlcek/bigdesk> and\n# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.\n# A node can have generic attributes associated with it, which can later be used\n# for customized shard allocation filtering, or allocation awareness. An attribute\n# is a simple key value pair, similar to node.key: value, here is an example:\n#\n# node.rack: rack314\n# By default, multiple nodes are allowed to start from the same installation location\n# to disable it, set the following:\n# node.max_local_storage_nodes: 1\n#################################### Index ####################################\n# You can set a number of options (such as shard/replica options, mapping\n# or analyzer definitions, translog settings, ...) for indices globally,\n# in this file.\n#\n# Note, that it makes more sense to configure index settings specifically for\n# a certain index, either when creating it or by using the index templates API.\n#\n# See <http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html> and\n# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html>\n# for more information.\n# Set the number of shards (splits) of an index (5 by default):\n#\nindex.number_of_shards: 5\n# Set the number of replicas (additional copies) of an index (1 by default):\n#\nindex.number_of_replicas: 1\n# Note, that for development on a local machine, with small indices, it usually\n# makes sense to \"disable\" the distributed features:\n#\n# index.number_of_shards: 1\n# index.number_of_replicas: 0\n# These settings directly affect the performance of index and search operations\n# in your cluster. 
Assuming you have enough machines to hold shards and\n# replicas, the rule of thumb is:\n#\n# 1. Having more *shards* enhances the _indexing_ performance and allows to\n# _distribute_ a big index across machines.\n# 2. Having more *replicas* enhances the _search_ performance and improves the\n# cluster _availability_.\n#\n# The \"number_of_shards\" is a one-time setting for an index.\n#\n# The \"number_of_replicas\" can be increased or decreased anytime,\n# by using the Index Update Settings API.\n#\n# ElasticSearch takes care about load balancing, relocating, gathering the\n# results from nodes, etc. Experiment with different settings to fine-tune\n# your setup.\n# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect\n# the index status.\n#################################### Paths ####################################\n# Path to directory containing configuration (this file and logging.yml):\n#\n# path.conf: /path/to/conf\n# Path to directory where to store index data allocated for this node.\n#\npath.data: /data/0,/data/1\n#\n# Can optionally include more than one location, causing data to be striped across\n# the locations (a la RAID 0) on a file level, favouring locations with most free\n# space on creation. For example:\n#\n# path.data: /path/to/data1,/path/to/data2\n# Path to temporary files:\n#\n# path.work: /path/to/work\n# Path to log files:\n#\n# path.logs: /path/to/logs\n# Path to where plugins are installed:\n#\n# path.plugins: /path/to/plugins\n#################################### Plugin ###################################\n# If a plugin listed here is not installed for current node, the node will not start.\n#\n# plugin.mandatory: mapper-attachments,lang-groovy\n################################### Memory ####################################\n# ElasticSearch performs poorly when JVM starts swapping: you should ensure that\n# it _never_ swaps.\n#\n# Set this property to true to lock the memory:\n#\nbootstrap.mlockall: true\n# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set\n# to the same value, and that the machine has enough memory to allocate\n# for ElasticSearch, leaving enough memory for the operating system itself.\n#\n# You should also make sure that the ElasticSearch process is allowed to lock\n# the memory, eg. by using `ulimit -l unlimited`.\n############################## Network And HTTP ###############################\n# ElasticSearch, by default, binds itself to the 0.0.0.0 address, and listens\n# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node\n# communication. (the range means that if the port is busy, it will automatically\n# try the next port).\n# Set the bind address specifically (IPv4 or IPv6):\n#\n# network.bind_host: 192.168.0.1\n# Set the address other nodes will use to communicate with this node. If not\n# set, it is automatically derived. 
It must point to an actual IP address.\n#\n# network.publish_host: 192.168.0.1\n# Set both 'bind_host' and 'publish_host':\n#\n# network.host: 192.168.0.1\n# Set a custom port for the node to node communication (9300 by default):\n#\n# transport.tcp.port: 9300\n# Enable compression for all communication between nodes (disabled by default):\n#\n# transport.tcp.compress: true\n# Set a custom port to listen for HTTP traffic:\n#\n# http.port: 9200\n# Set a custom allowed content length:\n#\n# http.max_content_length: 100mb\n# Disable HTTP completely:\n#\n# http.enabled: false\n################################### Gateway ###################################\n# The gateway allows for persisting the cluster state between full cluster\n# restarts. Every change to the state (such as adding an index) will be stored\n# in the gateway, and when the cluster starts up for the first time,\n# it will read its state from the gateway.\n# There are several types of gateway implementations. For more information, see\n# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html>.\n# The default gateway type is the \"local\" gateway (recommended):\n#\n# gateway.type: local\n# Settings below control how and when to start the initial recovery process on\n# a full cluster restart (to reuse as much local data as possible when using shared\n# gateway).\n# Allow recovery process after N nodes in a cluster are up:\n#\n# gateway.recover_after_nodes: 1\n# Set the timeout to initiate the recovery process, once the N nodes\n# from previous setting are up (accepts time value):\n#\n# gateway.recover_after_time: 5m\n# Set how many nodes are expected in this cluster. Once these N nodes\n# are up (and recover_after_nodes is met), begin recovery process immediately\n# (without waiting for recover_after_time to expire):\n#\n# gateway.expected_nodes: 2\n############################# Recovery Throttling #############################\n# These settings allow to control the process of shards allocation between\n# nodes during initial recovery, replica allocation, rebalancing,\n# or when adding and removing nodes.\n# Set the number of concurrent recoveries happening on a node:\n#\n# 1. During the initial recovery\n#\n# cluster.routing.allocation.node_initial_primaries_recoveries: 4\n#\n# 2. During adding/removing nodes, rebalancing, etc\n#\n# cluster.routing.allocation.node_concurrent_recoveries: 2\n# Set to throttle throughput when recovering (eg. 100mb, by default 20mb):\n#\n# indices.recovery.max_bytes_per_sec: 20mb\n# Set to limit the number of open concurrent streams when\n# recovering a shard from a peer:\n#\n# indices.recovery.concurrent_streams: 5\n################################## Discovery ##################################\n# Discovery infrastructure ensures nodes can be found within a cluster\n# and master node is elected. Multicast discovery is the default.\n# Set to ensure a node sees N other master eligible nodes to be considered\n# operational within the cluster. 
Its recommended to set it to a higher value\n# than 1 when running more than 2 nodes in the cluster.\n#\ndiscovery.zen.minimum_master_nodes: 3\n# Set the time to wait for ping responses from other nodes when discovering.\n# Set this option to a higher value on a slow or congested network\n# to minimize discovery failures:\n#\ndiscovery.zen.ping.timeout: 15s\n# For more information, see\n# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html>\n# Unicast discovery allows to explicitly control which nodes will be used\n# to discover the cluster. It can be used when multicast is not present,\n# or to restrict the cluster communication-wise.\n#\n# 1. Disable multicast discovery (enabled by default):\n#\ndiscovery.zen.ping.multicast.enabled: false\n#\n# 2. Configure an initial list of master nodes in the cluster\n# to perform discovery when new nodes (master or data) are started:\n#\n# Note these are not the real IP addresses just placeholders\ndiscovery.zen.ping.unicast.hosts: [\"10.0.0.1\",\"10.0.0.2\",\"10.0.0.3\",\"10.0.0.4\"]\n# EC2 discovery allows to use AWS EC2 API in order to perform discovery.\n#\n# You have to install the cloud-aws plugin for enabling the EC2 discovery.\n#\n# For more information, see\n# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html>\n#\n# See <http://elasticsearch.org/tutorials/elasticsearch-on-ec2/>\n# for a step-by-step tutorial.\n################################## Slow Log ##################################\n# Shard level query and fetch threshold logging.\n#index.search.slowlog.threshold.query.warn: 10s\n#index.search.slowlog.threshold.query.info: 5s\n#index.search.slowlog.threshold.query.debug: 2s\n#index.search.slowlog.threshold.query.trace: 500ms\n#index.search.slowlog.threshold.fetch.warn: 1s\n#index.search.slowlog.threshold.fetch.info: 800ms\n#index.search.slowlog.threshold.fetch.debug: 500ms\n#index.search.slowlog.threshold.fetch.trace: 200ms\n#index.indexing.slowlog.threshold.index.warn: 10s\n#index.indexing.slowlog.threshold.index.info: 5s\n#index.indexing.slowlog.threshold.index.debug: 2s\n#index.indexing.slowlog.threshold.index.trace: 500ms\n################################## GC Logging ################################\n#monitor.jvm.gc.ParNew.warn: 1000ms\n#monitor.jvm.gc.ParNew.info: 700ms\n#monitor.jvm.gc.ParNew.debug: 400ms\n#monitor.jvm.gc.ConcurrentMarkSweep.warn: 10s\n#monitor.jvm.gc.ConcurrentMarkSweep.info: 5s\n#monitor.jvm.gc.ConcurrentMarkSweep.debug: 2s\n############################ Lhotsky optimizations ##########################\n## Threadpool Settings ##\n# Search pool\nthreadpool.search.type: fixed\nthreadpool.search.size: 20\nthreadpool.search.queue_size: 100\n# Bulk pool\nthreadpool.bulk.type: fixed\nthreadpool.bulk.size: 60\nthreadpool.bulk.queue_size: 300\n# Index pool\nthreadpool.index.type: fixed\nthreadpool.index.size: 20\nthreadpool.index.queue_size: 100\n## Index Settings ##\n# Indices settings\nindices.memory.min_shard_index_buffer_size: 12mb\nindices.memory.min_index_buffer_size: 96mb\n# Cache Sizes\nindices.fielddata.cache.size: 15%\nindices.cache.filter.size: 15%\n# Indexing Settings for Writes\nindex.refresh_interval: 30s\nindex.translog.flush_threshold_ops: 50000\n############################ Other optimizations ##########################\n## See: http://www.elasticsearch.org/blog/performance-considerations-elasticsearch-indexing/\n# Thread settings\nindex.merge.scheduler.max_thread_count: 1\n# Indices 
settings\nindices.memory.index_buffer_size: 15%\n# Indexing Settings for Writes\nindex.translog.flush_threshold_size: 1gb\n## See: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-update-settings.html\n# Cluster balancing #\ncluster.routing.allocation.balance.shard: 1.0f\ncluster.routing.allocation.balance.primary: 0.50f\n==== /etc/sysconfig/elasticsearch\n# Directory where the Elasticsearch binary distribution resides\nES_HOME=/usr/share/elasticsearch\n# Heap Size (defaults to 256m min, 1g max)\nES_HEAP_SIZE=16g\n# Heap new generation\n#ES_HEAP_NEWSIZE=\n# max direct memory\n#ES_DIRECT_SIZE=\n# Additional Java OPTS\n#ES_JAVA_OPTS=\n# Maximum number of open files\nMAX_OPEN_FILES=131072\n# Maximum amount of locked memory\n#MAX_LOCKED_MEMORY=\n# Maximum number of VMA (Virtual Memory Areas) a process can own\nMAX_MAP_COUNT=262144\n# Elasticsearch log directory\nLOG_DIR=/var/log/elasticsearch\n# Elasticsearch data directory\nDATA_DIR=/var/lib/elasticsearch\n# Elasticsearch work directory\nWORK_DIR=/tmp/elasticsearch\n# Elasticsearch conf directory\nCONF_DIR=/etc/elasticsearch\n# Elasticsearch configuration file (elasticsearch.yml)\nCONF_FILE=/etc/elasticsearch/elasticsearch.yml\n# User to run as, change this to a specific elasticsearch user if possible\n# Also make sure, this user can write into the log directories in case you change them\n# This setting only works for the init script, but has to be configured separately for systemd startup\nES_USER=elasticsearch\n# Configure restart on package upgrade (true, every other setting will lead to not restarting)\n#RESTART_ON_UPGRADE=true\n==== /etc/security/limits.conf\n# /etc/security/limits.conf\n#\n#Each line describes a limit for a user in the form:\n#\n#<domain> <type> <item> <value>\n#\n#Where:\n#<domain> can be:\n# - an user name\n# - a group name, with @group syntax\n# - the wildcard *, for default entry\n# - the wildcard %, can be also used with %group syntax,\n# for maxlogin limit\n#\n#<type> can have the two values:\n# - \"soft\" for enforcing the soft limits\n# - \"hard\" for enforcing hard limits\n#\n#<item> can be one of the following:\n# - core - limits the core file size (KB)\n# - data - max data size (KB)\n# - fsize - maximum filesize (KB)\n# - memlock - max locked-in-memory address space (KB)\n# - nofile - max number of open files\n# - rss - max resident set size (KB)\n# - stack - max stack size (KB)\n# - cpu - max CPU time (MIN)\n# - nproc - max number of processes\n# - as - address space limit (KB)\n# - maxlogins - max number of logins for this user\n# - maxsyslogins - max number of logins on the system\n# - priority - the priority to run user process with\n# - locks - max number of file locks the user can hold\n# - sigpending - max number of pending signals\n# - msgqueue - max memory used by POSIX message queues (bytes)\n# - nice - max nice priority allowed to raise to values: [-20, 19]\n# - rtprio - max realtime priority\n#\n#<domain> <type> <item> <value>\n#\n#* soft core 0\n#* hard rss 10000\n#@student hard nproc 20\n#@faculty soft nproc 20\n#@faculty hard nproc 50\n#ftp hard nproc 0\n#@student - maxlogins 4\n# Ensure ElasticSearch can open files and lock memory!\nelasticsearch soft nofile 131072\nelasticsearch hard nofile 131072\nelasticsearch - memlock unlimited\n==== curl localhost:9200/_cluster/settings\n{\n \"persistent\" : {\n \"cluster\" : {\n \"routing\" : {\n \"allocation\" : {\n \"enable\" : \"all\",\n \"balance\" : {\n \"primary\" : \"0.50f\",\n \"shard\" : \"1.0f\"\n }\n }\n }\n }\n 
},\n \"transient\" : {\n \"cluster\" : {\n \"routing\" : {\n \"allocation\" : {\n \"enable\" : \"all\"\n }\n }\n }\n }\n}\n==== curl localhost:9200/_settings\n{\n \"tweets-2014-12-13:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"UYsabdN4T865kErAdx1AJg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"XmbQJ9acTvade6S_8qUwDA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"xP38TU1_SiGKxs86uUaiqw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"JJcHeHVfSL-JIRC7cstKnQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"qyZiPYIFRbiu3CffO-0uzA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bIwh5NEPToGFIMPFR1Vnug\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"x0-PqYayRButRuoY28t_8Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"IbPsjCm2SSeZ4c4GvuZ9Pg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Hbw7FrzITKCzqYj6Nvq-9g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"-gHV9htqStOQSi4Sz8LvtA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"L0NCAj_nQAWyHtVBJIqCbw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"IKay9tNNTGuSa_6N4Bqu_g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"qp0MvPMKTrikwSjJJA--uQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"njWl7thuRlC4lyME-hES2A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:12\" : {\n \"settings\" : {\n \"index\" : {\n 
\"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jrLaWTD1S7OJ7Y2MWNtNjA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"9Xabn995RVqfiiAItxBR1Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"PdR6LW1DSg2lOypj_q05_Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"t4lYWrKRRI6NsRiIyAkmag\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"znjWieCZQpCqFH-B5EWsdQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"2uuhVBKnQI6fep2fPIlM9Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"MYhfJiCAQGmQD8UY9_EvQA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"srpRd9DkRJq7ountfue2bw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"1xDyyYbJQUWbLCFyOiSeaQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"p1Z543ULRfySf2xHBiuqRg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"PCt35UWXRhuWYej-wR80HQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"htTlSifaSkCr2-5deD_m3A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"H5Rxi0ACQF6BB0BOIV4i4g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"j1kbSwEMS3qKRSjg01FNKQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"-MDxgP0UTF-d-4AmwQT78g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:10\" : {\n \"settings\" : {\n \"index\" : {\n 
\"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"gv1-kUxGQWSXKhsdz6Co5g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"EcrqVojxSGy50NT0bFbHpQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"VeyDvV6gQQyOB_RXdK2Ocw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"-594qw5sTcqBubG8aI7sWg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"GHvNtTY7Qr-_srxRpCOK4g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"WywItVurR6CZhjI0mVB23Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"DhKJz-kuSCejFRxYEJko3Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"T7Qoq1ZWSg6lqrGIDW709Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"dl2RTgkSTHWzxqNJ_CR-SA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"AouHHvMtQyegMou2IXTF0g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"lljTY-mOS7Km0u7F0hoXNA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"OpEj0jQfTdm_Jw8LRNWnyA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"2QFiH-iMRUCPyXZ8xLrHng\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Ccl5agM9QZu84doN6hrc5g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"DUMj-e3eSTaocM8yplzd4A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:15\" : {\n \"settings\" : {\n \"index\" : {\n 
\"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"EQ_OEMIYTZahqUFV-g3wVQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"PAEgY97DT12Ajsqn6Kjeyw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"SURp1kWpRzORNW8QA4XoGQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"FeSh0SZcQmGyI_HSUqeeJw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"GFThutg0QLqvFkB-fzBhdA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"0pRe5fK_QoGapxZMfg658Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"htDrRm5STX6xjymV9zwavA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"5FugA0RhQa-g6j8h2gyfIw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"aXvYB__eRyOAbOyL2VIsYQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"DAc3LFbdTpaZKDg5ALkiLw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"LYYiTU6FQFm7mf5e4zLksw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"kV6W4bQKRzObTt1nQBRz-g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ugMLX-PORgGn2vvJDVc8Ew\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"E4XHaKt1TY6U2aM9Jy9BPA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"4oRje_rWTAKgsSk7EaE1XA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:05\" : {\n \"settings\" : {\n \"index\" : {\n 
\"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"G9hJ0NPmRciMWorHhwLdLQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"81VNgAmHT8W_HLlKzKEV0w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bmTOmptnSIqfHu3wuw_x6Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"6RS8UV4LRDWX_pS0K6U5cA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"15qozk_tTqK1bTt0hbp0eQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"lnML8R-xT0abTTm5ssE3sA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"xO0_3gRTRbuTDDPav1HrLA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"WxkiU7JCSDimulaKFdRaGQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"tD6Q2J5qQS6AhcvVcsG4UQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"DKp30gl3Sxe03-fOOP3nQw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bmgqQIXQRdGaVfKDo3X9Uw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"unVDIeoSSvmoSqSNoQ1x-g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"kjGdTH66TVuIVOLKhLQl2g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"pJcG-qJsQmigApM7UMjBdQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Ky_lY_GNQsyrXx-WKSzOMw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:07\" : {\n \"settings\" : {\n \"index\" : {\n 
\"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"_22oYqPkSXqfJnXqC6QN5g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"EUpEVkVpT6GeWOyKxMidIw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"nQy507JMQJ-nsOIT69nPsA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"20FRd6IvSxK2gVv8JVFrqA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"1abDESSdRvm1AkGwqoqKaw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"-CowMUqYToqosKyqEiE_ag\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"REi8lL3wQwiWsRLLgMq1RQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"L-Z67pJlRi-VuKCR05crvg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"caz67Lb8T9GmH1OiqRV2_A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"tm0f6GOaQqO4jyOKWo-JyA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Sy12Gb9HR62gFNasbvJGkQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Bk178p4AR4KHwd6LfadX-g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ccUrleOFRE-hlCJGOVXlPA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"lsUggPfQSqmnWv4z7befLw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"whois_cache\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Cblh3K-0ScmiYDqJei4mmw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" 
: \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"F5NwMz9uTdavapwAQI4VCQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"j-TAq6ZwTPa3TRjrd7jP8g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"9uWEZXIoQ6K-9H9qtByrEA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Y7n82EPrRNebX0Aaxl-rgQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"kh9OCxVGT2yS2q_GNwy2Kg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"4C5IMnHKS2eNDBkk4pPrSQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"e2EWlpiZQBC4GTHlN4CfmQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"beUtunNwS225UgEAoNWb0g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"dn_cache\" : {\n \"settings\" : {\n \"index\" : {\n \"uuid\" : \"N-Vd613DRAqEpTlb5JZrtg\",\n \"analysis\" : {\n \"analyzer\" : {\n \"word\" : {\n \"type\" : \"pattern\",\n \"pattern\" : \"\\\\W\"\n }\n }\n },\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"0LREWi0eTOGhyQtwumPIKw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"u3mCgpjZQ3m10cMKewnTfA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"cYWaMe9wTXe24Y4QbRB6Sw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"if1NIF_6QU6hLNlMi6IH_A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"u0hcm5-VS6ylnp8x5DIVDQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"DZB1AJfaQfeCp8wCH58f5w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n 
},\n \"tweets-2014-12-12:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"dazxQtOjRK68OXsVBhHV5A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"oLs5vaveQXK09fZHjeaDMw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"5WhVijvgRu2V9eMWcgmEMQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Yh9NsYqeRt--LOLpkliN9w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"CYhV1IVuQDWJOQE72F0r1g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"_2n4EiTlTPGdbbEwkudYlg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"CiJm4UcBSbum5EYud6wumg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"j5HRoYoKTMGzmTB6JSs97w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Bmp69fHHRpC_1DrzsaBwwQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"LfyvRDFKSOaQyMJr81HrgQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Nn5P67hoSwOGxt5VQ04JeQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"kHDcYTRJSqanNALamvOnYQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"WAMTMWqmSVGDyBa9aRR-3w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"BJLUGQjtSDS-T6juhFZ8Tg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-22\" : {\n \"settings\" : {\n \"index\" : {\n \"creation_date\" : \"1419206402879\",\n \"uuid\" : \"XmLTPkgSTOmgOcX266Ouyw\",\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"version\" : {\n \"created\" : 
\"1040299\"\n }\n }\n }\n },\n \"tweets-2014-12-19:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"4T41ZTexRwuSSnNrjhZ8ow\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"yNV6Mnw6QSOFwI9oiAGGJg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"K8KNoc40Q6WBTxLiL4o-wA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"vQttEkAHSai3zUaiq_B9hg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"vo9ZyHf-SnORAFQ1W6dmvw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"wDanhf76QzyoDZ-gZTR4QQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"HiNJVmMFSqyoTskEJF_Lug\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"RKJXz8hDQCKPIy3BIcbP_g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"0TrgCoRERY6nFfxXr6J81A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"HPpiPNdjS_m7azPHTw0ceg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"8PFxJJ8NQDePsEMKdlzWTw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"tc8-h160T0q9xLMmK6Jx6g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"mcmm18QoQui_eZ7KsZlfdg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"N4V6PVPcSGWOsBw8ccmJZQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"0ns6gR-0S3itlmS5j5HSrA\",\n \"version\" : {\n \"created\" : \"1030299\"\n 
}\n }\n }\n },\n \"tweets-2014-12-02:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Du8Mjc-oRXCS2pjtUNmSEg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"iRbjNDYMRre34tJs3vAVUA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"xiyM884jSxaQWVn0LECN_A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"PfTFS76ZQlGZcdvwXDnrTg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"vAWtOjNZTmq0j0-tXcotHw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"tchFhPX1TXKdHRkaWo9WuA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"3vdrqSF3Rv6_7JA8YppVlQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"OHrDK8nVT1KyXQNketGrAQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"xaj5-9ZIQPO_B7_D69YXZg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jecmdjbSQ0yKNU0BZzDIeQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"fL4PDKH0SGiV35zvrqw6qA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"mCVEHYH_Sb2ZvjBzmhUXXw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"c5EI5RmdSBOgIdrJSgoYEQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"FezK9CJxSv6zS7_1jGvIQg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"A6DG-JZ4R8S3bQa2NItseQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n 
},\n \"tweets-2014-12-11:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"XAQyfj4jRG21Ldp7RQSRjg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-19:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"-PY0fawtTQu8cDc3GZmaWA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bhdUiXXaQGmDwASnoG7EEA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"o1yS3uHHTc2kGPgMjan_xg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"enlwmtzyQreDYm9zLFP3uA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"TgoszQiBRhmFmxjjOdaPdA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"24RKPXUKRE6wO_SLDprIGQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"YZptP8AZTFa3jya7NlWdnA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"h0T37_HBRRezV5t4gKWcfQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"OeULRArjROmzOD_2Sd1bPA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"cKtDwZBgSwewdYdSZRsvHg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"hIQURcdLT9yJp6IbN9ocGw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"FliGWQcKSdqH027uwr-k3Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"YQyKINoYQbKioKxxqejMGA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"NvzrYw-GTxuHNLNPOyNESw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-14:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Fc2NJaAlSUKU3g4_8wrTrg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"QaDAFWNqQ5unS8RIsm-XNg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"tuVJ1PM2SfKVt5xo-xU2mA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"hXFe9ebUT2CuRsGNWZweOw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"VMpOMGiwSJqHPY18_FRGBA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"FX4G6PksRkmhmTLJyg6scg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"PFsYFQxZSym5vXrkg0fY0g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"e-C4XPVIQeqKRV1cyS7WYA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"EljN37_1T2aeEvrxGmVWIA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"yQMhNAGBQsivFaNrRWZI9A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"aYNnjxOhRciFoXJQLbhFUA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"EDDpOPM7QwSFE5UtNe9jqw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"QZmzAkKqSgipw-Cd9OyzUA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"gchidFbpS7eSSCpu2RZpRA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"DxzwBm06RAiAng1XDhSGow\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-18:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Up2GCPkWSHa_DoPFggtZGQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"LF-3OVhuSmC9zqQkdkFxiA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"_lHrpHARSIGUGRYxTEZqrA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"JAvCK8tmQo6xNgKQUVCMEQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"RQ8H3o6fSrelQ8M_0Jn5zw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"lDHflm0mS6SO3Q25ipMhjA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bLKrjsm_Rz6gdNc67yEe9Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"8mg7AV2nRimJvG7xBGdGMA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"acM0ydCqTtOH0BqaS-ojhQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"whqAKmASR-6NSLLpKr2tKA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"mNs12_ZoT7u7SyLNFlAwJQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"G63BOecERWKc-5igVaoKIA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jqaHo3_BQ166veyPs5LyhA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"dFgCYiRWQIeNnidp-l2RdQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"cyjaNYsuTGyOkPOP6FiQbQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-12:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"UvL16K6lS-Og6MD2uvNKPA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"S2ZzBMHSQ2Wm_jN_Hmkrig\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"qIBTeXtxTDqK3fuMh2cklw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"l_seD5ZfQCSXpjpGuazQeQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"YaRs4oEoQZqYj4BcpfGRgw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"NIIDCLOYT-meTqFRwLrA1g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"QIExRezATSWz0lDjwcsuWw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"7okuvEFpRqyjc4qaWG4dmQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"cq1qj4Y6QCyeeft3YPck1g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"K3liiY1xQiiLkHS0JtDfHw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"hmeE9dttRIa4IcmlVjA7EA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bMciGxpMT2yJslDKxVS92w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"zRvlYW_uSTeh5yqCkktidA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"uP0-Yq5hRP2A6jCAAncGjQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"pjdxU5MKSZeKCCNA0yYMOQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-12:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"8VvCyFBVQVaCdV4-gc3FcQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"XJ4ldf04RA-N_FRCFE44lA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"z-qqkw1UQgKPOiJskF9Eng\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"37crw3NTSkqyAPRI5JNtCw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"eQh_7P3WRmSlv7SZZWVuxw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"9laYPeJGQaW4wVHI704Rtg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"MheQTW_ZQlCTP6U3dmZrEQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"9W6WdTnLQUyJdYLmUwM_Sg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"X9gB7xeCQUeyvcNo1XLy5A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"MxR_TeKRTG665A5gB_pn0g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"cyOi5WmqRuuKBKzQLhiUCQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"pUSKoEBGQlKrOUlbI4Rdbw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-20\" : {\n \"settings\" : {\n \"index\" : {\n \"creation_date\" : \"1419041578845\",\n \"uuid\" : \"K94ciIyDQo6sa8hHdDjT1A\",\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"version\" : {\n \"created\" : \"1040299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"dIz_C3ICS1msigjeJfHCkQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"pnYQF7Y0Q-uqtrF1MXydyA\",\n \"version\" : {\n \"created\" : 
\"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"SJkiLoOYQySnWrSN9XBb9A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"W4rbJpt4Q9O6rcuXyD3oUg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jmiBuktJRT6Cozz11hBUGQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"XMlFA9xtR0S7vTeN-DzFcQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"_lpa_zJERQyAmVyl3ES_Pw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"lX4xSmkAT7iM80nFD8OFug\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"LPdIHchnQgqAZi8B9NFruQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"PC8xo1FURre9AEXNGUQPjA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"0CmYrZTtQouHXBnnSt7cmQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"tN2RbsMZQ3-8QVF3AwJC4A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"73mr3gomS82zVr9-0yH33g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"-9MXDStzS5O7RkOsiCM3aQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ycAcvFOMQYWjJctYOwxGlQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"TQm6F5-ySUODbNAN2euTyA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"iJxNrLTdScS2o_4BzIcQJw\",\n \"version\" : {\n \"created\" : \"1030299\"\n 
}\n }\n }\n },\n \"tweets-2014-12-18:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"BgZOsa_4SYawfCDPoltLgQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"hVCshHisTdG8RnH4-LLabQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"xomv8WTeTZOzgii1__H4UQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"48BWgRB_TFaLVz3_03FCdg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"7fo6fg0IR9-bo9bl1xEkhg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bHGDokKBQH2acvsiR8bPyA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"GpJkLu-rQqyrcEpjlfib6g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jc7MIoBXTm6UP87ZMN8ABw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"c316-bXrRqOhvGzWjmiovg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"W4nnsfm2QcCyklOMd-DP9Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"7bklvc3TT8GhaSyXwr2KSQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"MY1uLKB-QDSQNHXLHqPvCg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ZOy8Aw8vQGWFNwl1mkcA6A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"dO369ikoTMOEdoL6NTYw7Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"0UqS94o8TPWoA3xgGayIzw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n 
},\n \"tweets-2014-12-14:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"uSsC4DGdTj68QvUuh2o_Nw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"_XIeKXcES8OP8onkcZdn_w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Sa9XhtwYTEKnjizXfyKrLw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"6cigMTHdRO26nfR6y-6y5Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"LJjmjqxDTmKob66dqijFKg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"JERYE_fkSKmRjk42kWf7hQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"9Cu4DuibQWK8QTyqZB1cQw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"JM1TafaWSt6tlFJJRsN1ag\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"zt4gTYMlTOCKd1j9f19TeQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"i6bJbb5CQu6_CP7XyTTVgw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"O6XU0gy8Sgi4bqHHJ42bFw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"BT56x8CHRrGXX-Y0l3lw7g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"cB75lTLuQ0azct6mW1ZH2g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"4Wvi-lOZTZCbRn2p86Tr0g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"1a_qrZL0TZqTlw8NCmOSDQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-08:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"JT1ZFwIqRg6LW8JTMCS5yg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"cgXtIanTSTm422khRWlaKw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"BAjRTjHTQWSJXmTIep_FEQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"PV01lfRgSz-PfPZH3S82aQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"BSCVy3LTRJiWxiee9t89CQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"1WBbSb6cR_ieBjsweQ7gJA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ErA7BKvcSdmpwpcSmIyB5w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"_pPUeA94QJmjaJW-QC087w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ipSEOBNFSTeAA3be9WcXCg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"CC7YSTWKTJuVZeyspXqZFA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"1xQSVOZVT0ypNHdX17qVGg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"vNNMRTRrQGyE8K7jcMmAjA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"4tk90W9ZRBGBgvJi112Ilw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ejfKRxEgQNO_Rp1N7iboaQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"svGCFi-_T5CD1MVLiSt2EA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-02:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"GszhXL0KTyGguZszKd9b1g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"OlGw410ZTcqZHcX9rsIWPA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"TFTTaXhoStGCSWu5eZH6Fw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"pACFgfq7TSq_6niyTfOdYA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"HNLJZFPwSW235tIVpg9Pbg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"J4swhhOqR4uHXJeyZCG6Vw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"UhIa66-lSHCjzEnX0jCAMw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"gcxWa8PMSKy6Xnbqo37x-g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"GKJ17bpXRV6JBYX1EQCI2w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"zFZodalkTvumwQBbvAm2FQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"6LMa0qlKQhKCnzc-KiqM1w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"7SsMATtEQFO_LdpGzhqzUA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"S36f6kXYQqOaG2DKHGDKjw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"5MQiTt7ZRtS6LaF4HiB10w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"CFOxuFwsSEKSiv-IMenVaQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-14:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Qs_YpWBrTJSL2VGWRXXR_Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"NFaoDbxWReKHs0ixAAgiww\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"rK8lYH6bQgiBzgvnqmcFgw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jWnUs3veRuGBxYgFHyOMbw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"7GxGtTruRcydKO94KbMAxw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"5dqwfhpvQKap8u6p-cRcIg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"iPPjvdisS-e5zB1muioTdA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jpMZCFn6RByzjZljRpXySQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"_XSiFYe_RsejBfwzlfnD8w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"vavuYz_YSHiFgXoHcUktmA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"IgvvBNaWT3SQPTa0m5Vuow\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"-opggS70SWWQo3dTcAQ2sg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ePygOAqsTzi2fgE-Ow_Hmg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Z5tSiuEhQX2E_9dmFTzR-g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"k4edXd7cQb2obUA553gG4g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-04:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"V8yuRWy6Sj6kTU2nBgf-qA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"x1VfZH8_SiCN1geaQy3Eng\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Y2U6mc6TQBanl3ve3SJyhA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"PyHM3h1LS126m2OP9FkFjw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"7ni3rrZRQAOqWXgNLkvbZQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"F1c3PFnTSr-8n8vaJJcnwg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Ty8TtCoASLSpUSVkeWKoaw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"mRMZyut_TESWcjTxtNE3nA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"kMCYtgugTTmtruxczrgw2Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"r3_-8adtR_qqBpwVkbrp6g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"8K6rrRRyQueaJQJjYzYWyQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"nRlcYKXsQXuzAZGSL3_X5A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"i8fIFwTjSri2h9xYmr8ZDg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"7-HHZZiGS6KSnHtbmQjzAw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"tAZG6CeIRy6WlQ_y3avmZQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-17:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"mT8jkrTSTWmS-7RP2g-QYQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"8bv6GdmWSsG_17k8vabFcQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"5ZwYeg3AQKCpgWHY0LXGzQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"w6gydbZJTPSEv9stFTo_1A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"u5mvY6EjQ4ifnj8Yed3Wjg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"7oxilByPT-O1sKe7auDy2A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"BKZ8htPYSkOqz8Oq66hLTg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"QrnP2ELhRoqsZl8qRzyAEQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"HO9Y2QD5QkSvxt6vYfxePQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"VY32w_t3TT-_t5UQzf2G4Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"keNiCAUBSEKDJbhM2jAD3Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bLdTuIDPR9Gj7r_qw-V8bg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"mai1tOA4QtiTXF1e6kA0VQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"En3LuDS6TtOX1FN1AY8kwg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"MgRkB8j1SMeC0lArdiDHzA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-14:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Mzlk0tAlSY-WkhyI55tcsA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"rOxGWx5MSU2MsKU8jNJjdQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"1A1lwXNdQc-LzDCjY2V99g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"e-xbJg3NTWuGMNhs0p_lYg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"1EJbaSBnTBiUCRWXSh6RHA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"gfsuUdkOTaa0ukwiS0GO3w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"SG4QSfB4RUSPDJK9YBS7cw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"sgUwjEEJTrCDXLjgMNoQdQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"0mI54mTgST2zJOF3UT0K6g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"1gf3BNMWTyCG_PXUCyTcbA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"md3vBTwhSgu9ksUnTHXFtg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"oDIWSfUNQnCOpwlpPVGK8w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jthO1U1MSCurnbJY6jMd3w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:14\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"TV1XBlmITzuyxdUUv5s9Dg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"oWCxk0r2RROeoddAKakeGw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-04:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"DZebIVfXTqmTe3_GxeXtDg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"bpCrV9FcT4aJvBwYRg2zog\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"OuSA6_5MTy2VyrAuXGS4Ug\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"RCa6O7KFRt6i4KLmsVew2w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"j_ReKubOQvKnbTM71dK3hQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:12\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"BfuhEAdLSMe_nhwB-kUbmg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"VbNtx7ZoQi6yUNPwu1WGsQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"3EpcUt5WScunctSVA83j4g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"zoKce1rjRemUmbSjiOYhgA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"sUIp_klyRt-lB94F37m5uA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:04\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"HVaqK1i2TJODZVMUvNA4HQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"U4EzHkTkRiyTMsfmRGZkRQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"6N3G9UY8Rc-EFpMRZGzQGw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:20\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"0tUR2gNnTfqTq3gKyy-pXw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"eJW4a5ZGTkOR5uKiS_XbkA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-08:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"q4T3sDO2SBi4fKTIu5ZqDg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"b09P_exwTiWGPTEokVTCPg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"2U7DlZ_wQ4mdYgRXIboEZg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-07:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"oxWe_wA6QFSmuocKt5xoSw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"yLZG3u2VQ6mQe4KgrcCF5A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"x7vkXJXvTq63ATiwMi68CQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"WuwchPdxSy-Me6szB4sXYQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"q3tiMMGHRN-2L4Ctr9DoBg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"_s3vMTGLT2K1I6cmNVPx6Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"VFVwNmlhQ-GmoitPQFpMIg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-03:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"s1HXuMZ5TcCcUdoedX0fEQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"IMIcUcnwSculLXSFjZJHZg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"lqZ2Ze5qSRqhqcthuE0Dtw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"zQUmfQWjSyKvdD9cNIxKOQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"EH-21G15TQS3NTJznFMa-w\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-06:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"FMwigrSBTBeDDzdPgVpdZA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"LSyGtxM9THu3CIe4LIwJVw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"fn407gr2Tf2e_2WiBJ-3_A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:17\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"GUaRuI9jRU-kMWrrb9NvcQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"B2RSTFyrQO6E4DDp6lir0Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-13:21\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"TVGWYUNkS2K0bqGxnlFs5A\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:16\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"UdOPND9dS0O_WnOwULbdzg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:03\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"kkwxBPwLQLGuV3Ci9du7IA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"4XBQpNDRTUGDBtLLmbEARA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"W8TV3KUbSiOtqK_9wickoA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-11:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"lyFEoB6TRJSWkKbly16k5g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jspOBzuJSGaowpWCZDLXeg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:01\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"QmpxHcEYSm-cbt6UTH17kQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"h-fiHXYbR6SSZFFJWZ-gZQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Hr7DN0GLQ1WrMlJy6h0U1g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-09:10\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"VbI06arxQq6K6ucQoxzY4g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-08:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"lDNDRq7ZThikvl4CoLPwwQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-15:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"8rC2jLpHQtKO-xJQkzcALw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:18\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"S2PC0xpXQiSRgMP4CdKdTA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-09:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"WUdz1emRQIuJZsQkA-XvqQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-05:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"NSioQga6Tk6DNszCBPUgDA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-02:08\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"ar80Ou48RUOt4KYJnQz4cQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:02\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"qSFuoxmiQoym4H2IbsfiqQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:23\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"luksgGIcQWSDJXhjE3fxsw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-12:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"RvvbLCqcRkSDv9ZEvwJyTg\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:13\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"74rNVrGMTNWlMAlTYiXpgw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-10:00\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Z8DSs8HlRp6dLSJfNvEt6g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-01:05\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"VzLSIPUgScKMGIhhFM-FNA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-04:09\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"Cd-fI8kWSuSM3JrxiEJWaA\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-18:19\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"b5xupp2fRPGmYhmG3bruvw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n 
\"tweets-2014-12-10:07\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"2AmIloROSAeL8pQ3h5Ns8g\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-21\" : {\n \"settings\" : {\n \"index\" : {\n \"creation_date\" : \"1419120003834\",\n \"uuid\" : \"AUEJtbO7SUaYc19yd4VG4A\",\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"version\" : {\n \"created\" : \"1040299\"\n }\n }\n }\n },\n \"tweets-2014-12-14:06\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"xCHssjvCRN-MONIpy4lUYQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-06:15\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"jun_i_ExQamrv-uhPlxlDw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-17:11\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"zc7CCzITTc-SmoruPMw8Iw\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"tweets-2014-12-16:22\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"JPqZyT9WQl-9KiIBBVS28Q\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n },\n \"kibana-int\" : {\n \"settings\" : {\n \"index\" : {\n \"number_of_replicas\" : \"1\",\n \"number_of_shards\" : \"5\",\n \"uuid\" : \"xU72wrkLQVGCemOFq6UlYQ\",\n \"version\" : {\n \"created\" : \"1030299\"\n }\n }\n }\n }\n}\n</pre>\n", "created_at": "2014-12-22T20:22:44Z" }, { "body": "Hi @vichargrave \n\nOK, so we're not talking about a single index here. It's one index in the midst of many others, all of which have to be taken into account. \n\nPrimaries and replicas are essentially identical, they do the same work (with the exception of updates, see #8369), so it doesn't matter if all the primaries are on a single node or not. The first thing I'd say is: change the allocation settings back to their defaults and don't touch them unless you are seeing an actual problem.\n\nThen, if you ARE seeing a problem, choose one setting only and adjust it by small amounts (eg 0.05) until you see something happen, ie when it crosses the threshold.\n\nThe risk that you run with changes these allocations settings is that you end up shuffling shards around your cluster for no good reason, causing a lot of extra I/O.\n\nBtw, looking at your settings, I'd recommend deleting this:\n\n```\nindex.translog.flush_threshold_ops: 50000\n```\n\nIt is better handled by the `flush_threshold_size` setting.\n\nAlso, your threadpool settings are crazy, unless you actually have 60 processors that you can devote to bulk indexing. The default is one bulk thread per processor. More than that and you're just causing a lot of context switching.\n", "created_at": "2014-12-23T12:38:15Z" }, { "body": "OK thanks for suggestions. I can see what you are saying with regard to\nall the primaries residing on a single node. However in my case, when I\nupgraded to ES 1.4.2 I saw that one of my four nodes did not get any\nshards, primary or replica. That situation is not good from a disk\nutilization standpoint. Elasticsearch should be distributing shards across\nall nodes to take advantage of the extra disk space that the entire cluster\nhas to offer. 
If some nodes are ignored then the effective disk space is\nreduced.\n\nOn Tue, Dec 23, 2014 at 4:38 AM, Clinton Gormley notifications@github.com\nwrote:\n\n> Hi @vichargrave https://github.com/vichargrave\n> \n> OK, so we're not talking about a single index here. It's one index in the\n> midst of many others, all of which have to be taken into account.\n> \n> Primaries and replicas are essentially identical, they do the same work\n> (with the exception of updates, see #8369\n> https://github.com/elasticsearch/elasticsearch/issues/8369), so it\n> doesn't matter if all the primaries are on a single node or not. The first\n> thing I'd say is: change the allocation settings back to their defaults and\n> don't touch them unless you are seeing an actual problem.\n> \n> Then, if you ARE seeing a problem, choose one setting only and adjust it\n> by small amounts (eg 0.05) until you see something happen, ie when it\n> crosses the threshold.\n> \n> The risk that you run with changes these allocations settings is that you\n> end up shuffling shards around your cluster for no good reason, causing a\n> lot of extra I/O.\n> \n> Btw, looking at your settings, I'd recommend deleting this:\n> \n> index.translog.flush_threshold_ops: 50000\n> \n> It is better handled by the flush_threshold_size setting.\n> \n> Also, your threadpool settings are crazy, unless you actually have 60\n> processors that you can devote to bulk indexing. The default is one bulk\n> thread per processor. More than that and you're just causing a lot of\n> context switching.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9023#issuecomment-67947073\n> .\n", "created_at": "2014-12-23T22:35:55Z" }, { "body": "@vichargrave Actually, in 1.4 there were a number of commits around the disk allocation decider, which includes automatically rerouting shards when your disks start to fill up. See #8206, #8659, #8382, #8270, #8803 \n\nSo (in theory at least) this should be better in 1.4.2 than before.\n\nDisk space is just one of the many factors taken into account. Presumably it wasn't the most important factor at the time you created the index, which is why you ended up with the current layout. We prefer not to move shards unless really necessary otherwise we generate masses of I/O for no good purpose.\n\nUnfortunately, understanding the decisions made by the allocator is really quite hard. While you can turn on debug logging, it produces way too much info to be useful in production (which reflects the complexity of the decision making process). \n", "created_at": "2014-12-24T10:48:50Z" }, { "body": "It sounds like this ticket can be closed?\n", "created_at": "2014-12-24T10:49:09Z" }, { "body": "It appears that these settings get primaries and replicas distributed across all 4 nodes:\n\ncluster.routing.allocation.balance.shard: 1.0f\ncluster.routing.allocation.balance.primary: 1.0f\n\nStill I think the default shard distribution behavior for 1.4.2 is weird and that I have to make these adjustments even weirder. Yes we can close this issue, but it will be raised by other users.\n\nThanks again for your help. It is always appreciated.\n", "created_at": "2014-12-24T22:35:00Z" }, { "body": "Since this issue hasn't been closed yet, I'll continue. 
I have changed the shard distribution settings to this:\n\ncluster.routing.allocation.balance.shard: 1.5f\ncluster.routing.allocation.balance.primary: 2.0f\n\nAnd yet the shard distribution still looks like the following (note - the bright green squares are primary shards and the dim green squares are replica shards):\n\n![shard_distribution](https://cloud.githubusercontent.com/assets/260028/5572032/8d27eac0-8f4d-11e4-9ad8-3efdfef3a04e.png)\n\nNotice all the primary shards on the second node and no shards of any kind on the first node even though I have increased these settings. \n\nNow I have a 4 node Elasticsearch 1.4.2 cluster where 25% of the disk space is unavailable to store data. This situation has to be a bug of some sort OR I have to increase the balancing settings to values much larger than I have tried so far. \n", "created_at": "2014-12-29T19:31:59Z" }, { "body": "I think this should be logged as a bug. This is not a desirable behavior and it tempts me to roll back to 1.3.2.\n", "created_at": "2015-01-02T19:42:50Z" }, { "body": "why are you setting `cluster.routing.allocation.balance.primary: 2.0f` you should not modify these settings. Did you have them set before? I don't necessarily see this as a bug. We might have some disbalance due to a rolling upgrade, how did you upgrade and did you disable allocation? is the rest of your cluster well balanced? \n", "created_at": "2015-01-02T20:21:35Z" }, { "body": "I'm trying to get the shards evenly distributed between the nodes in my cluster as they were in version 1.3.2. I tried a rolling upgrade from 1.3.2 to 1.4.2, which is when this issues appeared. Now, as I stated before I have a node with no data on it do 25% of my cluster's disk capacity is unused. That can't be right if that is what Elasticsearch is doing, so yes it is a bug as far s I'm concerned.\n\nPlease review the previous posts which tell the entire story. \n", "created_at": "2015-01-02T20:54:24Z" }, { "body": "I did read the post and the story, there are still some things I don't understand. Bare with me and help me to understand:\n- you have only one index in your cluster and one node has no shards of this index so 25% of your cluster is unused assumig 4 nodes, is this correct?\n- if you have more indices are all of these indices well balanced or are there some indices that are not fully balanced?\n- if you look at your cluster, is it balanced based on the number of shards per node cluster wide not per index?\n- you set `cluster.routing.allocation.balance.primary` and `cluster.routing.allocation.balance.shard` did you also set `cluster.routing.allocation.balance.index`? if you didn't I think what you are seeing is expected or rather possible. if you want to have well balanced indices you should give more weight to `cluster.routing.allocation.balance.index` and `cluster.routing.allocation.balance.primary` should really only be a tie-breaker. 
It's pretty hard to get back to full balance if you have many indices that are not well balanced since the shard allocator was build to be conservative with respect to relocations.\n\nI'd start with restoring defaults for these values:\n\n```\ncluster.routing.allocation.balance.primary: 0.05\ncluster.routing.allocation.balance.shard: 0.45\ncluster.routing.allocation.balance.index: 0.5\n```\n\nand calling `_reroute`\n", "created_at": "2015-01-02T21:08:56Z" }, { "body": "FYI I just fixed the wrong default in my last comment from: `cluster.routing.allocation.balance.primary: 0.5` to `cluster.routing.allocation.balance.primary: 0.05` sorry for the glitch\n", "created_at": "2015-01-02T21:23:38Z" }, { "body": "No I have hundreds of indices. Look at the previous settings out put I\nposted when Clinton was answering my questions. The graphic I induced was\njust the result of a recent index. I see what you mean about the\ncluster.routing.allocation.balance.index field. I have it set to the\ndefault. I experimented with the cluster.routing.allocation.balance.primary\n and cluster.routing.allocation.balance.shard settings according to\nClinton's suggestions. There seems to be some confusion on this balancing\nissue.\n\nI'll set the fields you indicate back to their defaults. I'm not sure what\nyou mean about then call _reroute. I know how to omove shards around, but\nwhat just to make sure we are talking about the same thing, what is the\n_reroute call I should use?\n\nOn Fri, Jan 2, 2015 at 1:09 PM, Simon Willnauer notifications@github.com\nwrote:\n\n> I did read the post and the story, there are still some things I don't\n> understand. Bare with me and help me to understand:\n> - you have only one index in your cluster and one node has no shards\n> of this index so 25% of your cluster is unused assumig 4 nodes, is this\n> correct?\n> - if you have more indices are all of these indices well balanced\n> or are there some indices that are not fully balanced?\n> - if you look at your cluster, is it balanced based on the number\n> of shards per node cluster wide not per index?\n> - you set cluster.routing.allocation.balance.primary and\n> cluster.routing.allocation.balance.shard did you also set\n> cluster.routing.allocation.balance.index? if you didn't I think\n> what you are seeing is expected or rather possible. if you want to have\n> well balanced indices you should give more weight to\n> cluster.routing.allocation.balance.index and\n> cluster.routing.allocation.balance.primary should really only be a\n> tie-breaker. It's pretty hard to get back to full balance if you have many\n> indices that are not well balanced since the shard allocator was build to\n> be conservative with respect to relocations.\n> \n> I'd start with restoring defaults for these values:\n> \n> cluster.routing.allocation.balance.primary: 0.5\n> cluster.routing.allocation.balance.shard: 0.45\n> cluster.routing.allocation.balance.index: 0.5\n> \n> and calling _reroute\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9023#issuecomment-68562483\n> .\n", "created_at": "2015-01-02T21:34:23Z" }, { "body": "ok so if you have that many indices I can totally see how this can happen with the settings. Did you have any of these settings changed before this issue happened? \n\n> I'll set the fields you indicate back to their defaults. I'm not sure what\n> you mean about then call _reroute. 
I know how to omove shards around, but\n> what just to make sure we are talking about the same thing, what is the\n> _reroute call I should use?\n\nI am referring to `http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html` if you call \n\n```\ncurl -XPOST 'localhost:9200/_cluster/reroute' -d '{}`\n```\n\nwe do a rebalance run so you changes should kick in after that call. The pure update call won't trigger a reroute.\n", "created_at": "2015-01-02T21:41:50Z" }, { "body": "No I did not change any of the default settings. That came later when I\nnoticed the problem. Let me restore the default settings, then I'll give\nthe reroute call a try. I'll keep you posted.\n\nMany thanks.\n\nOn Fri, Jan 2, 2015 at 1:42 PM, Simon Willnauer notifications@github.com\nwrote:\n\n> ok so if you have that many indices I can totally see how this can happen\n> with the settings. Did you have any of these settings changed before this\n> issue happened?\n> \n> I'll set the fields you indicate back to their defaults. I'm not sure what\n> you mean about then call _reroute. I know how to omove shards around, but\n> what just to make sure we are talking about the same thing, what is the\n> _reroute call I should use?\n> \n> I am referring to\n> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html\n> if you call\n> \n> curl -XPOST 'localhost:9200/_cluster/reroute' -d '{}`\n> \n> we do a rebalance run so you changes should kick in after that call. The\n> pure update call won't trigger a reroute.\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9023#issuecomment-68565077\n> .\n", "created_at": "2015-01-02T22:05:00Z" }, { "body": "I am curious, would it be possible to get an output of the CAT API for shards: \n`http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cat-shards.html`\nI wanna try to simulate the situation you are in that might give me a better insight into the shard allocation. It might be that the allocator has numerical problems with lots of shards and low number of nodes?\n", "created_at": "2015-01-02T22:11:28Z" }, { "body": "So I need to do this on each node, correct?\n\nOn Fri, Jan 2, 2015 at 2:12 PM, Simon Willnauer notifications@github.com\nwrote:\n\n> I am curious, would it be possible to get an output of the CAT API for\n> shards:\n> \n> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cat-shards.html\n> I wanna try to simulate the situation you are in that might give me a\n> better insight into the shard allocation. It might be that the allocator\n> has numerical problems with lots of shards and low number of nodes?\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9023#issuecomment-68567448\n> .\n", "created_at": "2015-01-02T22:16:20Z" }, { "body": "> So I need to do this on each node, correct?\n\nno, any node will do.\n", "created_at": "2015-01-02T22:20:15Z" }, { "body": "This a really long file. Can I send it to you directly via email?\n", "created_at": "2015-01-02T22:25:21Z" }, { "body": "sure `simon at elasticsearch DOT com`\n", "created_at": "2015-01-02T22:27:11Z" }, { "body": "FYI @vichargrave I found the issue. It's in-fact a bug which can put your cluster in quite bad state. I opened a pull-request for it. ^^ \n\nThanks for your input and patience! 
Once I have this committed to 1.4.x would you be able to verify the fix on your real cluster?\n", "created_at": "2015-01-05T23:29:21Z" }, { "body": "Yes I would be happy to.\n\n-- vic\n\nSent from my iPhone\n\n> On Jan 5, 2015, at 3:30 PM, Simon Willnauer notifications@github.com wrote:\n> \n> FYI @vichargrave I found the issue. It's in-fact a bug which can put your cluster in quite bad state. I opened a pull-request for it. ^^\n> \n> Thanks for your input and patience! Once I have this committed to 1.4.x would you be able to verify the fix on your real cluster?\n> \n> —\n> Reply to this email directly or view it on GitHub.\n", "created_at": "2015-01-05T23:34:25Z" }, { "body": "cool I will keep you posted\n", "created_at": "2015-01-05T23:35:44Z" }, { "body": "@vichargrave I pushed the fix to 1.4 and 1.3 as well. Do you have a chance to test this? Note, if you are are upgrading to a snapshot build you might not be able to revert back to 1.4.2! Please consider this, I don't know what purpose your cluster has or if you are running in production. I can totally understand if you say `no thanks`... if you can deal with it I'd be happy if you can try it out.\nYou also need to be aware that if you move to this new build you might see quite some relocations happening, in my tests I saw about 5% of your shards got relocated to get back balance in the cluster.\n\nif you wanna simualte it I have a tests checked in :)\n", "created_at": "2015-01-06T21:39:52Z" }, { "body": "OK , so by I might not be able to revert back to 1.4.2, what does that\nmean. Would I be able to revert back to anything, 1.3.2, etc? Also, if\nthe upgrade to the new code is successful, will all the shards on my\ncluster be redistributed automaltically or do I still have to do that by\ncalling _reroute?\n\nOn Tue, Jan 6, 2015 at 1:40 PM, Simon Willnauer notifications@github.com\nwrote:\n\n> @vichargrave https://github.com/vichargrave I pushed the fix to 1.4 and\n> 1.3 as well. Do you have a chance to test this? Note, if you are are\n> upgrading to a snapshot build you might not be able to revert back to\n> 1.4.2! Please consider this, I don't know what purpose your cluster has or\n> if you are running in production. I can totally understand if you say no\n> thanks... if you can deal with it I'd be happy if you can try it out.\n> You also need to be aware that if you move to this new build you might see\n> quite some relocations happening, in my tests I saw about 5% of your shards\n> got relocated to get back balance in the cluster.\n> \n> if you wanna simualte it I have a tests checked in :)\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9023#issuecomment-68938721\n> .\n", "created_at": "2015-01-07T17:27:15Z" }, { "body": "oh ok so for `1.4.3-SNAPSHOT` you can downgrade to `1.4.2` since they have the same lucene version. You can't go back to `1.3.2` you will likely index too new exceptions. If you are doing a rolling restart I suggestion you to upgrade the non-master nodes first. once they all come up everything should rebalance automactially. Be aware it might take a while...\n", "created_at": "2015-01-07T21:30:17Z" }, { "body": "I checked out the code from github, but I don't see a 1.4.3-SNAPSHOT.\nPerhaps I would be better off rolling back to back to 1.3.2 until the fix\ngoes into the mainstream code. 
Also I'm more comfortable upgrading from\nRPMs.\n\nOn Wed, Jan 7, 2015 at 1:31 PM, Simon Willnauer notifications@github.com\nwrote:\n\n> oh ok so for 1.4.3-SNAPSHOT you can downgrade to 1.4.2 since they have\n> the same lucene version. You can't go back to 1.3.2 you will likely index\n> too new exceptions. If you are doing a rolling restart I suggestion you to\n> upgrade the non-master nodes first. once they all come up everything should\n> rebalance automactially. Be aware it might take a while...\n> \n> —\n> Reply to this email directly or view it on GitHub\n> https://github.com/elasticsearch/elasticsearch/issues/9023#issuecomment-69095535\n> .\n", "created_at": "2015-01-08T18:58:25Z" } ], "number": 9023, "title": "Odd shard distribution in Elasticsearch 1.4.2" }
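The remedy discussed in this thread amounts to two steps: restore the three `cluster.routing.allocation.balance.*` factors to their defaults, then issue an empty reroute so the allocator actually runs a rebalance round (the settings update alone does not trigger one). A minimal Java sketch of those two calls is shown below; the use of a 1.x `TransportClient`, the host/port, and the choice of transient rather than persistent settings are assumptions added for illustration and are not part of the original exchange.

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public final class RestoreBalanceDefaults {
    public static void main(String[] args) {
        // Assumed connection details -- adjust host/port for a real cluster.
        Client client = new TransportClient()
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        try {
            // Step 1: put the balance factors back to their defaults.
            client.admin().cluster().prepareUpdateSettings()
                    .setTransientSettings(ImmutableSettings.settingsBuilder()
                            .put("cluster.routing.allocation.balance.primary", 0.05f)
                            .put("cluster.routing.allocation.balance.shard", 0.45f)
                            .put("cluster.routing.allocation.balance.index", 0.5f)
                            .build())
                    .execute().actionGet();
            // Step 2: an empty reroute asks the master to run a rebalance round.
            client.admin().cluster().prepareReroute().execute().actionGet();
        } finally {
            client.close();
        }
    }
}
```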
{ "body": "In some situations the shard balanceing weight delta becomes negative. Yet,\na negative delta is always treated as `well balanced` which is wrong. I wasn't\nable to reproduce the issue in any way other than useing the real world data\nfrom issue #9023. This commit adds a fix for absolute deltas as well as a base\ntest class that allows to build tests or simulations from the cat API output.\n\nCloses #9023\n", "number": 9149, "review_comments": [ { "body": "Why is this commented out?\n", "created_at": "2015-01-06T09:30:33Z" }, { "body": "Extra space here between \"public\" and \"class\"\n", "created_at": "2015-01-06T09:31:03Z" }, { "body": "Can you name this something more descriptive? maybe `shardIdToRouting` instead of just `map`\n", "created_at": "2015-01-06T10:02:24Z" }, { "body": "aaah it took too long so I commentd it out\n", "created_at": "2015-01-06T10:05:14Z" }, { "body": "I think this could use the `rebalance` function in `CatAllocationTestBase`? It looks like it's performing the same function\n", "created_at": "2015-01-06T10:09:15Z" } ], "title": "Weight deltas must be absolute deltas" }
{ "commits": [ { "message": "[ALLOCATION] Weight deltas must be absolute deltas\n\nIn some situations the shard balanceing weight delta becomes negative. Yet,\na negative delta is always treated as `well balanced` which is wrong. I wasn't\nable to reproduce the issue in any way other than useing the real world data\nfrom issue #9023. This commit adds a fix for absolute deltas as well as a base\ntest class that allows to build tests or simulations from the cat API output.\n\nCloses #9023" } ], "files": [ { "diff": "@@ -78,14 +78,14 @@ due to forced awareness or allocation filtering.\n \n `cluster.routing.allocation.balance.index`::\n Defines a factor to the number of shards per index allocated\n- on a specific node (float). Defaults to `0.5f`. Raising this raises the\n+ on a specific node (float). Defaults to `0.55f`. Raising this raises the\n tendency to equalize the number of shards per index across all nodes in\n the cluster.\n \n `cluster.routing.allocation.balance.primary`::\n Defines a weight factor for the number of primaries of a specific index\n- allocated on a node (float). `0.05f`. Raising this raises the tendency\n- to equalize the number of primary shards across all nodes in the cluster.\n+ allocated on a node (float). `0.00f`. Raising this raises the tendency\n+ to equalize the number of primary shards across all nodes in the cluster. deprecated[1.3.8]\n \n `cluster.routing.allocation.balance.threshold`::\n Minimal optimization value of operations that should be performed (non", "filename": "docs/reference/cluster/update-settings.asciidoc", "status": "modified" }, { "diff": "@@ -259,13 +259,13 @@ public int primaryShardsUnassigned() {\n /**\n * Returns a {@link List} of shards that match one of the states listed in {@link ShardRoutingState states}\n *\n- * @param states a set of {@link ShardRoutingState states}\n+ * @param state {@link ShardRoutingState} to retrieve\n * @return a {@link List} of shards that match one of the given {@link ShardRoutingState states}\n */\n- public List<ShardRouting> shardsWithState(ShardRoutingState... states) {\n+ public List<ShardRouting> shardsWithState(ShardRoutingState state) {\n List<ShardRouting> shards = newArrayList();\n for (IndexShardRoutingTable shardRoutingTable : this) {\n- shards.addAll(shardRoutingTable.shardsWithState(states));\n+ shards.addAll(shardRoutingTable.shardsWithState(state));\n }\n return shards;\n }", "filename": "src/main/java/org/elasticsearch/cluster/routing/IndexRoutingTable.java", "status": "modified" }, { "diff": "@@ -476,13 +476,14 @@ public List<ShardRouting> replicaShardsWithState(ShardRoutingState... states) {\n return shards;\n }\n \n- public List<ShardRouting> shardsWithState(ShardRoutingState... states) {\n+ public List<ShardRouting> shardsWithState(ShardRoutingState state) {\n+ if (state == ShardRoutingState.INITIALIZING) {\n+ return allInitializingShards;\n+ }\n List<ShardRouting> shards = newArrayList();\n for (ShardRouting shardEntry : this) {\n- for (ShardRoutingState state : states) {\n- if (shardEntry.state() == state) {\n- shards.add(shardEntry);\n- }\n+ if (shardEntry.state() == state) {\n+ shards.add(shardEntry);\n }\n }\n return shards;", "filename": "src/main/java/org/elasticsearch/cluster/routing/IndexShardRoutingTable.java", "status": "modified" }, { "diff": "@@ -108,10 +108,10 @@ public RoutingTableValidation validate(MetaData metaData) {\n return validation;\n }\n \n- public List<ShardRouting> shardsWithState(ShardRoutingState... 
states) {\n+ public List<ShardRouting> shardsWithState(ShardRoutingState state) {\n List<ShardRouting> shards = newArrayList();\n for (IndexRoutingTable indexRoutingTable : this) {\n- shards.addAll(indexRoutingTable.shardsWithState(states));\n+ shards.addAll(indexRoutingTable.shardsWithState(state));\n }\n return shards;\n }", "filename": "src/main/java/org/elasticsearch/cluster/routing/RoutingTable.java", "status": "modified" }, { "diff": "@@ -71,9 +71,18 @@ public class BalancedShardsAllocator extends AbstractComponent implements Shards\n public static final String SETTING_SHARD_BALANCE_FACTOR = \"cluster.routing.allocation.balance.shard\";\n public static final String SETTING_PRIMARY_BALANCE_FACTOR = \"cluster.routing.allocation.balance.primary\";\n \n- private static final float DEFAULT_INDEX_BALANCE_FACTOR = 0.5f;\n+ private static final float DEFAULT_INDEX_BALANCE_FACTOR = 0.55f;\n private static final float DEFAULT_SHARD_BALANCE_FACTOR = 0.45f;\n- private static final float DEFAULT_PRIMARY_BALANCE_FACTOR = 0.05f;\n+ /**\n+ * The primary balance factor was introduces as a tie-breaker to make the initial allocation\n+ * more deterministic. Yet other mechanism have been added ensure that the algorithm is more deterministic such that this\n+ * setting is not needed anymore. Additionally, this setting was abused to balance shards based on their primary flag which can lead\n+ * to unexpected behavior when allocating or balancing the shards.\n+ *\n+ * @deprecated the threshold primary balance factor is deprecated and should not be used.\n+ */\n+ @Deprecated\n+ private static final float DEFAULT_PRIMARY_BALANCE_FACTOR = 0.0f;\n \n class ApplySettings implements NodeSettingsService.Listener {\n @Override\n@@ -191,44 +200,23 @@ public static class WeightFunction {\n private final float indexBalance;\n private final float shardBalance;\n private final float primaryBalance;\n- private final EnumMap<Operation, float[]> thetaMap = new EnumMap<>(Operation.class);\n+ private final float[] theta;\n \n public WeightFunction(float indexBalance, float shardBalance, float primaryBalance) {\n float sum = indexBalance + shardBalance + primaryBalance;\n if (sum <= 0.0f) {\n throw new ElasticsearchIllegalArgumentException(\"Balance factors must sum to a value > 0 but was: \" + sum);\n }\n- final float[] defaultTheta = new float[]{shardBalance / sum, indexBalance / sum, primaryBalance / sum};\n- for (Operation operation : Operation.values()) {\n- switch (operation) {\n- case THRESHOLD_CHECK:\n- sum = indexBalance + shardBalance;\n- if (sum <= 0.0f) {\n- thetaMap.put(operation, defaultTheta);\n- } else {\n- thetaMap.put(operation, new float[]{shardBalance / sum, indexBalance / sum, 0});\n- }\n- break;\n- case BALANCE:\n- case ALLOCATE:\n- case MOVE:\n- thetaMap.put(operation, defaultTheta);\n- break;\n- default:\n- assert false;\n- }\n- }\n+ theta = new float[]{shardBalance / sum, indexBalance / sum, primaryBalance / sum};\n this.indexBalance = indexBalance;\n this.shardBalance = shardBalance;\n this.primaryBalance = primaryBalance;\n }\n \n public float weight(Operation operation, Balancer balancer, ModelNode node, String index) {\n- final float weightShard = (node.numShards() - balancer.avgShardsPerNode());\n- final float weightIndex = (node.numShards(index) - balancer.avgShardsPerNode(index));\n- final float weightPrimary = (node.numPrimaries() - balancer.avgPrimariesPerNode());\n- final float[] theta = thetaMap.get(operation);\n- assert theta != null;\n+ final float weightShard = node.numShards() - 
balancer.avgShardsPerNode();\n+ final float weightIndex = node.numShards(index) - balancer.avgShardsPerNode(index);\n+ final float weightPrimary = node.numPrimaries() - balancer.avgPrimariesPerNode();\n return theta[0] * weightShard + theta[1] * weightIndex + theta[2] * weightPrimary;\n }\n \n@@ -250,13 +238,7 @@ public static enum Operation {\n /**\n * Provided during move operation.\n */\n- MOVE,\n- /**\n- * Provided when the weight delta is checked against the configured threshold.\n- * This can be used to ignore tie-breaking weight factors that should not\n- * solely trigger a relocation unless the delta is above the threshold.\n- */\n- THRESHOLD_CHECK\n+ MOVE\n }\n \n /**\n@@ -348,11 +330,16 @@ private boolean initialize(RoutingNodes routing, RoutingNodes.UnassignedShards u\n return allocateUnassigned(unassigned, routing.ignoredUnassigned());\n }\n \n+ private static float absDelta(float lower, float higher) {\n+ assert higher >= lower : higher + \" lt \" + lower +\" but was expected to be gte\";\n+ return Math.abs(higher - lower);\n+ }\n+\n private static boolean lessThan(float delta, float threshold) {\n /* deltas close to the threshold are \"rounded\" to the threshold manually\n to prevent floating point problems if the delta is very close to the\n threshold ie. 1.000000002 which can trigger unnecessary balance actions*/\n- return delta <= threshold + 0.001f;\n+ return delta <= (threshold + 0.001f);\n }\n \n /**\n@@ -393,11 +380,10 @@ public boolean balance() {\n final ModelNode maxNode = modelNodes[highIdx];\n advance_range:\n if (maxNode.numShards(index) > 0) {\n- float delta = weights[highIdx] - weights[lowIdx];\n- delta = lessThan(delta, threshold) ? delta : sorter.weight(Operation.THRESHOLD_CHECK, maxNode) - sorter.weight(Operation.THRESHOLD_CHECK, minNode);\n+ float delta = absDelta(weights[lowIdx], weights[highIdx]);\n if (lessThan(delta, threshold)) {\n if (lowIdx > 0 && highIdx-1 > 0 // is there a chance for a higher delta?\n- && (weights[highIdx-1] - weights[0] > threshold) // check if we need to break at all\n+ && (absDelta(weights[0], weights[highIdx-1]) > threshold) // check if we need to break at all\n ) {\n /* This is a special case if allocations from the \"heaviest\" to the \"lighter\" nodes is not possible\n * due to some allocation decider restrictions like zone awareness. if one zone has for instance\n@@ -747,7 +733,7 @@ public int compare(MutableShardRouting o1,\n final RoutingNode node = routingNodes.node(minNode.getNodeId());\n if (deciders.canAllocate(node, allocation).type() != Type.YES) {\n if (logger.isTraceEnabled()) {\n- logger.trace(\"Can not allocate on node [{}] remove from round decisin [{}]\", node, decision.type());\n+ logger.trace(\"Can not allocate on node [{}] remove from round decision [{}]\", node, decision.type());\n }\n throttledNodes.add(minNode);\n }", "filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/allocator/BalancedShardsAllocator.java", "status": "modified" }, { "diff": "@@ -0,0 +1,98 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.cluster.routing.allocation;\n+\n+import org.apache.lucene.util.TestUtil;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.routing.IndexShardRoutingTable;\n+import org.elasticsearch.cluster.routing.RoutingTable;\n+import org.elasticsearch.cluster.routing.ShardRouting;\n+\n+import java.io.IOException;\n+import java.io.InputStream;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.HashMap;\n+import java.util.Map;\n+\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.INITIALIZING;\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+\n+/**\n+ * see issue #9023\n+ */\n+public class BalanceUnbalancedClusterTest extends CatAllocationTestBase {\n+\n+ @Override\n+ protected Path getCatPath() throws IOException {\n+ Path tmp = newTempDirPath();\n+ try (InputStream stream = Files.newInputStream(getResourcePath(\"/org/elasticsearch/cluster/routing/issue_9023.zip\"))) {\n+ TestUtil.unzip(stream, tmp);\n+ }\n+ return tmp.resolve(\"issue_9023\");\n+ }\n+\n+ @Override\n+ protected ClusterState allocateNew(ClusterState state) {\n+ String index = \"tweets-2014-12-29:00\";\n+ AllocationService strategy = createAllocationService(settingsBuilder()\n+ .build());\n+ MetaData metaData = MetaData.builder(state.metaData())\n+ .put(IndexMetaData.builder(index).settings(settings(Version.CURRENT)).numberOfShards(5).numberOfReplicas(1))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder(state.routingTable())\n+ .addAsNew(metaData.index(index))\n+ .build();\n+\n+ ClusterState clusterState = ClusterState.builder(state).metaData(metaData).routingTable(routingTable).build();\n+ routingTable = strategy.reroute(clusterState).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ while (true) {\n+ if (routingTable.shardsWithState(INITIALIZING).isEmpty()) {\n+ break;\n+ }\n+ routingTable = strategy.applyStartedShards(clusterState, routingTable.shardsWithState(INITIALIZING)).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ }\n+ Map<String, Integer> counts = new HashMap<>();\n+ for (IndexShardRoutingTable table : routingTable.index(index)) {\n+ for (ShardRouting r : table) {\n+ String s = r.currentNodeId();\n+ Integer count = counts.get(s);\n+ if (count == null) {\n+ count = 0;\n+ }\n+ count++;\n+ counts.put(s, count);\n+ }\n+ }\n+ for (Map.Entry<String, Integer> count : counts.entrySet()) {\n+ // we have 10 shards and 4 nodes so 2 nodes have 3 shards and 2 nodes have 2 shards\n+ assertTrue(\"Node: \" + count.getKey() + \" has shard mismatch: \" + count.getValue(), count.getValue() >= 2);\n+ assertTrue(\"Node: 
\" + count.getKey() + \" has shard mismatch: \" + count.getValue(), count.getValue() <= 3);\n+\n+ }\n+ return clusterState;\n+ }\n+\n+}", "filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/BalanceUnbalancedClusterTest.java", "status": "added" }, { "diff": "@@ -0,0 +1,191 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.cluster.routing.allocation;\n+\n+import com.google.common.base.Charsets;\n+import org.elasticsearch.Version;\n+import org.elasticsearch.cluster.ClusterState;\n+import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNodes;\n+import org.elasticsearch.cluster.routing.*;\n+import org.elasticsearch.index.Index;\n+import org.elasticsearch.index.engine.internal.InternalEngine;\n+import org.elasticsearch.index.shard.ShardId;\n+import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n+import org.junit.Ignore;\n+import org.junit.Test;\n+\n+import java.io.BufferedReader;\n+import java.io.IOException;\n+import java.nio.file.Files;\n+import java.nio.file.Path;\n+import java.util.*;\n+import java.util.regex.Matcher;\n+import java.util.regex.Pattern;\n+\n+import static org.elasticsearch.cluster.routing.ShardRoutingState.*;\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+\n+/**\n+ * A base testscase that allows to run tests based on the output of the CAT API\n+ * The input is a line based cat/shards output like:\n+ * kibana-int 0 p STARTED 2 24.8kb 10.202.245.2 r5-9-35\n+ *\n+ * the test builds up a clusterstate from the cat input and optionally runs a full balance on it.\n+ * This can be used to debug cluster allocation decisions.\n+ */\n+@Ignore\n+public abstract class CatAllocationTestBase extends ElasticsearchAllocationTestCase {\n+\n+ protected abstract Path getCatPath() throws IOException;\n+\n+ @Test\n+ public void run() throws IOException {\n+ Set<String> nodes = new HashSet<>();\n+ Map<String, Idx> indices = new HashMap<>();\n+ try (BufferedReader reader = Files.newBufferedReader(getCatPath(), Charsets.UTF_8)) {\n+ String line = null;\n+ // regexp FTW\n+ Pattern pattern = Pattern.compile(\"^(.+)\\\\s+(\\\\d)\\\\s+([rp])\\\\s+(STARTED|RELOCATING|INITIALIZING|UNASSIGNED)\\\\s+\\\\d+\\\\s+[0-9.a-z]+\\\\s+(\\\\d+\\\\.\\\\d+\\\\.\\\\d+\\\\.\\\\d+).*$\");\n+ while((line = reader.readLine()) != null) {\n+ final Matcher matcher;\n+ if ((matcher = pattern.matcher(line)).matches()) {\n+ final String index = matcher.group(1);\n+ Idx idx = indices.get(index);\n+ if (idx == null) {\n+ idx = new Idx(index);\n+ indices.put(index, idx);\n+ }\n+ final int shard = Integer.parseInt(matcher.group(2));\n+ final boolean primary = 
matcher.group(3).equals(\"p\");\n+ ShardRoutingState state = ShardRoutingState.valueOf(matcher.group(4));\n+ String ip = matcher.group(5);\n+ nodes.add(ip);\n+ MutableShardRouting routing = new MutableShardRouting(index, shard, ip, primary, state, 1);\n+ idx.add(routing);\n+ logger.debug(\"Add routing {}\", routing);\n+ } else {\n+ fail(\"can't read line: \" + line);\n+ }\n+ }\n+\n+ }\n+\n+ logger.info(\"Building initial routing table\");\n+ MetaData.Builder builder = MetaData.builder();\n+ RoutingTable.Builder routingTableBuilder = RoutingTable.builder();\n+ for(Idx idx : indices.values()) {\n+ IndexMetaData idxMeta = IndexMetaData.builder(idx.name).settings(settings(Version.CURRENT)).numberOfShards(idx.numShards()).numberOfReplicas(idx.numReplicas()).build();\n+ builder.put(idxMeta, false);\n+ IndexRoutingTable.Builder tableBuilder = new IndexRoutingTable.Builder(idx.name).initializeAsRecovery(idxMeta);\n+ Map<Integer, IndexShardRoutingTable> shardIdToRouting = new HashMap<>();\n+ for (MutableShardRouting r : idx.routing) {\n+ IndexShardRoutingTable refData = new IndexShardRoutingTable.Builder(new ShardId(idx.name, r.id()), true).addShard(r).build();\n+ if (shardIdToRouting.containsKey(r.getId())) {\n+ refData = new IndexShardRoutingTable.Builder(shardIdToRouting.get(r.getId())).addShard(r).build();\n+ }\n+ shardIdToRouting.put(r.getId(), refData);\n+\n+ }\n+ for (IndexShardRoutingTable t: shardIdToRouting.values()) {\n+ tableBuilder.addIndexShard(t);\n+ }\n+ IndexRoutingTable table = tableBuilder.build();\n+ routingTableBuilder.add(table);\n+ }\n+ MetaData metaData = builder.build();\n+\n+ RoutingTable routingTable = routingTableBuilder.build();\n+ DiscoveryNodes.Builder builderDiscoNodes = DiscoveryNodes.builder();\n+ for (String node : nodes) {\n+ builderDiscoNodes.put(newNode(node));\n+ }\n+ ClusterState clusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT).metaData(metaData).routingTable(routingTable).nodes(builderDiscoNodes.build()).build();\n+ if (balanceFirst()) {\n+ clusterState = rebalance(clusterState);\n+ }\n+ clusterState = allocateNew(clusterState);\n+ }\n+\n+ protected abstract ClusterState allocateNew(ClusterState clusterState);\n+\n+ protected boolean balanceFirst() {\n+ return true;\n+ }\n+\n+ private ClusterState rebalance(ClusterState clusterState) {\n+ RoutingTable routingTable;AllocationService strategy = createAllocationService(settingsBuilder()\n+ .build());\n+ RoutingAllocation.Result reroute = strategy.reroute(clusterState);\n+ routingTable = reroute.routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ routingTable = clusterState.routingTable();\n+ int numRelocations = 0;\n+ while (true) {\n+ List<ShardRouting> initializing = routingTable.shardsWithState(INITIALIZING);\n+ if (initializing.isEmpty()) {\n+ break;\n+ }\n+ logger.debug(initializing.toString());\n+ numRelocations += initializing.size();\n+ routingTable = strategy.applyStartedShards(clusterState, initializing).routingTable();\n+ clusterState = ClusterState.builder(clusterState).routingTable(routingTable).build();\n+ }\n+ logger.debug(\"--> num relocations to get balance: \" + numRelocations);\n+ return clusterState;\n+ }\n+\n+\n+\n+ public class Idx {\n+ final String name;\n+ final List<MutableShardRouting> routing = new ArrayList<>();\n+\n+ public Idx(String name) {\n+ this.name = name;\n+ }\n+\n+\n+ public void add(MutableShardRouting r) {\n+ routing.add(r);\n+ }\n+\n+ public int numReplicas() {\n+ int count = 0;\n+ 
for (MutableShardRouting msr : routing) {\n+ if (msr.primary() == false && msr.id()==0) {\n+ count++;\n+ }\n+ }\n+ return count;\n+ }\n+\n+ public int numShards() {\n+ int max = 0;\n+ for (MutableShardRouting msr : routing) {\n+ if (msr.primary()) {\n+ max = Math.max(msr.getId()+1, max);\n+ }\n+ }\n+ return max;\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/CatAllocationTestBase.java", "status": "added" }, { "diff": "", "filename": "src/test/resources/org/elasticsearch/cluster/routing/issue_9023.zip", "status": "added" } ] }
{ "body": "We use PlainTransportFuture as a future for our transport calls. If someone blocks on it and it is interrupted, we throw an ElasticsearchIllegalStateException. We should not set Thread.currentThread().interrupt(); in this case because we already communicate the interrupt through an exception.\n", "comments": [ { "body": "+1\n", "created_at": "2014-12-18T10:07:41Z" } ], "number": 9001, "title": "PlainTransportFuture should not set currentThread().interrupt()" }
{ "body": "If someone blocks on it and it is interrupted, we throw an ElasticsearchIllegalStateException. We should not set Thread.currentThread().interrupt(); in this case because we already communicate the interrupt through an exception.\n\nSimilar to #9001\n", "number": 9141, "review_comments": [], "title": "AdapterActionFuture should not set currentThread().interrupt()" }
{ "commits": [ { "message": "Internal: AdapterActionFuture should not set currentThread().interrupt()\n\nIf someone blocks on it and it is interrupted, we throw an ElasticsearchIllegalStateException. We should not set Thread.currentThread().interrupt(); in this case because we already communicate the interrupt through an exception.\n\nSimilar to #9001" } ], "files": [ { "diff": "@@ -34,9 +34,9 @@\n public interface ActionFuture<T> extends Future<T> {\n \n /**\n- * Similar to {@link #get()}, just catching the {@link InterruptedException} with\n- * restoring the interrupted state on the thread and throwing an {@link org.elasticsearch.ElasticsearchIllegalStateException},\n- * and throwing the actual cause of the {@link java.util.concurrent.ExecutionException}.\n+ * Similar to {@link #get()}, just catching the {@link InterruptedException} and throwing\n+ * an {@link org.elasticsearch.ElasticsearchIllegalStateException} instead. Also catches\n+ * {@link java.util.concurrent.ExecutionException} and throws the actual cause instead.\n * <p/>\n * <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped\n * from {@link org.elasticsearch.transport.RemoteTransportException}. The root failure is\n@@ -45,9 +45,9 @@ public interface ActionFuture<T> extends Future<T> {\n T actionGet() throws ElasticsearchException;\n \n /**\n- * Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just catching the {@link InterruptedException} with\n- * restoring the interrupted state on the thread and throwing an {@link org.elasticsearch.ElasticsearchIllegalStateException},\n- * and throwing the actual cause of the {@link java.util.concurrent.ExecutionException}.\n+ * Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just catching the {@link InterruptedException} and throwing\n+ * an {@link org.elasticsearch.ElasticsearchIllegalStateException} instead. Also catches\n+ * {@link java.util.concurrent.ExecutionException} and throws the actual cause instead.\n * <p/>\n * <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped\n * from {@link org.elasticsearch.transport.RemoteTransportException}. The root failure is\n@@ -56,9 +56,9 @@ public interface ActionFuture<T> extends Future<T> {\n T actionGet(String timeout) throws ElasticsearchException;\n \n /**\n- * Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just catching the {@link InterruptedException} with\n- * restoring the interrupted state on the thread and throwing an {@link org.elasticsearch.ElasticsearchIllegalStateException},\n- * and throwing the actual cause of the {@link java.util.concurrent.ExecutionException}.\n+ * Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just catching the {@link InterruptedException} and throwing\n+ * an {@link org.elasticsearch.ElasticsearchIllegalStateException} instead. Also catches\n+ * {@link java.util.concurrent.ExecutionException} and throws the actual cause instead.\n * <p/>\n * <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped\n * from {@link org.elasticsearch.transport.RemoteTransportException}. 
The root failure is\n@@ -69,9 +69,9 @@ public interface ActionFuture<T> extends Future<T> {\n T actionGet(long timeoutMillis) throws ElasticsearchException;\n \n /**\n- * Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just catching the {@link InterruptedException} with\n- * restoring the interrupted state on the thread and throwing an {@link org.elasticsearch.ElasticsearchIllegalStateException},\n- * and throwing the actual cause of the {@link java.util.concurrent.ExecutionException}.\n+ * Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just catching the {@link InterruptedException} and throwing\n+ * an {@link org.elasticsearch.ElasticsearchIllegalStateException} instead. Also catches\n+ * {@link java.util.concurrent.ExecutionException} and throws the actual cause instead.\n * <p/>\n * <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped\n * from {@link org.elasticsearch.transport.RemoteTransportException}. The root failure is\n@@ -80,9 +80,9 @@ public interface ActionFuture<T> extends Future<T> {\n T actionGet(long timeout, TimeUnit unit) throws ElasticsearchException;\n \n /**\n- * Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just catching the {@link InterruptedException} with\n- * restoring the interrupted state on the thread and throwing an {@link org.elasticsearch.ElasticsearchIllegalStateException},\n- * and throwing the actual cause of the {@link java.util.concurrent.ExecutionException}.\n+ * Similar to {@link #get(long, java.util.concurrent.TimeUnit)}, just catching the {@link InterruptedException} and throwing\n+ * an {@link org.elasticsearch.ElasticsearchIllegalStateException} instead. Also catches\n+ * {@link java.util.concurrent.ExecutionException} and throws the actual cause instead.\n * <p/>\n * <p>Note, the actual cause is unwrapped to the actual failure (for example, unwrapped\n * from {@link org.elasticsearch.transport.RemoteTransportException}. The root failure is", "filename": "src/main/java/org/elasticsearch/action/ActionFuture.java", "status": "modified" }, { "diff": "@@ -44,7 +44,6 @@ public T actionGet() throws ElasticsearchException {\n try {\n return get();\n } catch (InterruptedException e) {\n- Thread.currentThread().interrupt();\n throw new ElasticsearchIllegalStateException(\"Future got interrupted\", e);\n } catch (ExecutionException e) {\n throw rethrowExecutionException(e);\n@@ -73,7 +72,6 @@ public T actionGet(long timeout, TimeUnit unit) throws ElasticsearchException {\n } catch (TimeoutException e) {\n throw new ElasticsearchTimeoutException(e.getMessage());\n } catch (InterruptedException e) {\n- Thread.currentThread().interrupt();\n throw new ElasticsearchIllegalStateException(\"Future got interrupted\", e);\n } catch (ExecutionException e) {\n throw rethrowExecutionException(e);", "filename": "src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java", "status": "modified" } ] }
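One practical consequence of the diff above: since the interrupt status is no longer restored by the future itself, a caller that still needs the flag (for example inside a long-running worker loop) has to restore it when it catches the failure. A hedged caller-side sketch follows; the wrapper method name is an invention for illustration.

```java
import org.elasticsearch.ElasticsearchIllegalStateException;
import org.elasticsearch.action.ActionFuture;

final class CallerSideHandling {

    // `future` stands in for any ActionFuture obtained from a client call.
    static <T> T getRestoringInterrupt(ActionFuture<T> future) {
        try {
            return future.actionGet();
        } catch (ElasticsearchIllegalStateException e) {
            if (e.getCause() instanceof InterruptedException) {
                // The future no longer restores the flag, so restore it here
                // if the surrounding code relies on it.
                Thread.currentThread().interrupt();
            }
            throw e;
        }
    }
}
```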
{ "body": "I decided to reindex my data to take advantage of `doc_values`, but one of 30 indices (~120m docs in each) got less documents after reindexing. I reindexed again and docs disappeared again.\n\nThen I bisected the problem to specific docs and found that some docs in source index has duplicate ids.\n\n```\ncurl -s \"http://web245:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w\"\n```\n\n``` json\n{\n \"took\" : 1156,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"}\n }, {\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"}\n } ]\n }\n}\n```\n\nHere are two indices, source and destination:\n\n```\nhealth status index pri rep docs.count docs.deleted store.size pri.store.size\ngreen open statistics-20141110 5 0 116217042 0 12.3gb 12.3gb\ngreen open statistics-20141110-dv 5 1 116216507 0 32.3gb 16.1gb\n```\n\nSegments of problematic index:\n\n```\nindex shard prirep ip segment generation docs.count docs.deleted size size.memory committed searchable version compound\nstatistics-20141110 0 p 192.168.0.190 _gga 21322 14939669 0 1.6gb 4943008 true true 4.9.0 false\nstatistics-20141110 0 p 192.168.0.190 _isc 24348 10913518 0 1.1gb 4101712 true true 4.9.0 false\nstatistics-20141110 1 p 192.168.0.245 _7i7 9727 7023269 0 766mb 2264472 true true 4.9.0 false\nstatistics-20141110 1 p 192.168.0.245 _i01 23329 14689581 0 1.5gb 4788872 true true 4.9.0 false\nstatistics-20141110 2 p 192.168.1.212 _9wx 12849 8995444 0 987.7mb 3326288 true true 4.9.0 false\nstatistics-20141110 2 p 192.168.1.212 _il1 24085 13205585 0 1.4gb 4343736 true true 4.9.0 false\nstatistics-20141110 3 p 192.168.1.212 _8pc 11280 10046395 0 1gb 4003824 true true 4.9.0 false\nstatistics-20141110 3 p 192.168.1.212 _hwt 23213 13226096 0 1.3gb 4287544 true true 4.9.0 false\nstatistics-20141110 4 p 192.168.2.88 _91i 11718 8328558 0 909.2mb 2822712 true true 4.9.0 false\nstatistics-20141110 4 p 192.168.2.88 _hms 22852 14848927 0 1.5gb 4777472 true true 4.9.0 false\n```\n\nThe only thing that happened with index besides indexing is optimizing to 2 segments per shard.\n", "comments": [ { "body": "Hi @bobrik \n\nIs there any chance this index was written with Elasticsearch 1.2.0?\n\nPlease could you provide the output of this request:\n\n```\ncurl -s \"http://web245:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing\"\n```\n", "created_at": "2014-12-09T12:14:51Z" }, { "body": "Routing is automatically inferred from `@key`\n\n``` json\n{\n \"took\" : 1744,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n 
\"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n }, {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n } ]\n }\n}\n```\n\nIndex was created on 1.3.4, we upgraded from 1.0.1 to 1.3.2 on 2014-09-22\n", "created_at": "2014-12-09T12:33:09Z" }, { "body": "Hi @bobrik \n\nHmm, these two docs are on the same shard! Do you ever run updates on these docs? Could you send the output of this command please?\n\n```\ncurl -s \"http://web245:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing,_version\"\n```\n", "created_at": "2014-12-09T13:44:59Z" }, { "body": "Of course they are, that's how routing works :)\n\nI didn't run any updates, because my code only does indexing. 
It doesn't even know ids that are assigned by elasticsearch.\n\n``` json\n{\n \"took\" : 51,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n }, {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n } ]\n }\n}\n```\n", "created_at": "2014-12-09T13:48:41Z" }, { "body": "Sorry @bobrik - I gave you the wrong request, it should be:\n\n```\ncurl -s \"http://web245:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing,version\"\n```\n\nAnd so you're using auto-assigned IDs? 
Did any of your shards migrate to other nodes, or did a primary fail during optimization?\n", "created_at": "2014-12-09T16:56:22Z" }, { "body": "I think this is caused by https://github.com/elasticsearch/elasticsearch/pull/7729 @bobrik are you coming from < 1.3.3 with this index and are you using bulk?\n", "created_at": "2014-12-09T17:05:25Z" }, { "body": "```\ncurl -s 'http://web605:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing,version'\n```\n\n``` json\n{\n \"took\" : 46,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n }, {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n } ]\n }\n}\n```\n\nI bet you wanted this:\n\n```\ncurl -s 'http://web605:9200/statistics-20141110/_search?pretty&q=_id:1jC2LxTjTMS1KHCn0Prf1w&explain&fields=_source,_routing' -d '{\"version\":true}'\n```\n\n``` json\n{\n \"took\" : 1,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"hits\" : {\n \"total\" : 2,\n \"max_score\" : 1.0,\n \"hits\" : [ {\n \"_shard\" : 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_version\" : 1,\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n }, {\n \"_shard\" 
: 3,\n \"_node\" : \"YOK_20U7Qee-XSasg0J8VA\",\n \"_index\" : \"statistics-20141110\",\n \"_type\" : \"events\",\n \"_id\" : \"1jC2LxTjTMS1KHCn0Prf1w\",\n \"_version\" : 1,\n \"_score\" : 1.0,\n \"_source\":{\"@timestamp\":\"2014-11-10T14:30:00+0300\",\"@key\":\"client_belarussia_msg_sended_from_mutual__22_1\",\"@value\":\"149\"},\n \"fields\" : {\n \"_routing\" : \"client_belarussia_msg_sended_from_mutual__22_1\"\n },\n \"_explanation\" : {\n \"value\" : 1.0,\n \"description\" : \"ConstantScore(_uid:events#1jC2LxTjTMS1KHCn0Prf1w _uid:markers#1jC2LxTjTMS1KHCn0Prf1w _uid:precise#1jC2LxTjTMS1KHCn0Prf1w _uid:rfm_users#1jC2LxTjTMS1KHCn0Prf1w), product of:\",\n \"details\" : [ {\n \"value\" : 1.0,\n \"description\" : \"boost\"\n }, {\n \"value\" : 1.0,\n \"description\" : \"queryNorm\"\n } ]\n }\n } ]\n }\n}\n```\n\nThere were many migrations, but not during optimization, unless es moves shards after new index is created. Basically at 00:00 new index is created and at 00:45 optimization for old indices starts.\n", "created_at": "2014-12-09T17:06:19Z" }, { "body": "do you have client nodes that are pre 1.3.3?\n", "created_at": "2014-12-09T17:09:22Z" }, { "body": "@s1monw index is created on 1.3.4:\n\n```\n[2014-09-30 12:03:49,991][INFO ][node ] [statistics04] version[1.3.3], pid[17937], build[ddf796d/2014-09-29T13:39:00Z]\n```\n\n```\n[2014-09-30 14:03:19,205][INFO ][node ] [statistics04] version[1.3.4], pid[89485], build[a70f3cc/2014-09-30T09:07:17Z]\n```\n\nNov 11 is definitely after Sep 30. Shouldn't be #7729 then.\n\nWe don't have client nodes, everything is over http. But yeah, we use bulk indexing and automatically assigned ids.\n", "created_at": "2014-12-09T17:12:53Z" }, { "body": "Hi @bobrik \n\n(you guessed right about `version=true` :) )\n\nOK - we're going to need more info. 
Please could you send:\n\n```\ncurl -s 'http://web605:9200/statistics-20141110/_settings?pretty'\ncurl -s 'http://web605:9200/statistics-20141110/_segments?pretty'\n```\n", "created_at": "2014-12-09T17:25:10Z" }, { "body": "``` json\n{\n \"statistics-20141110\" : {\n \"settings\" : {\n \"index\" : {\n \"codec\" : {\n \"bloom\" : {\n \"load\" : \"false\"\n }\n },\n \"uuid\" : \"JZXC-8C3TFC71EnMGMHSWw\",\n \"number_of_replicas\" : \"0\",\n \"number_of_shards\" : \"5\",\n \"version\" : {\n \"created\" : \"1030499\"\n }\n }\n }\n }\n}\n```\n\n``` json\n{\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 5,\n \"failed\" : 0\n },\n \"indices\" : {\n \"statistics-20141110\" : {\n \"shards\" : {\n \"0\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"hBg3FpLGQw6B9l-Hil2c8Q\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_gga\" : {\n \"generation\" : 21322,\n \"num_docs\" : 14939669,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1729206228,\n \"memory_in_bytes\" : 4943008,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_isc\" : {\n \"generation\" : 24348,\n \"num_docs\" : 10913518,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1254410507,\n \"memory_in_bytes\" : 4101712,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ],\n \"1\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"ajMe-w2lSIO0Tz5WEUs4qQ\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_7i7\" : {\n \"generation\" : 9727,\n \"num_docs\" : 7023269,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 803299557,\n \"memory_in_bytes\" : 2264472,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_i01\" : {\n \"generation\" : 23329,\n \"num_docs\" : 14689581,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1659303375,\n \"memory_in_bytes\" : 4788872,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ],\n \"2\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"hyUu93q7SRehHBVZfSmvOg\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_9wx\" : {\n \"generation\" : 12849,\n \"num_docs\" : 8995444,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1035711205,\n \"memory_in_bytes\" : 3326288,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_il1\" : {\n \"generation\" : 24085,\n \"num_docs\" : 13205585,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1510021893,\n \"memory_in_bytes\" : 4343736,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ],\n \"3\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"hyUu93q7SRehHBVZfSmvOg\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_8pc\" : {\n \"generation\" : 11280,\n \"num_docs\" : 10046395,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1143637974,\n \"memory_in_bytes\" : 4003824,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_hwt\" : {\n \"generation\" : 23213,\n \"num_docs\" : 13226096,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1485110397,\n \"memory_in_bytes\" : 4287544,\n \"committed\" 
: true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ],\n \"4\" : [ {\n \"routing\" : {\n \"state\" : \"STARTED\",\n \"primary\" : true,\n \"node\" : \"hyUu93q7SRehHBVZfSmvOg\"\n },\n \"num_committed_segments\" : 2,\n \"num_search_segments\" : 2,\n \"segments\" : {\n \"_91i\" : {\n \"generation\" : 11718,\n \"num_docs\" : 8328558,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 953452801,\n \"memory_in_bytes\" : 2822712,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n },\n \"_hms\" : {\n \"generation\" : 22852,\n \"num_docs\" : 14848927,\n \"deleted_docs\" : 0,\n \"size_in_bytes\" : 1673336536,\n \"memory_in_bytes\" : 4777472,\n \"committed\" : true,\n \"search\" : true,\n \"version\" : \"4.9.0\",\n \"compound\" : false\n }\n }\n } ]\n }\n }\n }\n}\n```\n", "created_at": "2014-12-09T18:18:44Z" }, { "body": "Reopen because the test added with #9125 just failed and the failure is reproducible (about 1/10 runs with same seed and added stress), see http://build-us-00.elasticsearch.org/job/es_core_master_window-2012/725/ \n", "created_at": "2015-01-02T18:23:22Z" }, { "body": "We've just seen this issue for the second time. The first time produced only a single duplicate; this time produced over 16000, across a comparatively tiny index (< 300k docs). We're using 1.3.4, doing bulk indexing with the Java client API's `BulkProcessor` and `TransportClient`. \n\nHowever, we're **not** using autogenerated ids, so from my reading of the fix for this issue it's unlikely to help us. Should I open a separate issue, or should this one be reopened?\n\nMiscellanous other info:\n- The index has not been migrated from an earlier version.\n- Around the time the duplicates appeared, we saw problems in other (non-Elastic) parts of the system. I can't see any way that they could directly cause the duplication, but it's possible that network issues were the common cause of both.\n- We still have the index containing duplicates for now, though it may not last long; this is on an alpha cluster that gets reset fairly often.\n- I'm very much a newbie to Elastic, so may be missing something obvious.\n", "created_at": "2015-04-08T14:59:55Z" }, { "body": "@mrec It would be great if you could open a new issue. Please also add a query that finds duplicates together with `?explain=true` option set and the output of that query like above. \nSomething like: \n\n```\ncurl -s 'http://HOST:PORT/YOURINDEX/_search?pretty&q=_id:A_DUPLICATE_ID&explain&fields=_source,_routing' -d '{\"version\":true}'\n```\n\nIs there a way that you can make available the elasticsearch logs from the time where you had the network issues?\nAlso, the output of\ncurl -s 'http://web605:9200/statistics-20141110/_segments?pretty' might be helpful.\n", "created_at": "2015-04-08T16:26:55Z" } ], "number": 8788, "title": "Duplicate id in index" }
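Because the discussion above relies on hand-picked ids, a quick way to check whether an index is affected at all is to fetch only the `_id` field and count repeats, which is essentially what the regression test added for this issue does. Below is a hedged Java sketch of that check; the client wiring, index name and size limit are assumptions, and a very large index would need scan/scroll rather than one big search.

```java
import java.util.HashSet;
import java.util.Set;

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.SearchHit;

final class DuplicateIdCheck {

    // Returns the number of hits whose _id was already seen in this result set.
    static long countDuplicateIds(Client client, String index, int maxDocs) {
        SearchResponse response = client.prepareSearch(index)
                .setSize(maxDocs)
                .addField("_id")
                .execute().actionGet();
        Set<String> seen = new HashSet<>();
        long duplicates = 0;
        for (SearchHit hit : response.getHits().getHits()) {
            if (seen.add(hit.getId()) == false) {
                duplicates++;
            }
        }
        return duplicates;
    }
}
```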
{ "body": "If bulk index request fails due to a disconnect, unavailable shard etc, the request is\nretried before actually failing. However, even in case of failure the documents\nmight already be indexed. For autogenerated ids the request must not add the\ndocuments again and therefore canHaveDuplicates must be set to true.\n\nThere are two options to fix this in this pull request. \n1. Just add the flag canHaveDuplicates to the request (30fac69). The user then gets a version conflict in the bulk response (see https://github.com/brwe/elasticsearch/commit/30fac69e1a5ae704845094b6e8f8bf194d2b5e0a#diff-38d197a0e6b168e3ca10ca0553b238adR111) which is confusing since the document cannot have been there before if the id was autogenerated in case the user only uses auto generated ids.\n2. We additionally prevent the version conflict by checking in InternalEngine for versions and autogenerated ids (9802466, see https://github.com/brwe/elasticsearch/commit/980246615725d2420c56a31171d4a1e93f47b722#diff-5666f4a211c1b7bbaf76addf0c27451bR504). However, this means that if there is an actual version conflict because a document was added before with this id, then the user will not be notified. \n\nI am unsure which of the two is the lesser evil. \n\ncloses #8788\n", "number": 9125, "review_comments": [ { "body": "for master you don't need to specify the gateway.type we only have what used to be local!\n", "created_at": "2015-01-02T13:22:21Z" }, { "body": "can this be a constant somehow? I wonder if we have it as a constant already somewhere?\n", "created_at": "2015-01-02T13:22:52Z" }, { "body": "can this be a debug log?\n", "created_at": "2015-01-02T13:23:06Z" }, { "body": "can we but some more info into the fail message here?\n", "created_at": "2015-01-02T13:24:35Z" }, { "body": "you can simplify this a bit here:\n\n``` Java\n\nNodeStats unluckyNode = randomFrom(Iterables.toArray(nodestats.getNodes()));\n```\n", "created_at": "2015-01-02T13:28:59Z" }, { "body": "can you make the `exceptionThrown` and AtomicBoolean and define it inside the test such that it's scope is clear?\n", "created_at": "2015-01-02T13:30:32Z" }, { "body": "Ok, will make a separate commit to remove everywhere.\n", "created_at": "2015-01-02T13:34:11Z" }, { "body": "can't you just make the `ACTION_NAME` constant public?\n", "created_at": "2015-01-02T13:56:10Z" }, { "body": "extra newline.... I think this optimization deserves a rather longish comment. can you add it?\n", "created_at": "2015-01-02T13:56:42Z" }, { "body": "can you add a comment on why we have this check?\n", "created_at": "2015-01-02T13:56:55Z" } ], "title": "Prevent creation of duplicates on bulk indexing with auto generated ids and retry" }
{ "commits": [ { "message": "[index] Prevent duplication of documents when retry indexing after fail\n\nIf bulk index request fails due to a disconnect, unavailable shard etc, the request is\nretried once before actually failing. However, even in case of failure the documents\nmight already be indexed. For autogenerated ids the request must not add the\ndocuments again and therfore canHaveDuplicates must be set to true.\n\ncloses #8788" }, { "message": "Remove the conversion form index to update request." }, { "message": "Revert \"Remove the conversion form index to update request.\"\n\nThis reverts commit 30fac69e1a5ae704845094b6e8f8bf194d2b5e0a." }, { "message": "format" }, { "message": "use action name from TransportShardBulkAction" }, { "message": "use AtomicBoolean" }, { "message": "DEBUG log" }, { "message": "more descriptive fail" }, { "message": "simplify node pick" }, { "message": "add comment on why we need to check the version etc in InternalEngin" }, { "message": "make ACTION_NAME public" } ], "files": [ { "diff": "@@ -73,7 +73,7 @@ public class TransportShardBulkAction extends TransportShardReplicationOperation\n private final static String OP_TYPE_UPDATE = \"update\";\n private final static String OP_TYPE_DELETE = \"delete\";\n \n- private static final String ACTION_NAME = BulkAction.NAME + \"[s]\";\n+ public static final String ACTION_NAME = BulkAction.NAME + \"[s]\";\n \n private final MappingUpdatedAction mappingUpdatedAction;\n private final UpdateHelper updateHelper;", "filename": "src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java", "status": "modified" }, { "diff": "@@ -443,6 +443,7 @@ public void handleException(TransportException exp) {\n if (exp.unwrapCause() instanceof ConnectTransportException || exp.unwrapCause() instanceof NodeClosedException ||\n retryPrimaryException(exp)) {\n primaryOperationStarted.set(false);\n+ internalRequest.request().setCanHaveDuplicates();\n // we already marked it as started when we executed it (removed the listener) so pass false\n // to re-add to the cluster listener\n logger.trace(\"received an error from node the primary was assigned to ({}), scheduling a retry\", exp.getMessage());", "filename": "src/main/java/org/elasticsearch/action/support/replication/TransportShardReplicationOperationAction.java", "status": "modified" }, { "diff": "@@ -501,6 +501,17 @@ private void innerCreateNoLock(Create create, IndexWriter writer, long currentVe\n // #7142: the primary already determined it's OK to index this document, and we confirmed above that the version doesn't\n // conflict, so we must also update here on the replica to remain consistent:\n doUpdate = true;\n+ } else if (create.origin() == Operation.Origin.PRIMARY && create.autoGeneratedId() && create.canHaveDuplicates() && currentVersion == 1 && create.version() == Versions.MATCH_ANY) {\n+ /**\n+ * If bulk index request fails due to a disconnect, unavailable shard etc. then the request is\n+ * retried before it actually fails. 
However, the documents might already be indexed.\n+ * For autogenerated ids this means that a version conflict will be reported in the bulk request\n+ * although the document was indexed properly.\n+ * To avoid this we have to make sure that the index request is treated as an update and set updatedVersion to 1.\n+ * See also discussion on https://github.com/elasticsearch/elasticsearch/pull/9125\n+ */\n+ doUpdate = true;\n+ updatedVersion = 1;\n } else {\n // On primary, we throw DAEE if the _uid is already in the index with an older version:\n assert create.origin() == Operation.Origin.PRIMARY;", "filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java", "status": "modified" }, { "diff": "@@ -0,0 +1,135 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.store;\n+\n+import com.google.common.collect.Iterables;\n+import com.google.common.collect.Lists;\n+import org.elasticsearch.action.admin.cluster.node.stats.NodeStats;\n+import org.elasticsearch.action.admin.cluster.node.stats.NodesStatsResponse;\n+import org.elasticsearch.action.bulk.BulkItemResponse;\n+import org.elasticsearch.action.bulk.BulkRequestBuilder;\n+import org.elasticsearch.action.bulk.BulkResponse;\n+import org.elasticsearch.action.bulk.TransportShardBulkAction;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.discovery.Discovery;\n+import org.elasticsearch.search.SearchHit;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.transport.MockTransportService;\n+import org.elasticsearch.transport.*;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.*;\n+import java.util.concurrent.ExecutionException;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n+import static org.hamcrest.CoreMatchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n+\n+@ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.SUITE)\n+public class ExceptionRetryTests extends ElasticsearchIntegrationTest {\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return ImmutableSettings.builder()\n+ .put(super.nodeSettings(nodeOrdinal)).put(\"gateway.type\", \"local\")\n+ .put(TransportModule.TRANSPORT_SERVICE_TYPE_KEY, 
MockTransportService.class.getName())\n+ .build();\n+ }\n+\n+ /**\n+ * Tests retry mechanism when indexing. If an exception occurs when indexing then the indexing request is tried again before finally failing.\n+ * If auto generated ids are used this must not lead to duplicate ids\n+ * see https://github.com/elasticsearch/elasticsearch/issues/8788\n+ */\n+ @Test\n+ public void testRetryDueToExceptionOnNetworkLayer() throws ExecutionException, InterruptedException, IOException {\n+ final AtomicBoolean exceptionThrown = new AtomicBoolean(false);\n+ int numDocs = scaledRandomIntBetween(100, 1000);\n+ NodesStatsResponse nodeStats = client().admin().cluster().prepareNodesStats().get();\n+ NodeStats unluckyNode = randomFrom(nodeStats.getNodes());\n+ assertAcked(client().admin().indices().prepareCreate(\"index\"));\n+ ensureGreen(\"index\");\n+\n+ //create a transport service that throws a ConnectTransportException for one bulk request and therefore triggers a retry.\n+ for (NodeStats dataNode : nodeStats.getNodes()) {\n+ MockTransportService mockTransportService = ((MockTransportService) internalCluster().getInstance(TransportService.class, dataNode.getNode().name()));\n+ mockTransportService.addDelegate(internalCluster().getInstance(Discovery.class, unluckyNode.getNode().name()).localNode(), new MockTransportService.DelegateTransport(mockTransportService.original()) {\n+\n+ @Override\n+ public void sendRequest(DiscoveryNode node, long requestId, String action, TransportRequest request, TransportRequestOptions options) throws IOException, TransportException {\n+ super.sendRequest(node, requestId, action, request, options);\n+ if (action.equals(TransportShardBulkAction.ACTION_NAME) && !exceptionThrown.get()) {\n+ logger.debug(\"Throw ConnectTransportException\");\n+ exceptionThrown.set(true);\n+ throw new ConnectTransportException(node, action);\n+ }\n+ }\n+ });\n+ }\n+\n+ BulkRequestBuilder bulkBuilder = client().prepareBulk();\n+ for (int i = 0; i < numDocs; i++) {\n+ XContentBuilder doc = null;\n+ doc = jsonBuilder().startObject().field(\"foo\", \"bar\").endObject();\n+ bulkBuilder.add(client().prepareIndex(\"index\", \"type\").setSource(doc));\n+ }\n+\n+ BulkResponse response = bulkBuilder.get();\n+ if (response.hasFailures()) {\n+ for (BulkItemResponse singleIndexRespons : response.getItems()) {\n+ if (singleIndexRespons.isFailed()) {\n+ fail(\"None of the bulk items should fail but got \" + singleIndexRespons.getFailureMessage());\n+ }\n+ }\n+ }\n+\n+ refresh();\n+ SearchResponse searchResponse = client().prepareSearch(\"index\").setSize(numDocs * 2).addField(\"_id\").get();\n+\n+ Set<String> uniqueIds = new HashSet();\n+ long dupCounter = 0;\n+ boolean found_duplicate_already = false;\n+ for (int i = 0; i < searchResponse.getHits().getHits().length; i++) {\n+ if (!uniqueIds.add(searchResponse.getHits().getHits()[i].getId())) {\n+ if (!found_duplicate_already) {\n+ SearchResponse dupIdResponse = client().prepareSearch(\"index\").setQuery(termQuery(\"_id\", searchResponse.getHits().getHits()[i].getId())).setExplain(true).get();\n+ assertThat(dupIdResponse.getHits().totalHits(), greaterThan(1l));\n+ logger.info(\"found a duplicate id:\");\n+ for (SearchHit hit : dupIdResponse.getHits()) {\n+ logger.info(\"Doc {} was found on shard {}\", hit.getId(), hit.getShard().getShardId());\n+ }\n+ logger.info(\"will not print anymore in case more duplicates are found.\");\n+ found_duplicate_already = true;\n+ }\n+ dupCounter++;\n+ }\n+ }\n+ assertSearchResponse(searchResponse);\n+ assertThat(dupCounter, 
equalTo(0l));\n+ assertHitCount(searchResponse, numDocs);\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/store/ExceptionRetryTests.java", "status": "added" } ] }
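The InternalEngine change in the diff above only downgrades a create to an update when a specific combination of conditions holds: primary origin, an auto-generated id, the canHaveDuplicates flag set by the retry, an existing document at version 1, and no explicit version on the request. The following is a standalone sketch of just that boolean check; the class name, enum, and MATCH_ANY stand-in constant are illustrative, not Elasticsearch's actual types or values.

``` java
// Standalone sketch of the duplicate-suppression check added to InternalEngine in the
// diff above. RetryCheck, Origin and the MATCH_ANY constant are made-up stand-ins;
// only the boolean condition mirrors the patch.
public class RetryCheck {
    enum Origin { PRIMARY, REPLICA }
    static final long MATCH_ANY = -3L; // arbitrary stand-in for Versions.MATCH_ANY, not the real value

    static boolean treatCreateAsUpdate(Origin origin, boolean autoGeneratedId,
                                       boolean canHaveDuplicates,
                                       long currentVersion, long requestVersion) {
        // A retried bulk request with an auto-generated id may find its document already
        // indexed (currentVersion == 1). Instead of reporting a spurious version conflict,
        // the create is treated as an update of that document.
        return origin == Origin.PRIMARY
                && autoGeneratedId
                && canHaveDuplicates
                && currentVersion == 1
                && requestVersion == MATCH_ANY;
    }

    public static void main(String[] args) {
        // Retried request, document already present from the first delivery -> update, no conflict.
        System.out.println(treatCreateAsUpdate(Origin.PRIMARY, true, true, 1, MATCH_ANY));  // true
        // Same situation but with a user-supplied id -> fall through to normal conflict handling.
        System.out.println(treatCreateAsUpdate(Origin.PRIMARY, false, true, 1, MATCH_ANY)); // false
    }
}
```

As the PR body notes, the trade-off of this branch is that a genuine conflict on a pre-existing document with the same id would no longer be reported to the user.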
{ "body": "Hi, \n\nWe recently upgraded from v1.3.2 to v1.4.1. A `function_score` query using `script_score` is now failing when using `>` or `<` operators on the `_score`. Our script worked fine in 1.3, but after upgrading to 1.4.1 it fails with the following `GroovyRuntimeException`:\n\n> QueryPhaseExecutionException[[script_score_test][3]: query[filtered(function score (ConstantScore(_:_),function=script[if (_score > max_val) {0.0} else {_score}], params [{max_val=10.0}]))->cache(_type:my_type)],from[0],size[10]: Query Failed [Failed to execute main query]]; nested: GroovyScriptExecutionException[GroovyRuntimeException[Cannot compare org.elasticsearch.script.ScoreAccessor with value 'org.elasticsearch.script.ScoreAccessor@2553f628' and java.lang.Double with value '10.0']];\n\nNot sure if this is a bug in Groovy or we need to change our script syntax to be compatible. Here are some sample steps to reproduce.\n\n``` javascript\nDELETE /script_score_test\nPUT /script_score_test/my_type/1\n{\n \"prop1\" : 5.3,\n \"prop2\" : 6.7\n}\nPUT /script_score_test/my_type/2\n{\n \"prop1\" : 9.6,\n \"prop2\" : 15.4\n}\n```\n\nThe following query fails:\n\n``` javascript\nGET /script_score_test/my_type/_search\n{\n \"query\": {\n \"function_score\": {\n \"boost_mode\": \"replace\",\n \"script_score\": {\n \"script\": \"if (_score < max_val) {0.0} else {_score}\",\n \"params\": {\n \"max_val\": 10\n }\n }\n }\n }\n}\n```\n\nInterestingly, the query works when using the `==` operator. :\n\n``` javascript\nGET /script_score_test/my_type/_search\n{\n \"query\": {\n \"function_score\": {\n \"boost_mode\": \"replace\",\n \"script_score\": {\n \"script\": \"if (_score == max_val) {0.0} else {_score}\",\n \"params\": {\n \"max_val\": 10\n }\n }\n }\n }\n}\n```\n\nAlso, using `compareTo` seems to work:\n\n``` javascript\nGET /script_score_test/my_type/_search\n{\n \"query\": {\n \"function_score\": {\n \"boost_mode\": \"replace\",\n \"script_score\": {\n \"script\": \"if (_score.compareTo(max_val) < 0) {0.0} else {_score}\",\n \"params\": {\n \"max_val\": 10\n }\n }\n }\n }\n}\n```\n\nWe're OK with updating our scripts to use `compareTo` to make everything work, but we'd prefer to use the simpler `<` `>` syntax. I wanted to file this issue in case it's an actual bug or in case someone else comes across it.\n", "comments": [ { "body": "@dakrone do you have any ideas? I made the script accessor a `Number`, and it seemed like that should work the same as the boxed types, but it doesn't appear to in this case? 
I also am skeptical that `==` is not just comparing reference equality of the boxed value with the script accessor?\n", "created_at": "2014-12-08T21:43:57Z" }, { "body": "@rjernst the ScriptAccessor needs to implement Comparable<Number> in order to work with `<` and `>`, this seems to fix it for me:\n\n``` diff\ndiff --git a/src/main/java/org/elasticsearch/script/ScoreAccessor.java b/src/main/java/org/elasticsearch/script/ScoreAccessor.java\nindex 93536e5..e8c4333 100644\n--- a/src/main/java/org/elasticsearch/script/ScoreAccessor.java\n+++ b/src/main/java/org/elasticsearch/script/ScoreAccessor.java\n@@ -30,7 +30,7 @@ import java.io.IOException;\n * The provided {@link DocLookup} is used to retrieve the score\n * for the current document.\n */\n-public final class ScoreAccessor extends Number {\n+public final class ScoreAccessor extends Number implements Comparable<Number> {\n\n Scorer scorer;\n\n@@ -65,4 +65,9 @@ public final class ScoreAccessor extends Number {\n public double doubleValue() {\n return score();\n }\n+\n+ @Override\n+ public int compareTo(Number o) {\n+ return Float.compare(this.score(), o.floatValue());\n+ }\n }\n```\n\nAnd then the `_score < max_val` worked.\n", "created_at": "2014-12-09T13:46:14Z" }, { "body": "+1 for @dakrone fix\n", "created_at": "2014-12-09T15:03:18Z" } ], "number": 8828, "title": "script_score comparison operators failing on 1.4.1" }
{ "body": "closes #8828\n", "number": 9094, "review_comments": [], "title": "Make _score in groovy scripts comparable" }
{ "commits": [ { "message": "Scripting: Make _score in groovy scripts comparable\n\ncloses #8828\ncloses #9094" } ], "files": [ { "diff": "@@ -30,7 +30,7 @@\n * The provided {@link DocLookup} is used to retrieve the score\n * for the current document.\n */\n-public final class ScoreAccessor extends Number {\n+public final class ScoreAccessor extends Number implements Comparable<Number> {\n \n Scorer scorer;\n \n@@ -65,4 +65,9 @@ public float floatValue() {\n public double doubleValue() {\n return score();\n }\n+\n+ @Override\n+ public int compareTo(Number o) {\n+ return Float.compare(this.score(), o.floatValue());\n+ }\n }", "filename": "src/main/java/org/elasticsearch/script/ScoreAccessor.java", "status": "modified" }, { "diff": "@@ -117,20 +117,33 @@ public void testGroovyScriptAccess() {\n client().prepareIndex(\"test\", \"doc\", \"3\").setSource(\"foo\", \"dog spiders that can eat a dog\", \"bar\", 3).get();\n refresh();\n \n- // _score access\n- SearchResponse resp = client().prepareSearch(\"test\").setQuery(functionScoreQuery(matchQuery(\"foo\", \"dog\"))\n- .add(scriptFunction(\"_score\", \"groovy\"))\n- .boostMode(CombineFunction.REPLACE)).get();\n+ // doc[] access\n+ SearchResponse resp = client().prepareSearch(\"test\").setQuery(functionScoreQuery(matchAllQuery())\n+ .add(scriptFunction(\"doc['bar'].value\", \"groovy\"))\n+ .boostMode(CombineFunction.REPLACE)).get();\n \n assertNoFailures(resp);\n- assertSearchHits(resp, \"3\", \"1\");\n+ assertOrderedSearchHits(resp, \"3\", \"2\", \"1\");\n+ }\n+ \n+ public void testScoreAccess() {\n+ client().prepareIndex(\"test\", \"doc\", \"1\").setSource(\"foo\", \"quick brow fox jumped over the lazy dog\", \"bar\", 1).get();\n+ client().prepareIndex(\"test\", \"doc\", \"2\").setSource(\"foo\", \"fast jumping spiders\", \"bar\", 2).get();\n+ client().prepareIndex(\"test\", \"doc\", \"3\").setSource(\"foo\", \"dog spiders that can eat a dog\", \"bar\", 3).get();\n+ refresh();\n \n- // doc[] access\n- resp = client().prepareSearch(\"test\").setQuery(functionScoreQuery(matchAllQuery())\n- .add(scriptFunction(\"doc['bar'].value\", \"groovy\"))\n- .boostMode(CombineFunction.REPLACE)).get();\n+ // _score can be accessed\n+ SearchResponse resp = client().prepareSearch(\"test\").setQuery(functionScoreQuery(matchQuery(\"foo\", \"dog\"))\n+ .add(scriptFunction(\"_score\", \"groovy\"))\n+ .boostMode(CombineFunction.REPLACE)).get();\n+ assertNoFailures(resp);\n+ assertSearchHits(resp, \"3\", \"1\");\n \n+ // _score is comparable\n+ resp = client().prepareSearch(\"test\").setQuery(functionScoreQuery(matchQuery(\"foo\", \"dog\"))\n+ .add(scriptFunction(\"_score > 0 ? _score : 0\", \"groovy\"))\n+ .boostMode(CombineFunction.REPLACE)).get();\n assertNoFailures(resp);\n- assertOrderedSearchHits(resp, \"3\", \"2\", \"1\");\n+ assertSearchHits(resp, \"3\", \"1\");\n }\n }", "filename": "src/test/java/org/elasticsearch/script/GroovyScriptTests.java", "status": "modified" } ] }
{ "body": "This is likely related to `spatial4j`. There is an inconsistency _[bug?]_ in the way the `envelope` `geo-shape` type is being validated.\n\n[The docs](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-geo-shape-type.html#_envelope) say that the `envelope` format `\"consists of coordinates for upper left and lower right points of the shape to represent a bounding rectangle\"`.\n\nHowever, being human it's pretty easy to screw this up, of the 4 different ways you can specify the co-ordinates; one is correct, two throw validation errors and one bites you every time by not throwing and messing up all your results.\n\nBelow I've attached a minimal testcase which outlines the 3 different ways you could screw up specifying the `envelope` and the status code and error produced for each.\n\nI would expect the **last request** to error the same as the previous 2 do with a message like `\"maxX must be >= minX: 1.0 to -1.0\"`\n\nhere's an example of the sort of confusion this causes:\n- https://github.com/elasticsearch/elasticsearch/issues/9079\n- https://github.com/elasticsearch/elasticsearch/issues/9067\n\nThere's actually a bunch more like this in the issue tracker so it's pretty `low-hanging-fruit` for resolving much frustration, complaining and time-wasting with GIS bbox queries in ES.\n\n``` bash\n#!/bin/bash\n\n################################################\n# geo_shape envelope validation\n################################################\n\nES='localhost:9200';\n\n# drop index\ncurl -XDELETE \"$ES/foo?pretty=true\" 2>/dev/null;\n\n# create index\ncurl -XPUT \"$ES/foo?pretty=true\" -d'\n{ \n \"settings\": {\n \"index\": {\n \"number_of_shards\": 1,\n \"number_of_replicas\": 0\n }\n },\n \"mappings\": {\n \"bar\": {\n \"properties\": {\n \"polys\" : {\n \"type\" : \"geo_shape\",\n \"tree\" : \"quadtree\",\n \"tree_levels\" : \"26\"\n }\n }\n }\n }\n}';\n\n# index a single polygon, so the index isn't empty\ncurl -XPOST \"$ES/foo/bar/1?pretty=true\" -d'\n{\n \"polys\":{\n \"type\":\"Polygon\",\n \"coordinates\":[[\n [\"-1\",\"1\"],\n [\"-1\",\"-1\"],\n [\"1\",\"-1\"],\n [\"1\",\"1\"],\n [\"-1\",\"1\"]\n ]]\n }\n}';\n\n# top-left / bottom-right (correct)\n#200 OK\ncurl -XPOST \"$ES/foo/_search?pretty=true\" -d'\n{\n \"query\": {\n \"geo_shape\": {\n \"polys\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [\n [\"-1\", \"1\"],\n [\"1\", \"-1\"]\n ]\n }\n }\n }\n }\n}';\n\n# bottom-left / top-right (wrong)\n#400 InvalidShapeException[maxY must be >= minY: 1.0 to -1.0]\ncurl -XPOST \"$ES/foo/_search?pretty=true\" -d'\n{\n \"query\": {\n \"geo_shape\": {\n \"polys\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [\n [\"-1\", \"-1\"],\n [\"1\", \"1\"]\n ]\n }\n }\n }\n }\n}';\n\n# bottom-right / top-left (wrong)\n#400 InvalidShapeException[maxY must be >= minY: 1.0 to -1.0]\ncurl -XPOST \"$ES/foo/_search?pretty=true\" -d'\n{\n \"query\": {\n \"geo_shape\": {\n \"polys\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [\n [\"1\", \"-1\"],\n [\"-1\", \"1\"]\n ]\n }\n }\n }\n }\n}';\n\n# top-right / bottom-left (wrong)\n#200 OK\ncurl -XPOST \"$ES/foo/_search?pretty=true\" -d'\n{\n \"query\": {\n \"geo_shape\": {\n \"polys\": {\n \"shape\": {\n \"type\": \"envelope\",\n \"coordinates\": [\n [\"1\", \"1\"],\n [\"-1\", \"-1\"]\n ]\n }\n }\n }\n }\n}';\n```\n", "comments": [], "number": 9080, "title": "[GEO] GIS envelope validation" }
{ "body": "ShapeBuilder expected coordinates for Envelope types in strict Top-Left, Bottom-Right order. Given that GeoJSON does not enforce coordinate order (as seen in #8672) clients could specify envelope bounds in any order and be compliant with the GeoJSON spec but not the ES ShapeBuilder logic. This change loosens the ShapeBuilder requirements on envelope coordinate order, reordering where necessary.\n\ncloses #2544\ncloses #9067\ncloses #9079\ncloses #9080\n", "number": 9091, "review_comments": [ { "body": "Personally I like some called \"validate\" to only do validation, but it seems like this is also doing some logic for the order of the points. You might consider moving these last lines into `parseEnvelope`, and then you don't need the `points` array. I think the function could then be simplified to:\n\n```\nvalidateEnvelopeNode(coordinates);\nCoordinate uL = coordinates.children.get(0).coordinate;\nCoordinate lR = coordinates.children.get(1).coordinate;\nif (lR.x > uL.x && uL.y > lR.y) {\n Pair<Pair, Pair> range = Edge.range(points, 0, 2);\n uL = ...;\n lR = ...;\n}\nreturn newEnvelope(orientation).topLeft(uL).bottomRight(lR);\n```\n\nAnd at that point I'm not sure the point of having a separate validate function?\n", "created_at": "2014-12-30T06:18:48Z" }, { "body": "That's a good point. Technically 'validate' is semantically incorrect since coordinate order/size does not make an invalid GeoJSON. It could have been named verifyEnvelopeNode. Nevertheless, the more I think about it, the more I like the idea of removing the method all together and just having all the logic in parseEnvelope. This isn't a big enough change with reusable logic to warrant separate methods anyway. \n", "created_at": "2014-12-30T14:22:30Z" } ], "title": "GIS envelope validation" }
{ "commits": [ { "message": "[GEO] GIS envelope validation\n\nShapeBuilder expected coordinates for Envelope types in strict Top-Left, Bottom-Right order. Given that GeoJSON does not enforce coordinate order (as seen in #8672) clients could specify envelope bounds in any order and be compliant with the GeoJSON spec but not the ES ShapeBuilder logic. This change loosens the ShapeBuilder requirements on envelope coordinate order, reordering where necessary.\n\ncloses #2544\ncloses #9067\ncloses #9079\ncloses #9080" } ], "files": [ { "diff": "@@ -787,8 +787,20 @@ protected static CircleBuilder parseCircle(CoordinateNode coordinates, Distance\n }\n \n protected static EnvelopeBuilder parseEnvelope(CoordinateNode coordinates, Orientation orientation) {\n- return newEnvelope(orientation).\n- topLeft(coordinates.children.get(0).coordinate).bottomRight(coordinates.children.get(1).coordinate);\n+ // validate the coordinate array for envelope type\n+ if (coordinates.children.size() != 2) {\n+ throw new ElasticsearchParseException(\"Invalid number of points (\" + coordinates.children.size() + \") provided for \" +\n+ \"geo_shape ('envelope') when expecting an array of 2 coordinates\");\n+ }\n+ // verify coordinate bounds, correct if necessary\n+ Coordinate uL = coordinates.children.get(0).coordinate;\n+ Coordinate lR = coordinates.children.get(1).coordinate;\n+ if (((lR.x < uL.x) || (uL.y < lR.y))) {\n+ Coordinate uLtmp = uL;\n+ uL = new Coordinate(Math.min(uL.x, lR.x), Math.max(uL.y, lR.y));\n+ lR = new Coordinate(Math.max(uLtmp.x, lR.x), Math.min(uLtmp.y, lR.y));\n+ }\n+ return newEnvelope(orientation).topLeft(uL).bottomRight(lR);\n }\n \n protected static void validateMultiPointNode(CoordinateNode coordinates) {", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -121,6 +121,7 @@ public void testParse_circle() throws IOException {\n \n @Test\n public void testParse_envelope() throws IOException {\n+ // test #1: envelope with expected coordinate order (TopLeft, BottomRight)\n String multilinesGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"envelope\")\n .startArray(\"coordinates\")\n .startArray().value(-50).value(30).endArray()\n@@ -130,6 +131,38 @@ public void testParse_envelope() throws IOException {\n \n Rectangle expected = SPATIAL_CONTEXT.makeRectangle(-50, 50, -30, 30);\n assertGeometryEquals(expected, multilinesGeoJson);\n+\n+ // test #2: envelope with agnostic coordinate order (TopRight, BottomLeft)\n+ multilinesGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"envelope\")\n+ .startArray(\"coordinates\")\n+ .startArray().value(50).value(30).endArray()\n+ .startArray().value(-50).value(-30).endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ expected = SPATIAL_CONTEXT.makeRectangle(-50, 50, -30, 30);\n+ assertGeometryEquals(expected, multilinesGeoJson);\n+\n+ // test #3: \"envelope\" (actually a triangle) with invalid number of coordinates (TopRight, BottomLeft, BottomRight)\n+ multilinesGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"envelope\")\n+ .startArray(\"coordinates\")\n+ .startArray().value(50).value(30).endArray()\n+ .startArray().value(-50).value(-30).endArray()\n+ .startArray().value(50).value(-39).endArray()\n+ .endArray()\n+ .endObject().string();\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(multilinesGeoJson);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, 
ElasticsearchParseException.class);\n+\n+ // test #4: \"envelope\" with empty coordinates\n+ multilinesGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"envelope\")\n+ .startArray(\"coordinates\")\n+ .endArray()\n+ .endObject().string();\n+ parser = JsonXContent.jsonXContent.createParser(multilinesGeoJson);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" } ] }
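The parseEnvelope change above recomputes the two corners with min/max so that any corner ordering supplied by the client ends up as top-left / bottom-right. Below is a self-contained sketch of that normalization using plain double[] pairs instead of JTS Coordinate objects; the class name and sample values are illustrative. Note that while the patch only recomputes when the corners are out of order, an unconditional min/max produces the same result for any two corners.

``` java
// Standalone sketch of the corner normalization in parseEnvelope above, using
// plain {x, y} double arrays instead of JTS Coordinate objects (illustrative only).
public class EnvelopeCorners {
    /** Returns { {minX, maxY}, {maxX, minY} }, i.e. upper-left then lower-right. */
    static double[][] normalize(double[] a, double[] b) {
        double[] upperLeft  = { Math.min(a[0], b[0]), Math.max(a[1], b[1]) };
        double[] lowerRight = { Math.max(a[0], b[0]), Math.min(a[1], b[1]) };
        return new double[][] { upperLeft, lowerRight };
    }

    public static void main(String[] args) {
        // Corners given bottom-left / top-right, an ordering GeoJSON clients may legally use.
        double[][] corrected = normalize(new double[]{-50, -30}, new double[]{50, 30});
        // Prints [-50.0, 30.0] and [50.0, -30.0]: top-left then bottom-right.
        System.out.println(java.util.Arrays.toString(corrected[0]));
        System.out.println(java.util.Arrays.toString(corrected[1]));
    }
}
```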
{ "body": "It might happen that the coordinates for an envelope geo type arrive in top right bottom left. When that happens right now ES 0.20.2 the ES dies with oom. When you look at the hprof dump all you see is millions of org.elasticsearch.common.lucene.spatial.prefix.tree.GeohashPrefixTree$GhCell objects in my case that was about 6mio worth of them.\n\nI see two solutions to this:\n- be friendly and correct the coordinates to be in proper order top left, bottom right\n- return an error\n", "comments": [ { "body": "hey, I think this has been fixed but can you provide a testcase for this to make sure I am not missing it?\n", "created_at": "2013-03-02T15:58:37Z" }, { "body": "Hi, It has not been fixed.\ne.g. ,`\"coordinates\":[[150.5981,-30.548016666667],[150.06046666666,-31.037683333334]]` will still break the 0.90b1\n", "created_at": "2013-03-05T08:54:22Z" }, { "body": "Hi @mvrhov, can you provide an example how you use these coordinates. Do you use this in a filter or a query? I just tested it myself and I also do not receive any error response. So I'm going to fix this.\n", "created_at": "2013-06-10T09:29:03Z" }, { "body": "@chilling: I haven't tested this since 0.90b1 come out. However the oom was happening when inserting a new record.\n", "created_at": "2013-06-10T09:51:10Z" }, { "body": "Ok. I think currently this error will be ignored but I'm going to fixit. The solution will be returning an error rather than autocorrecting it.\n", "created_at": "2013-06-10T10:35:09Z" }, { "body": "ref: https://github.com/elasticsearch/elasticsearch/issues/9080\n", "created_at": "2014-12-28T19:25:41Z" } ], "number": 2544, "title": "[crash] ES should reorder envelope corners" }
{ "body": "ShapeBuilder expected coordinates for Envelope types in strict Top-Left, Bottom-Right order. Given that GeoJSON does not enforce coordinate order (as seen in #8672) clients could specify envelope bounds in any order and be compliant with the GeoJSON spec but not the ES ShapeBuilder logic. This change loosens the ShapeBuilder requirements on envelope coordinate order, reordering where necessary.\n\ncloses #2544\ncloses #9067\ncloses #9079\ncloses #9080\n", "number": 9091, "review_comments": [ { "body": "Personally I like some called \"validate\" to only do validation, but it seems like this is also doing some logic for the order of the points. You might consider moving these last lines into `parseEnvelope`, and then you don't need the `points` array. I think the function could then be simplified to:\n\n```\nvalidateEnvelopeNode(coordinates);\nCoordinate uL = coordinates.children.get(0).coordinate;\nCoordinate lR = coordinates.children.get(1).coordinate;\nif (lR.x > uL.x && uL.y > lR.y) {\n Pair<Pair, Pair> range = Edge.range(points, 0, 2);\n uL = ...;\n lR = ...;\n}\nreturn newEnvelope(orientation).topLeft(uL).bottomRight(lR);\n```\n\nAnd at that point I'm not sure the point of having a separate validate function?\n", "created_at": "2014-12-30T06:18:48Z" }, { "body": "That's a good point. Technically 'validate' is semantically incorrect since coordinate order/size does not make an invalid GeoJSON. It could have been named verifyEnvelopeNode. Nevertheless, the more I think about it, the more I like the idea of removing the method all together and just having all the logic in parseEnvelope. This isn't a big enough change with reusable logic to warrant separate methods anyway. \n", "created_at": "2014-12-30T14:22:30Z" } ], "title": "GIS envelope validation" }
{ "commits": [ { "message": "[GEO] GIS envelope validation\n\nShapeBuilder expected coordinates for Envelope types in strict Top-Left, Bottom-Right order. Given that GeoJSON does not enforce coordinate order (as seen in #8672) clients could specify envelope bounds in any order and be compliant with the GeoJSON spec but not the ES ShapeBuilder logic. This change loosens the ShapeBuilder requirements on envelope coordinate order, reordering where necessary.\n\ncloses #2544\ncloses #9067\ncloses #9079\ncloses #9080" } ], "files": [ { "diff": "@@ -787,8 +787,20 @@ protected static CircleBuilder parseCircle(CoordinateNode coordinates, Distance\n }\n \n protected static EnvelopeBuilder parseEnvelope(CoordinateNode coordinates, Orientation orientation) {\n- return newEnvelope(orientation).\n- topLeft(coordinates.children.get(0).coordinate).bottomRight(coordinates.children.get(1).coordinate);\n+ // validate the coordinate array for envelope type\n+ if (coordinates.children.size() != 2) {\n+ throw new ElasticsearchParseException(\"Invalid number of points (\" + coordinates.children.size() + \") provided for \" +\n+ \"geo_shape ('envelope') when expecting an array of 2 coordinates\");\n+ }\n+ // verify coordinate bounds, correct if necessary\n+ Coordinate uL = coordinates.children.get(0).coordinate;\n+ Coordinate lR = coordinates.children.get(1).coordinate;\n+ if (((lR.x < uL.x) || (uL.y < lR.y))) {\n+ Coordinate uLtmp = uL;\n+ uL = new Coordinate(Math.min(uL.x, lR.x), Math.max(uL.y, lR.y));\n+ lR = new Coordinate(Math.max(uLtmp.x, lR.x), Math.min(uLtmp.y, lR.y));\n+ }\n+ return newEnvelope(orientation).topLeft(uL).bottomRight(lR);\n }\n \n protected static void validateMultiPointNode(CoordinateNode coordinates) {", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -121,6 +121,7 @@ public void testParse_circle() throws IOException {\n \n @Test\n public void testParse_envelope() throws IOException {\n+ // test #1: envelope with expected coordinate order (TopLeft, BottomRight)\n String multilinesGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"envelope\")\n .startArray(\"coordinates\")\n .startArray().value(-50).value(30).endArray()\n@@ -130,6 +131,38 @@ public void testParse_envelope() throws IOException {\n \n Rectangle expected = SPATIAL_CONTEXT.makeRectangle(-50, 50, -30, 30);\n assertGeometryEquals(expected, multilinesGeoJson);\n+\n+ // test #2: envelope with agnostic coordinate order (TopRight, BottomLeft)\n+ multilinesGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"envelope\")\n+ .startArray(\"coordinates\")\n+ .startArray().value(50).value(30).endArray()\n+ .startArray().value(-50).value(-30).endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ expected = SPATIAL_CONTEXT.makeRectangle(-50, 50, -30, 30);\n+ assertGeometryEquals(expected, multilinesGeoJson);\n+\n+ // test #3: \"envelope\" (actually a triangle) with invalid number of coordinates (TopRight, BottomLeft, BottomRight)\n+ multilinesGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"envelope\")\n+ .startArray(\"coordinates\")\n+ .startArray().value(50).value(30).endArray()\n+ .startArray().value(-50).value(-30).endArray()\n+ .startArray().value(50).value(-39).endArray()\n+ .endArray()\n+ .endObject().string();\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(multilinesGeoJson);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, 
ElasticsearchParseException.class);\n+\n+ // test #4: \"envelope\" with empty coordinates\n+ multilinesGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"envelope\")\n+ .startArray(\"coordinates\")\n+ .endArray()\n+ .endObject().string();\n+ parser = JsonXContent.jsonXContent.createParser(multilinesGeoJson);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" } ] }
{ "body": "If I have 3 masters and accidentally set `minimum_master_nodes` to 4 dynamically, the cluster will stop. While the cluster is stopped, there's no way to update settings, so I have to do a full restart.\n\nIf I updated a persistent setting, then not even a full restart will fix the issue. I have to manually edit cluster state.\n\nYou could get out of this situation by adding master nodes until the new minimum is reached, but that's not a complete solution because I might have fat fingered `minimum_master_nodes` to 100 or something like that.\n\nI think it'd be worth adding some validation to ensure that an update to `minimum_master_nodes` won't accidentally put the cluster in an unrecoverable state.\n", "comments": [ { "body": "Agreed it's a pain. This is already fixed in https://github.com/elasticsearch/elasticsearch/pull/8321 , which will be released with 1.5.0 \n\nI'm closing this, but if you feel something is missing from that PR, please feel free to re-open.\n", "created_at": "2014-12-08T22:08:41Z" }, { "body": "Excellent. Thanks much @bleskes!\n", "created_at": "2014-12-08T22:13:54Z" }, { "body": "@bleskes it just occurred to me that restoring global state from a snapshot can update persistent settings:\n\nhttp://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html#_restore\n\n> The restored persistent settings are added to the existing persistent settings.\n\n@imotov what happens if a snapshot restores a persistent setting for `minimum_master_nodes` that is greater than the current master count?\n", "created_at": "2014-12-10T20:49:54Z" }, { "body": "@grantr yes, this indeed can be an issue. Thanks!\n", "created_at": "2014-12-24T01:12:15Z" }, { "body": "Thanks for fixing this @imotov!\n", "created_at": "2014-12-24T01:45:36Z" } ], "number": 8830, "title": "Restore process can restore incompatible `minimum_master_nodes` setting" }
{ "body": "Closes #8830\n", "number": 9051, "review_comments": [ { "body": "Duplicate \"gateway.type\"\n\nNote that gateway.local will be removed in #9128 \n", "created_at": "2015-01-06T10:02:35Z" }, { "body": "When global cluster state is restored and has many settings, can't this line be too much verbose?\n", "created_at": "2015-01-06T10:07:47Z" }, { "body": "For the setting to be logged here it had to be 1) dynamic when snapshot was made, 2) converted into non-dynamic when restore is made and 3) configured as a persistent setting in snapshotted cluster. Since removal of a dynamic setting is pretty rare event in elasticsearch codebase, nothing should be logged here under normal circumstances.\n", "created_at": "2015-01-09T20:44:59Z" }, { "body": "Good point. Removing. \n", "created_at": "2015-01-09T21:03:44Z" }, { "body": "Thanks for the explanation :)\n", "created_at": "2015-01-12T08:15:35Z" } ], "title": "Add validation of restored persistent settings" }
{ "commits": [ { "message": "Snapshot/Restore: add validation of restored persistent settings\n\nCloses #8830" } ], "files": [ { "diff": "@@ -186,7 +186,7 @@ public ClusterState execute(final ClusterState currentState) {\n ImmutableSettings.Builder transientSettings = ImmutableSettings.settingsBuilder();\n transientSettings.put(currentState.metaData().transientSettings());\n for (Map.Entry<String, String> entry : request.transientSettings().getAsMap().entrySet()) {\n- if (dynamicSettings.hasDynamicSetting(entry.getKey()) || entry.getKey().startsWith(\"logger.\")) {\n+ if (dynamicSettings.isDynamicOrLoggingSetting(entry.getKey())) {\n String error = dynamicSettings.validateDynamicSetting(entry.getKey(), entry.getValue());\n if (error == null) {\n transientSettings.put(entry.getKey(), entry.getValue());\n@@ -203,7 +203,7 @@ public ClusterState execute(final ClusterState currentState) {\n ImmutableSettings.Builder persistentSettings = ImmutableSettings.settingsBuilder();\n persistentSettings.put(currentState.metaData().persistentSettings());\n for (Map.Entry<String, String> entry : request.persistentSettings().getAsMap().entrySet()) {\n- if (dynamicSettings.hasDynamicSetting(entry.getKey()) || entry.getKey().startsWith(\"logger.\")) {\n+ if (dynamicSettings.isDynamicOrLoggingSetting(entry.getKey())) {\n String error = dynamicSettings.validateDynamicSetting(entry.getKey(), entry.getValue());\n if (error == null) {\n persistentSettings.put(entry.getKey(), entry.getValue());", "filename": "src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java", "status": "modified" }, { "diff": "@@ -31,6 +31,11 @@ public class DynamicSettings {\n \n private ImmutableMap<String, Validator> dynamicSettings = ImmutableMap.of();\n \n+\n+ public boolean isDynamicOrLoggingSetting(String key) {\n+ return hasDynamicSetting(key) || key.startsWith(\"logger.\");\n+ }\n+\n public boolean hasDynamicSetting(String key) {\n for (String dynamicSetting : dynamicSettings.keySet()) {\n if (Regex.simpleMatch(dynamicSetting, key)) {", "filename": "src/main/java/org/elasticsearch/cluster/settings/DynamicSettings.java", "status": "modified" }, { "diff": "@@ -34,11 +34,14 @@\n import org.elasticsearch.cluster.routing.*;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n+import org.elasticsearch.cluster.settings.ClusterDynamicSettings;\n+import org.elasticsearch.cluster.settings.DynamicSettings;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.shard.ShardId;\n@@ -92,16 +95,20 @@ public class RestoreService extends AbstractComponent implements ClusterStateLis\n \n private final MetaDataCreateIndexService createIndexService;\n \n+ private final DynamicSettings dynamicSettings;\n+\n private final CopyOnWriteArrayList<ActionListener<RestoreCompletionResponse>> listeners = new CopyOnWriteArrayList<>();\n \n @Inject\n- public RestoreService(Settings settings, ClusterService clusterService, RepositoriesService repositoriesService, TransportService transportService, AllocationService 
allocationService, MetaDataCreateIndexService createIndexService) {\n+ public RestoreService(Settings settings, ClusterService clusterService, RepositoriesService repositoriesService, TransportService transportService,\n+ AllocationService allocationService, MetaDataCreateIndexService createIndexService, @ClusterDynamicSettings DynamicSettings dynamicSettings) {\n super(settings);\n this.clusterService = clusterService;\n this.repositoriesService = repositoriesService;\n this.transportService = transportService;\n this.allocationService = allocationService;\n this.createIndexService = createIndexService;\n+ this.dynamicSettings = dynamicSettings;\n transportService.registerHandler(UPDATE_RESTORE_ACTION_NAME, new UpdateRestoreStateRequestHandler());\n clusterService.add(this);\n }\n@@ -283,7 +290,24 @@ private void validateExistingIndex(IndexMetaData currentIndexMetaData, IndexMeta\n private void restoreGlobalStateIfRequested(MetaData.Builder mdBuilder) {\n if (request.includeGlobalState()) {\n if (metaData.persistentSettings() != null) {\n- mdBuilder.persistentSettings(metaData.persistentSettings());\n+ boolean changed = false;\n+ ImmutableSettings.Builder persistentSettings = ImmutableSettings.settingsBuilder().put();\n+ for (Map.Entry<String, String> entry : metaData.persistentSettings().getAsMap().entrySet()) {\n+ if (dynamicSettings.isDynamicOrLoggingSetting(entry.getKey())) {\n+ String error = dynamicSettings.validateDynamicSetting(entry.getKey(), entry.getValue());\n+ if (error == null) {\n+ persistentSettings.put(entry.getKey(), entry.getValue());\n+ changed = true;\n+ } else {\n+ logger.warn(\"ignoring persistent setting [{}], [{}]\", entry.getKey(), error);\n+ }\n+ } else {\n+ logger.warn(\"ignoring persistent setting [{}], not dynamically updateable\", entry.getKey());\n+ }\n+ }\n+ if (changed) {\n+ mdBuilder.persistentSettings(persistentSettings.build());\n+ }\n }\n if (metaData.templates() != null) {\n // TODO: Should all existing templates be deleted first?", "filename": "src/main/java/org/elasticsearch/snapshots/RestoreService.java", "status": "modified" }, { "diff": "@@ -51,11 +51,12 @@\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n+import org.elasticsearch.discovery.zen.elect.ElectMasterService;\n import org.elasticsearch.index.store.support.AbstractIndexStore;\n+import org.elasticsearch.indices.ttl.IndicesTTLService;\n import org.elasticsearch.repositories.RepositoryMissingException;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n import org.elasticsearch.test.InternalTestCluster;\n-import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.Ignore;\n import org.junit.Test;\n \n@@ -66,6 +67,7 @@\n import java.util.List;\n import java.util.concurrent.CopyOnWriteArrayList;\n import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.TimeUnit;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n@@ -82,16 +84,29 @@ public class DedicatedClusterSnapshotRestoreTests extends AbstractSnapshotTests\n \n @Test\n public void restorePersistentSettingsTest() throws Exception {\n- logger.info(\"--> start node\");\n- internalCluster().startNode(settingsBuilder().put(\"gateway.type\", \"local\"));\n+ logger.info(\"--> start 2 nodes\");\n+ Settings nodeSettings = settingsBuilder()\n+ .put(\"discovery.type\", \"zen\")\n+ 
.put(\"discovery.zen.ping_timeout\", \"200ms\")\n+ .put(\"discovery.initial_state_timeout\", \"500ms\")\n+ .build();\n+ internalCluster().startNode(nodeSettings);\n Client client = client();\n+ String secondNode = internalCluster().startNode(nodeSettings);\n+\n+ int random = randomIntBetween(10, 42);\n \n- // Add dummy persistent setting\n logger.info(\"--> set test persistent setting\");\n- String settingValue = \"test-\" + randomInt();\n- client.admin().cluster().prepareUpdateSettings().setPersistentSettings(ImmutableSettings.settingsBuilder().put(ThreadPool.THREADPOOL_GROUP + \"dummy.value\", settingValue)).execute().actionGet();\n+ client.admin().cluster().prepareUpdateSettings().setPersistentSettings(\n+ ImmutableSettings.settingsBuilder()\n+ .put(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES, 2)\n+ .put(IndicesTTLService.INDICES_TTL_INTERVAL, random, TimeUnit.MINUTES))\n+ .execute().actionGet();\n+\n assertThat(client.admin().cluster().prepareState().setRoutingTable(false).setNodes(false).execute().actionGet().getState()\n- .getMetaData().persistentSettings().get(ThreadPool.THREADPOOL_GROUP + \"dummy.value\"), equalTo(settingValue));\n+ .getMetaData().persistentSettings().getAsTime(IndicesTTLService.INDICES_TTL_INTERVAL, TimeValue.timeValueMinutes(1)).millis(), equalTo(TimeValue.timeValueMinutes(random).millis()));\n+ assertThat(client.admin().cluster().prepareState().setRoutingTable(false).setNodes(false).execute().actionGet().getState()\n+ .getMetaData().persistentSettings().getAsInt(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES, -1), equalTo(2));\n \n logger.info(\"--> create repository\");\n PutRepositoryResponse putRepositoryResponse = client.admin().cluster().preparePutRepository(\"test-repo\")\n@@ -105,14 +120,25 @@ public void restorePersistentSettingsTest() throws Exception {\n assertThat(client.admin().cluster().prepareGetSnapshots(\"test-repo\").setSnapshots(\"test-snap\").execute().actionGet().getSnapshots().get(0).state(), equalTo(SnapshotState.SUCCESS));\n \n logger.info(\"--> clean the test persistent setting\");\n- client.admin().cluster().prepareUpdateSettings().setPersistentSettings(ImmutableSettings.settingsBuilder().put(ThreadPool.THREADPOOL_GROUP + \"dummy.value\", \"\")).execute().actionGet();\n+ client.admin().cluster().prepareUpdateSettings().setPersistentSettings(\n+ ImmutableSettings.settingsBuilder()\n+ .put(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES, 1)\n+ .put(IndicesTTLService.INDICES_TTL_INTERVAL, TimeValue.timeValueMinutes(1)))\n+ .execute().actionGet();\n assertThat(client.admin().cluster().prepareState().setRoutingTable(false).setNodes(false).execute().actionGet().getState()\n- .getMetaData().persistentSettings().get(ThreadPool.THREADPOOL_GROUP + \"dummy.value\"), equalTo(\"\"));\n+ .getMetaData().persistentSettings().getAsTime(IndicesTTLService.INDICES_TTL_INTERVAL, TimeValue.timeValueMinutes(1)).millis(), equalTo(TimeValue.timeValueMinutes(1).millis()));\n+\n+ stopNode(secondNode);\n+ assertThat(client.admin().cluster().prepareHealth().setWaitForNodes(\"1\").get().isTimedOut(), equalTo(false));\n \n logger.info(\"--> restore snapshot\");\n client.admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap\").setRestoreGlobalState(true).setWaitForCompletion(true).execute().actionGet();\n assertThat(client.admin().cluster().prepareState().setRoutingTable(false).setNodes(false).execute().actionGet().getState()\n- .getMetaData().persistentSettings().get(ThreadPool.THREADPOOL_GROUP + \"dummy.value\"), 
equalTo(settingValue));\n+ .getMetaData().persistentSettings().getAsTime(IndicesTTLService.INDICES_TTL_INTERVAL, TimeValue.timeValueMinutes(1)).millis(), equalTo(TimeValue.timeValueMinutes(random).millis()));\n+\n+ logger.info(\"--> ensure that zen discovery minimum master nodes wasn't restored\");\n+ assertThat(client.admin().cluster().prepareState().setRoutingTable(false).setNodes(false).execute().actionGet().getState()\n+ .getMetaData().persistentSettings().getAsInt(ElectMasterService.DISCOVERY_ZEN_MINIMUM_MASTER_NODES, -1), not(equalTo(2)));\n }\n \n @Test\n@@ -545,7 +571,7 @@ public boolean clearData(String nodeName) {\n }\n }\n logger.info(\"--> check that at least half of the shards had some reuse: [{}]\", reusedShards);\n- assertThat(reusedShards.size(), greaterThanOrEqualTo(numberOfShards/2));\n+ assertThat(reusedShards.size(), greaterThanOrEqualTo(numberOfShards / 2));\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java", "status": "modified" } ] }
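With the change above, RestoreService passes each restored persistent setting through the same dynamic-setting check used by the cluster update settings API, dropping anything that is not dynamically updatable or that fails its validator (which is how the incompatible minimum_master_nodes value in the test gets rejected). Below is a standalone sketch of that filtering; the class name, whitelist contents, and sample settings are made up, and the per-setting value validation is omitted.

``` java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Standalone sketch of the filtering RestoreService applies to restored persistent
// settings. The dynamicKeys whitelist and the sample settings are illustrative; the
// real implementation also runs each accepted value through a per-setting validator.
public class RestoreSettingsFilter {
    static boolean isDynamicOrLoggingSetting(String key, Set<String> dynamicKeys) {
        return dynamicKeys.contains(key) || key.startsWith("logger.");
    }

    static Map<String, String> filterRestorable(Map<String, String> snapshotSettings,
                                                Set<String> dynamicKeys) {
        Map<String, String> accepted = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : snapshotSettings.entrySet()) {
            if (isDynamicOrLoggingSetting(entry.getKey(), dynamicKeys)) {
                accepted.put(entry.getKey(), entry.getValue());
            } else {
                System.out.println("ignoring persistent setting [" + entry.getKey()
                        + "], not dynamically updateable");
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        Map<String, String> fromSnapshot = new LinkedHashMap<>();
        fromSnapshot.put("indices.ttl.interval", "42m");
        fromSnapshot.put("some.static.only.setting", "true");
        Set<String> dynamicKeys = new HashSet<>(Arrays.asList("indices.ttl.interval"));
        // Only indices.ttl.interval survives the restore; the other key is logged and dropped.
        System.out.println(filterRestorable(fromSnapshot, dynamicKeys));
    }
}
```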
{ "body": "We noticed an edge case today when using the `ignore_unavailable` parameter against indices that are closed. Running the following in Sense produces a 403 error, which was very unexcepted.\n\n``` sh\ncurl -XPOST \"http://localhost:9200/test_index/test_type\" -d'\n{\n \"some\": \"value\"\n}'\n\ncurl -XPOST \"http://localhost:9200/test/_close\"\n\ncurl -XPOST \"http://localhost:9200/test/_search?ignore_unavailable=true\"\n# {\n# \"error\": \"ClusterBlockException[blocked by: [FORBIDDEN/4/index closed];]\",\n# \"status\": 403\n# }\n```\n\nAlternatively, if I send `test*` I get the expected empty result.\n\n``` sh\ncurl -XPOST \"http://localhost:9200/test*/_search?ignore_unavailable=true\"\n# {\n# \"took\": 1,\n# \"timed_out\": false,\n# \"_shards\": {\n# \"total\": 0,\n# \"successful\": 0,\n# \"failed\": 0\n# },\n# // etc.\n# }\n```\n", "comments": [ { "body": "A couple more examples of inconsistency: \n\n``` sh\ncurl -XPOST \"http://localhost:9200/test/_search?ignore_unavailable=false&pretty\"\n# {\n# \"error\" : \"IndexClosedException[[test] closed]\",\n# \"status\" : 403\n# }\n```\n\n``` sh\ncurl -XPOST \"http://localhost:9200/+test/_search?ignore_unavailable=true&pretty\"\n# {\n# \"took\" : 0,\n# \"timed_out\" : false,\n# \"_shards\" : {\n# \"total\" : 0,\n# \"successful\" : 0,\n# \"failed\" : 0\n# },\n# \"hits\" : {\n# \"total\" : 0,\n# \"max_score\" : 0.0,\n# \"hits\" : [ ]\n# }\n# }\n```\n", "created_at": "2014-08-04T23:39:05Z" }, { "body": "The different behaviour is caused by the fact that you are referring in the url to a specific index, in that case we ignore the `ignore_unavailable` option. We do consider it though if you specify multiple indices or wildcard expressions etc. That said, let's discuss if this is right or wrong :)\n", "created_at": "2014-08-05T18:53:59Z" }, { "body": "Yeah, it's a good question what the right behaviour here is. \n", "created_at": "2014-08-05T19:22:25Z" }, { "body": "+1 for making the parameter apply to both index names and index patterns.\n\nI think that the current behavior would make more sense if we renamed the parameter `ignore_unavailable_pattern_matches`\n", "created_at": "2014-08-06T01:57:33Z" }, { "body": "I think it make sense that `ignore_unavailable` also applies if just a single concrete index or alias is specified.\n", "created_at": "2014-08-06T21:23:57Z" }, { "body": "Yeah I'm thinking the same thing. \n", "created_at": "2014-08-07T12:31:21Z" }, { "body": "+1\n", "created_at": "2014-10-16T09:45:06Z" }, { "body": "+1 on providing a way for aliases to ignore closed indices, esp. for the time based index use case where older indices may be closed.\n", "created_at": "2014-12-01T02:52:55Z" }, { "body": "Hey @martijnvg, any word on this?\n", "created_at": "2014-12-19T22:54:56Z" }, { "body": "@spenceralger I opened #9047 for this.\n", "created_at": "2014-12-23T13:51:39Z" }, { "body": "Thanks @martijnvg!!\n", "created_at": "2014-12-23T15:51:36Z" }, { "body": "YAHOO!\n", "created_at": "2014-12-24T16:48:50Z" } ], "number": 7153, "title": "`ignore_unavailable=true` ignored when a single index is specified" }
{ "body": "The `ignore_unavailable` request setting shouldn't ignore closed indices if a single index is specified in a search or broadcast request.\n\nPR for #7153\n", "number": 9047, "review_comments": [ { "body": "why?\n", "created_at": "2014-12-23T15:54:21Z" }, { "body": "I think we can remove the `failClosed` parameter too and deduct it from the given `indicesOptions`?\n", "created_at": "2014-12-23T16:01:54Z" }, { "body": "I didn't do this, because otherwise I had to copy-paste this in this method: \n\n``` java\nboolean failClosed = indicesOptions.forbidClosedIndices() && !indicesOptions.ignoreUnavailable();\n```\n\nBut i'm ok with adding it.\n", "created_at": "2014-12-23T16:07:28Z" }, { "body": "I see what you mean, cause `failClosed` is used elsewhere as well... oh man this method would reuse a rewrite, I always wondered if we should just remove all these optimizations around the single index case and this `possiblyAliased` check as well.\nI would consider copy pasting the check anyways because you added an `if` in the method that has to do with `forbidClosedIndices`, it should make things more readable.\n", "created_at": "2014-12-23T16:18:13Z" }, { "body": "can you remind me why we need this check again? Didn't we do it at the beginning of the method already? Maybe this `if` was a leftover?\n", "created_at": "2014-12-23T16:33:52Z" }, { "body": "because the index the alias is pointing to may be in a closed state.\n", "created_at": "2014-12-23T16:35:44Z" }, { "body": "I think we can remove this `if` completely, cause we call the same method given the same input at the beginning of the method, this condition will never be true here.\n", "created_at": "2014-12-23T17:05:34Z" }, { "body": "Actually, I did some digging, this if was introduced with #6475, and I think its purpose was to check that state of indices after aliases resolution. This is a bug that we should address on a separate issue, that should be about index state checks when resolving aliases, since it seems we don't check that at all at this time.\n", "created_at": "2014-12-23T17:34:17Z" }, { "body": "agreed, lets open another issue for this.\n", "created_at": "2014-12-23T17:39:08Z" } ], "title": "`ignore_unavailable` shouldn't ignore closed indices" }
{ "commits": [ { "message": "Core: `ignore_unavailable` shouldn't ignore closed indices if a single index is specified in a search or broadcast request.\n\nCloses #9047\nCloses #7153" } ], "files": [ { "diff": "@@ -23,15 +23,13 @@\n import org.elasticsearch.action.search.type.*;\n import org.elasticsearch.action.support.ActionFilters;\n import org.elasticsearch.action.support.HandledTransportAction;\n-import org.elasticsearch.action.support.TransportAction;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.indices.IndexClosedException;\n import org.elasticsearch.indices.IndexMissingException;\n import org.elasticsearch.threadpool.ThreadPool;\n-import org.elasticsearch.transport.BaseTransportRequestHandler;\n-import org.elasticsearch.transport.TransportChannel;\n import org.elasticsearch.transport.TransportService;\n \n import java.util.Map;\n@@ -89,9 +87,8 @@ protected void doExecute(SearchRequest searchRequest, ActionListener<SearchRespo\n // if we only have one group, then we always want Q_A_F, no need for DFS, and no need to do THEN since we hit one shard\n searchRequest.searchType(QUERY_AND_FETCH);\n }\n- } catch (IndexMissingException e) {\n- // ignore this, we will notify the search response if its really the case\n- // from the actual action\n+ } catch (IndexMissingException|IndexClosedException e) {\n+ // ignore these failures, we will notify the search response if its really the case from the actual action\n } catch (Exception e) {\n logger.debug(\"failed to optimize search type, continue as normal\", e);\n }", "filename": "src/main/java/org/elasticsearch/action/search/TransportSearchAction.java", "status": "modified" }, { "diff": "@@ -679,7 +679,7 @@ public String[] concreteIndices(IndicesOptions indicesOptions, String... aliases\n \n // optimize for single element index (common case)\n if (aliasesOrIndices.length == 1) {\n- return concreteIndices(aliasesOrIndices[0], indicesOptions.allowNoIndices(), failClosed, indicesOptions.allowAliasesToMultipleIndices());\n+ return concreteIndices(aliasesOrIndices[0], indicesOptions.allowNoIndices(), indicesOptions);\n }\n \n // check if its a possible aliased index, if not, just return the passed array\n@@ -712,7 +712,7 @@ public String[] concreteIndices(IndicesOptions indicesOptions, String... aliases\n \n Set<String> actualIndices = new HashSet<>();\n for (String aliasOrIndex : aliasesOrIndices) {\n- String[] indices = concreteIndices(aliasOrIndex, indicesOptions.ignoreUnavailable(), failClosed, indicesOptions.allowAliasesToMultipleIndices());\n+ String[] indices = concreteIndices(aliasOrIndex, indicesOptions.ignoreUnavailable(), indicesOptions);\n Collections.addAll(actualIndices, indices);\n }\n \n@@ -723,16 +723,15 @@ public String[] concreteIndices(IndicesOptions indicesOptions, String... aliases\n }\n \n /**\n- *\n * Utility method that allows to resolve an index or alias to its corresponding single concrete index.\n * Callers should make sure they provide proper {@link org.elasticsearch.action.support.IndicesOptions}\n * that require a single index as a result. 
The indices resolution must in fact return a single index when\n * using this method, an {@link org.elasticsearch.ElasticsearchIllegalArgumentException} gets thrown otherwise.\n *\n- * @param indexOrAlias the index or alias to be resolved to concrete index\n+ * @param indexOrAlias the index or alias to be resolved to concrete index\n * @param indicesOptions the indices options to be used for the index resolution\n * @return the concrete index obtained as a result of the index resolution\n- * @throws IndexMissingException if the index or alias provided doesn't exist\n+ * @throws IndexMissingException if the index or alias provided doesn't exist\n * @throws ElasticsearchIllegalArgumentException if the index resolution lead to more than one index\n */\n public String concreteSingleIndex(String indexOrAlias, IndicesOptions indicesOptions) throws IndexMissingException, ElasticsearchIllegalArgumentException {\n@@ -743,28 +742,40 @@ public String concreteSingleIndex(String indexOrAlias, IndicesOptions indicesOpt\n return indices[0];\n }\n \n- private String[] concreteIndices(String aliasOrIndex, boolean allowNoIndices, boolean failClosed, boolean allowMultipleIndices) throws IndexMissingException, ElasticsearchIllegalArgumentException {\n+ private String[] concreteIndices(String aliasOrIndex, boolean allowNoIndices, IndicesOptions options) throws IndexMissingException, ElasticsearchIllegalArgumentException {\n+ boolean failClosed = options.forbidClosedIndices() && !options.ignoreUnavailable();\n+\n // a quick check, if this is an actual index, if so, return it\n IndexMetaData indexMetaData = indices.get(aliasOrIndex);\n if (indexMetaData != null) {\n- if (indexMetaData.getState() == IndexMetaData.State.CLOSE && failClosed) {\n- throw new IndexClosedException(new Index(aliasOrIndex));\n+ if (indexMetaData.getState() == IndexMetaData.State.CLOSE) {\n+ if (failClosed) {\n+ throw new IndexClosedException(new Index(aliasOrIndex));\n+ } else {\n+ return options.forbidClosedIndices() ? 
Strings.EMPTY_ARRAY : new String[]{aliasOrIndex};\n+ }\n } else {\n- return new String[]{aliasOrIndex};\n+ return new String[]{aliasOrIndex};\n }\n }\n // not an actual index, fetch from an alias\n String[] indices = aliasAndIndexToIndexMap.getOrDefault(aliasOrIndex, Strings.EMPTY_ARRAY);\n if (indices.length == 0 && !allowNoIndices) {\n throw new IndexMissingException(new Index(aliasOrIndex));\n }\n- if (indices.length > 1 && !allowMultipleIndices) {\n+ if (indices.length > 1 && !options.allowAliasesToMultipleIndices()) {\n throw new ElasticsearchIllegalArgumentException(\"Alias [\" + aliasOrIndex + \"] has more than one indices associated with it [\" + Arrays.toString(indices) + \"], can't execute a single index op\");\n }\n \n indexMetaData = this.indices.get(aliasOrIndex);\n- if (indexMetaData != null && indexMetaData.getState() == IndexMetaData.State.CLOSE && failClosed) {\n- throw new IndexClosedException(new Index(aliasOrIndex));\n+ if (indexMetaData != null && indexMetaData.getState() == IndexMetaData.State.CLOSE) {\n+ if (failClosed) {\n+ throw new IndexClosedException(new Index(aliasOrIndex));\n+ } else {\n+ if (options.forbidClosedIndices()) {\n+ return Strings.EMPTY_ARRAY;\n+ }\n+ }\n }\n return indices;\n }\n@@ -1317,7 +1328,7 @@ public static void toXContent(MetaData metaData, XContentBuilder builder, ToXCon\n \n for (ObjectObjectCursor<String, Custom> cursor : metaData.customs()) {\n Custom.Factory factory = lookupFactorySafe(cursor.key);\n- if(factory.context().contains(context)) {\n+ if (factory.context().contains(context)) {\n builder.startObject(cursor.key);\n factory.toXContent(cursor.value, builder, params);\n builder.endObject();", "filename": "src/main/java/org/elasticsearch/cluster/metadata/MetaData.java", "status": "modified" }, { "diff": "@@ -68,7 +68,7 @@\n public class IndicesOptionsIntegrationTests extends ElasticsearchIntegrationTest {\n \n @Test\n- public void testSpecifiedIndexUnavailable() throws Exception {\n+ public void testSpecifiedIndexUnavailable_multipleIndices() throws Exception {\n createIndex(\"test1\");\n ensureYellow();\n \n@@ -167,6 +167,158 @@ public void testSpecifiedIndexUnavailable() throws Exception {\n verify(getSettings(\"test1\", \"test2\").setIndicesOptions(options), false);\n }\n \n+ @Test\n+ public void testSpecifiedIndexUnavailable_singleIndexThatIsClosed() throws Exception {\n+ assertAcked(prepareCreate(\"test1\"));\n+ ensureYellow();\n+\n+ assertAcked(client().admin().indices().prepareClose(\"test1\"));\n+\n+ IndicesOptions options = IndicesOptions.strictExpandOpenAndForbidClosed();\n+ verify(search(\"test1\").setIndicesOptions(options), true);\n+ verify(msearch(options, \"test1\"), true);\n+ verify(count(\"test1\").setIndicesOptions(options), true);\n+ verify(clearCache(\"test1\").setIndicesOptions(options), true);\n+ verify(_flush(\"test1\").setIndicesOptions(options),true);\n+ verify(segments(\"test1\").setIndicesOptions(options), true);\n+ verify(stats(\"test1\").setIndicesOptions(options), true);\n+ verify(optimize(\"test1\").setIndicesOptions(options), true);\n+ verify(refresh(\"test1\").setIndicesOptions(options), true);\n+ verify(validateQuery(\"test1\").setIndicesOptions(options), true);\n+ verify(aliasExists(\"test1\").setIndicesOptions(options), true);\n+ verify(typesExists(\"test1\").setIndicesOptions(options), true);\n+ verify(deleteByQuery(\"test1\").setIndicesOptions(options), true);\n+ verify(percolate(\"test1\").setIndicesOptions(options), true);\n+ verify(mpercolate(options, \"test1\").setIndicesOptions(options), 
true);\n+ verify(suggest(\"test1\").setIndicesOptions(options), true);\n+ verify(getAliases(\"test1\").setIndicesOptions(options), true);\n+ verify(getFieldMapping(\"test1\").setIndicesOptions(options), true);\n+ verify(getMapping(\"test1\").setIndicesOptions(options), true);\n+ verify(getWarmer(\"test1\").setIndicesOptions(options), true);\n+ verify(getSettings(\"test1\").setIndicesOptions(options), true);\n+\n+ options = IndicesOptions.fromOptions(true, options.allowNoIndices(), options.expandWildcardsOpen(), options.expandWildcardsClosed(), options);\n+ verify(search(\"test1\").setIndicesOptions(options), false);\n+ verify(msearch(options, \"test1\"), false);\n+ verify(count(\"test1\").setIndicesOptions(options), false);\n+ verify(clearCache(\"test1\").setIndicesOptions(options), false);\n+ verify(_flush(\"test1\").setIndicesOptions(options),false);\n+ verify(segments(\"test1\").setIndicesOptions(options), false);\n+ verify(stats(\"test1\").setIndicesOptions(options), false);\n+ verify(optimize(\"test1\").setIndicesOptions(options), false);\n+ verify(refresh(\"test1\").setIndicesOptions(options), false);\n+ verify(validateQuery(\"test1\").setIndicesOptions(options), false);\n+ verify(aliasExists(\"test1\").setIndicesOptions(options), false);\n+ verify(typesExists(\"test1\").setIndicesOptions(options), false);\n+ verify(deleteByQuery(\"test1\").setIndicesOptions(options), false);\n+ verify(percolate(\"test1\").setIndicesOptions(options), false);\n+ verify(mpercolate(options, \"test1\").setIndicesOptions(options), false);\n+ verify(suggest(\"test1\").setIndicesOptions(options), false);\n+ verify(getAliases(\"test1\").setIndicesOptions(options), false);\n+ verify(getFieldMapping(\"test1\").setIndicesOptions(options), false);\n+ verify(getMapping(\"test1\").setIndicesOptions(options), false);\n+ verify(getWarmer(\"test1\").setIndicesOptions(options), false);\n+ verify(getSettings(\"test1\").setIndicesOptions(options), false);\n+\n+ assertAcked(client().admin().indices().prepareOpen(\"test1\"));\n+ ensureYellow();\n+\n+ options = IndicesOptions.strictExpandOpenAndForbidClosed();\n+ verify(search(\"test1\").setIndicesOptions(options), false);\n+ verify(msearch(options, \"test1\"), false);\n+ verify(count(\"test1\").setIndicesOptions(options), false);\n+ verify(clearCache(\"test1\").setIndicesOptions(options), false);\n+ verify(_flush(\"test1\").setIndicesOptions(options),false);\n+ verify(segments(\"test1\").setIndicesOptions(options), false);\n+ verify(stats(\"test1\").setIndicesOptions(options), false);\n+ verify(optimize(\"test1\").setIndicesOptions(options), false);\n+ verify(refresh(\"test1\").setIndicesOptions(options), false);\n+ verify(validateQuery(\"test1\").setIndicesOptions(options), false);\n+ verify(aliasExists(\"test1\").setIndicesOptions(options), false);\n+ verify(typesExists(\"test1\").setIndicesOptions(options), false);\n+ verify(deleteByQuery(\"test1\").setIndicesOptions(options), false);\n+ verify(percolate(\"test1\").setIndicesOptions(options), false);\n+ verify(mpercolate(options, \"test1\").setIndicesOptions(options), false);\n+ verify(suggest(\"test1\").setIndicesOptions(options), false);\n+ verify(getAliases(\"test1\").setIndicesOptions(options), false);\n+ verify(getFieldMapping(\"test1\").setIndicesOptions(options), false);\n+ verify(getMapping(\"test1\").setIndicesOptions(options), false);\n+ verify(getWarmer(\"test1\").setIndicesOptions(options), false);\n+ verify(getSettings(\"test1\").setIndicesOptions(options), false);\n+ }\n+\n+ @Test\n+ public void 
testSpecifiedIndexUnavailable_singleIndex() throws Exception {\n+ IndicesOptions options = IndicesOptions.strictExpandOpenAndForbidClosed();\n+ verify(search(\"test1\").setIndicesOptions(options), true);\n+ verify(msearch(options, \"test1\"), true);\n+ verify(count(\"test1\").setIndicesOptions(options), true);\n+ verify(clearCache(\"test1\").setIndicesOptions(options), true);\n+ verify(_flush(\"test1\").setIndicesOptions(options),true);\n+ verify(segments(\"test1\").setIndicesOptions(options), true);\n+ verify(stats(\"test1\").setIndicesOptions(options), true);\n+ verify(optimize(\"test1\").setIndicesOptions(options), true);\n+ verify(refresh(\"test1\").setIndicesOptions(options), true);\n+ verify(validateQuery(\"test1\").setIndicesOptions(options), true);\n+ verify(aliasExists(\"test1\").setIndicesOptions(options), true);\n+ verify(typesExists(\"test1\").setIndicesOptions(options), true);\n+ verify(deleteByQuery(\"test1\").setIndicesOptions(options), true);\n+ verify(percolate(\"test1\").setIndicesOptions(options), true);\n+ verify(suggest(\"test1\").setIndicesOptions(options), true);\n+ verify(getAliases(\"test1\").setIndicesOptions(options), true);\n+ verify(getFieldMapping(\"test1\").setIndicesOptions(options), true);\n+ verify(getMapping(\"test1\").setIndicesOptions(options), true);\n+ verify(getWarmer(\"test1\").setIndicesOptions(options), true);\n+ verify(getSettings(\"test1\").setIndicesOptions(options), true);\n+\n+ options = IndicesOptions.fromOptions(true, options.allowNoIndices(), options.expandWildcardsOpen(), options.expandWildcardsClosed(), options);\n+ verify(search(\"test1\").setIndicesOptions(options), false);\n+ verify(msearch(options, \"test1\"), false);\n+ verify(count(\"test1\").setIndicesOptions(options), false);\n+ verify(clearCache(\"test1\").setIndicesOptions(options), false);\n+ verify(_flush(\"test1\").setIndicesOptions(options),false);\n+ verify(segments(\"test1\").setIndicesOptions(options), false);\n+ verify(stats(\"test1\").setIndicesOptions(options), false);\n+ verify(optimize(\"test1\").setIndicesOptions(options), false);\n+ verify(refresh(\"test1\").setIndicesOptions(options), false);\n+ verify(validateQuery(\"test1\").setIndicesOptions(options), false);\n+ verify(aliasExists(\"test1\").setIndicesOptions(options), false);\n+ verify(typesExists(\"test1\").setIndicesOptions(options), false);\n+ verify(deleteByQuery(\"test1\").setIndicesOptions(options), false);\n+ verify(percolate(\"test1\").setIndicesOptions(options), false);\n+ verify(suggest(\"test1\").setIndicesOptions(options), false);\n+ verify(getAliases(\"test1\").setIndicesOptions(options), false);\n+ verify(getFieldMapping(\"test1\").setIndicesOptions(options), false);\n+ verify(getMapping(\"test1\").setIndicesOptions(options), false);\n+ verify(getWarmer(\"test1\").setIndicesOptions(options), false);\n+ verify(getSettings(\"test1\").setIndicesOptions(options), false);\n+\n+ assertAcked(prepareCreate(\"test1\"));\n+ ensureYellow();\n+\n+ options = IndicesOptions.strictExpandOpenAndForbidClosed();\n+ verify(search(\"test1\").setIndicesOptions(options), false);\n+ verify(msearch(options, \"test1\"), false);\n+ verify(count(\"test1\").setIndicesOptions(options), false);\n+ verify(clearCache(\"test1\").setIndicesOptions(options), false);\n+ verify(_flush(\"test1\").setIndicesOptions(options),false);\n+ verify(segments(\"test1\").setIndicesOptions(options), false);\n+ verify(stats(\"test1\").setIndicesOptions(options), false);\n+ verify(optimize(\"test1\").setIndicesOptions(options), false);\n+ 
verify(refresh(\"test1\").setIndicesOptions(options), false);\n+ verify(validateQuery(\"test1\").setIndicesOptions(options), false);\n+ verify(aliasExists(\"test1\").setIndicesOptions(options), false);\n+ verify(typesExists(\"test1\").setIndicesOptions(options), false);\n+ verify(deleteByQuery(\"test1\").setIndicesOptions(options), false);\n+ verify(percolate(\"test1\").setIndicesOptions(options), false);\n+ verify(suggest(\"test1\").setIndicesOptions(options), false);\n+ verify(getAliases(\"test1\").setIndicesOptions(options), false);\n+ verify(getFieldMapping(\"test1\").setIndicesOptions(options), false);\n+ verify(getMapping(\"test1\").setIndicesOptions(options), false);\n+ verify(getWarmer(\"test1\").setIndicesOptions(options), false);\n+ verify(getSettings(\"test1\").setIndicesOptions(options), false);\n+ }\n+\n @Test\n public void testSpecifiedIndexUnavailable_snapshotRestore() throws Exception {\n createIndex(\"test1\");", "filename": "src/test/java/org/elasticsearch/indices/IndicesOptionsIntegrationTests.java", "status": "modified" } ] }
{ "body": "Running 1.4.0 it is possible to hang ES.\n\nSteps to reproduce:\nThree processes in a while(true) loop. One is using the _bulk API to insert (and update) a small random number of documents. The other two processes are executing fairly complex filteredQuery's.\n\nI can run any of the two processes concurrently without problems, but when a the third one starts, ES fails with the exception below.\n\nOther times, the exception doesn't occur but instead ES just hangs and I have to kill -9 its jvm. \n\n```\n[2014-11-07 16:46:36,175][DEBUG][action.search.type ] [Hardcore] [2848] Failed to execute query phase\norg.elasticsearch.search.query.QueryPhaseExecutionException: [xxx.public.test.idxtest][2]: query[filtered(ConstantScore(++cache(_xmin:[2882514 TO 2882514]) +cache(_cmin:[* TO 0}) +cache(_xmax:[0 TO 0]) +cache(_xmax:[2882514 TO 2882514]) +cache(_cmax:[0 TO *]) +cache(_xmin_is_committed:T) +cache(_xmax:[0 TO 0]) +cache(_xmax:[2882514 TO 2882514]) +cache(_cmax:[0 TO *]) +NotFilter(cache(_xmax:[2882514 TO 2882514])) +cache(_xmax_is_committed:F) +CustomQueryWrappingFilter(child_filter[data/xact](filtered(ConstantScore(cache(BooleanFilter(_field_names:id))))->cache(_type:data)))))->cache(+_type:xact +org.elasticsearch.index.search.nested.NonNestedDocsFilter@38f048bd)],from[0],size[32768]: Query Failed [Failed to execute main query]\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:163)\n at org.elasticsearch.search.SearchService.executeScan(SearchService.java:245)\n at org.elasticsearch.search.action.SearchServiceTransportAction$21.call(SearchServiceTransportAction.java:520)\n at org.elasticsearch.search.action.SearchServiceTransportAction$21.call(SearchServiceTransportAction.java:517)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.RuntimeException: java.lang.ArrayIndexOutOfBoundsException: 13417\n at org.elasticsearch.search.internal.ContextIndexSearcher.createNormalizedWeight(ContextIndexSearcher.java:136)\n at org.elasticsearch.index.search.child.CustomQueryWrappingFilter.getDocIdSet(CustomQueryWrappingFilter.java:72)\n at org.elasticsearch.common.lucene.search.AndFilter.getDocIdSet(AndFilter.java:54)\n at org.elasticsearch.common.lucene.search.ApplyAcceptedDocsFilter.getDocIdSet(ApplyAcceptedDocsFilter.java:46)\n at org.apache.lucene.search.ConstantScoreQuery$ConstantWeight.scorer(ConstantScoreQuery.java:157)\n at org.apache.lucene.search.FilteredQuery$RandomAccessFilterStrategy.filteredScorer(FilteredQuery.java:542)\n at org.apache.lucene.search.FilteredQuery$1.scorer(FilteredQuery.java:136)\n at org.apache.lucene.search.FilteredQuery$RandomAccessFilterStrategy.filteredScorer(FilteredQuery.java:542)\n at org.apache.lucene.search.FilteredQuery$FilterStrategy.filteredBulkScorer(FilteredQuery.java:504)\n at org.apache.lucene.search.FilteredQuery$1.bulkScorer(FilteredQuery.java:150)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)\n at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:191)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)\n at org.elasticsearch.search.scan.ScanContext.execute(ScanContext.java:52)\n at 
org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:120)\n ... 7 more\nCaused by: java.lang.ArrayIndexOutOfBoundsException: 13417\n at org.apache.lucene.util.StringHelper.murmurhash3_x86_32(StringHelper.java:205)\n at org.apache.lucene.util.StringHelper.murmurhash3_x86_32(StringHelper.java:229)\n at org.apache.lucene.util.BytesRef.hashCode(BytesRef.java:143)\n at org.elasticsearch.common.util.BytesRefHash.add(BytesRefHash.java:151)\n at org.elasticsearch.index.search.child.ParentIdsFilter.createShortCircuitFilter(ParentIdsFilter.java:67)\n at org.elasticsearch.index.search.child.ChildrenConstantScoreQuery.createWeight(ChildrenConstantScoreQuery.java:127)\n at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:684)\n at org.elasticsearch.search.internal.ContextIndexSearcher.createNormalizedWeight(ContextIndexSearcher.java:133)\n ... 21 more\n```\n", "comments": [ { "body": "This looks to me like field data for the _parent field data is buggy and returns an invalid BytesRef that has an `offset+end` that is greater than the length of the wrapped array. I will look into it.\n", "created_at": "2014-11-09T22:54:46Z" }, { "body": "I wrote a basic test hoping it could reproduce the issue, but got no luck. Here it is in case someone would like to iterate on it:\n\n``` java\npublic class PCTests extends ElasticsearchIntegrationTest {\n\n private static final int NUM_PARENTS = 50000;\n private static final int NUM_CHILDREN = 500000;\n\n public void addOrUpdate() {\n final boolean parent = randomBoolean();\n if (parent) {\n final String parentId = Integer.toString(randomInt(NUM_PARENTS));\n client().prepareIndex(\"test\", \"parent\", parentId).setSource(Collections.<String, Object>emptyMap()).get();\n } else {\n final int id = randomInt(NUM_CHILDREN);\n final int parentId = MathUtils.mod(MurmurHash3.hash(id), NUM_PARENTS);\n client().prepareIndex(\"test\", \"child\", Integer.toString(id)).setParent(Integer.toString(parentId)).setSource(\"i\", randomInt(1000)).get();\n }\n }\n\n public void query() {\n SearchResponse resp = client().prepareSearch(\"test\").setSize(1).setQuery(QueryBuilders.hasChildQuery(\"child\", QueryBuilders.rangeQuery(\"i\").to(randomInt(1000)))).get();\n System.out.println(resp);\n }\n\n public void test() throws Exception {\n createIndex(\"test\");\n client().admin().indices().preparePutMapping(\"test\").setType(\"parent\").setSource(JsonXContent.contentBuilder().startObject().startObject(\"parent\").endObject().endObject()).get();\n client().admin().indices().preparePutMapping(\"test\").setType(\"child\").setSource(JsonXContent.contentBuilder().startObject().startObject(\"child\")\n .startObject(\"_parent\")\n .field(\"type\", \"parent\")\n .endObject().endObject().endObject()).get();\n for (int i = 0; i <= NUM_PARENTS; ++i) {\n client().prepareIndex(\"test\", \"parent\", Integer.toString(i)).setSource(Collections.<String, Object>emptyMap()).get();\n }\n\n final Thread[] indexingThreads = new Thread[3];\n final AtomicInteger running = new AtomicInteger(indexingThreads.length);\n for (int i = 0; i < indexingThreads.length; ++i) {\n indexingThreads[i] = new Thread() {\n public void run() {\n for (int i = 0; i < 5 * (NUM_CHILDREN + NUM_PARENTS); ++i) {\n addOrUpdate();\n }\n running.decrementAndGet();\n }\n };\n }\n for (Thread t : indexingThreads) {\n t.start();\n }\n while (running.get() > 0) {\n query();\n Thread.sleep(200);\n }\n }\n\n}\n```\n\nCan you share more information about the setup that reproduces the issue:\n- how many 
parent/child relations are configured in the mappings, are there some types that are both parent and child?\n- how many parents and children in the index?\n- what does the query look like? Given the stack trace, I guess it uses the has_child query?\n- are there custom plugins installed?\n- is the issue still reproducible after restarting elasticsearch?\n- can you try to reproduce the failure with assertions enabled? (the `-ea` JVM option)\n\nIf sharing the index that triggers this behaviour is possible and the issue is easy to reproduce, that would be great.\n", "created_at": "2014-11-10T12:24:56Z" }, { "body": "I'm working with the reporter to get working code to reproduce.\n", "created_at": "2014-11-10T18:31:31Z" }, { "body": "@aleph-zero any news on this?\n", "created_at": "2014-11-14T15:45:30Z" }, { "body": "@aleph-zero ping?\n", "created_at": "2014-11-23T12:55:12Z" }, { "body": "@clintongormley @s1monw I never got a working example from the reporter. I'll ping him back and see if I can at least get the general steps to reproduce.\n", "created_at": "2014-11-23T14:10:07Z" }, { "body": "@clintongormley @s1monw Still trying to get a proof-of-concept working\n", "created_at": "2014-12-03T17:53:51Z" }, { "body": "Original reporter here...\n\n@jpountz if you can provide some instruction how to actually run your test above (I have the ES repo cloned locally) I can fix it for you. It looks like you're on the right track.\n\nI've provided Chris Earle @ ES a set of shell scripts that recreates the problem (internal support number 6313) by poking at ES with libcurl.\n\nLooking at your code above, you want 1 thread doing updates and two (or more) threads querying (the reverse of what you have now), and you'll want to run the query threads for a few minutes before the exceptions start to appear.\n\nTo answer your questions from above:\n- two types in the index, only one is configured to be a child\n- The query is quite complex (https://gist.github.com/eeeebbbbrrrr/d0491882248160e9afeb)\n- We have custom plugins, but they don't need to be installed to re-create this\n- Restarting ES \"fixes\" things until the test is run again\n- With asserts enabled: https://gist.github.com/eeeebbbbrrrr/708305a968acb13b32c3\n\nThe shell scripts I've provided to Chris Earle know how to create the exact index involved, with proper mappings, settings, and data.\n", "created_at": "2014-12-18T17:19:02Z" }, { "body": "So I figured out how to run the test. 
:) Looks like this re-creates it:\n\n``` java\nimport com.carrotsearch.hppc.hash.MurmurHash3;\nimport org.elasticsearch.action.search.SearchResponse;\nimport org.elasticsearch.common.math.MathUtils;\nimport org.elasticsearch.common.xcontent.json.JsonXContent;\nimport org.elasticsearch.index.query.QueryBuilders;\nimport org.elasticsearch.test.ElasticsearchIntegrationTest;\nimport org.junit.Test;\n\nimport java.util.Collections;\nimport java.util.concurrent.atomic.AtomicInteger;\n\npublic class PCTests extends ElasticsearchIntegrationTest {\n\n private static final int NUM_PARENTS = 50000;\n private static final int NUM_CHILDREN = 500000;\n private static final int MAX_QUERIES = 32768;\n\n public void addOrUpdate() {\n final boolean parent = randomBoolean();\n if (parent) {\n final String parentId = Integer.toString(randomInt(NUM_PARENTS));\n client().prepareIndex(\"test\", \"parent\", parentId).setSource(Collections.<String, Object>emptyMap()).get();\n } else {\n final int id = randomInt(NUM_CHILDREN);\n final int parentId = MathUtils.mod(MurmurHash3.hash(id), NUM_PARENTS);\n client().prepareIndex(\"test\", \"child\", Integer.toString(id)).setParent(Integer.toString(parentId)).setSource(\"i\", randomInt(1000)).get();\n }\n }\n\n public void query() {\n SearchResponse resp = client().prepareSearch(\"test\").setSize(1).setQuery(QueryBuilders.hasChildQuery(\"child\", QueryBuilders.rangeQuery(\"i\").to(randomInt(1000)))).get();\n System.out.println(resp);\n }\n\n @Test\n public void test() throws Exception {\n createIndex(\"test\");\n client().admin().indices().preparePutMapping(\"test\").setType(\"parent\").setSource(JsonXContent.contentBuilder().startObject().startObject(\"parent\").endObject().endObject()).get();\n client().admin().indices().preparePutMapping(\"test\").setType(\"child\").setSource(JsonXContent.contentBuilder().startObject().startObject(\"child\")\n .startObject(\"_parent\")\n .field(\"type\", \"parent\")\n .endObject().endObject().endObject()).get();\n for (int i = 0; i <= NUM_PARENTS; ++i) {\n client().prepareIndex(\"test\", \"parent\", Integer.toString(i)).setSource(Collections.<String, Object>emptyMap()).get();\n }\n\n logger.info(\"Bootstrapping index...\");\n // bootstrap index with some data\n for (int i = 0; i <NUM_PARENTS; ++i) {\n addOrUpdate();\n }\n\n final Thread[] indexingThreads = new Thread[3];\n final AtomicInteger running = new AtomicInteger(indexingThreads.length);\n for (int i = 0; i < indexingThreads.length; ++i) {\n indexingThreads[i] = new Thread() {\n public void run() {\n for (int i = 0; i < MAX_QUERIES; ++i) {\n query();\n }\n running.decrementAndGet();\n }\n };\n }\n\n logger.info(\"Starting test\");\n for (Thread t : indexingThreads) {\n t.start();\n }\n while (running.get() > 0) {\n addOrUpdate();\n Thread.sleep(200);\n }\n }\n\n}\n```\n\nI think IDEA's output window truncated the exception, but you can see that it's there:\n\n``` json\n{\n \"took\" : 15,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 10,\n \"successful\" : 9,\n \"failed\" : 1,\n \"failures\" : [ {\n \"index\" : \"test\",\n \"shard\" : 9,\n \"status\" : 500,\n \"reason\" : \"RemoteTransportException[[node_2][local[3]][indices:data/read/search[phase/query/id]]]; nested: QueryPhaseExecutionException[[test][9]: query[child_filter[child/parent](filtered(i:[* TO 204])->_type:child)],from[0],size[1]: Query Failed [Failed to execute main query]]; nested: RuntimeException[java.lang.AssertionError]; nested: AssertionError; \"\n } ]\n },\n \"hits\" : {\n \"total\" : 4254,\n 
\"max_score\" : 1.0,\n \"hits\" : [ {\n \"_index\" : \"test\",\n \"_type\" : \"parent\",\n \"_id\" : \"9280\",\n \"_score\" : 1.0,\n \"_source\" : { }\n } ]\n }\n}\n```\n", "created_at": "2014-12-18T17:31:28Z" }, { "body": "Oh, it's worth mentioning that ES does _not_ hang. My original report to @aleph-zero said it did, but further diagnosis on my end found that it was my code that was looping indefinitely when it encountered the side effects of this exception (scan+scroll not returning the expected # of results, for example).\n\nThe AIOOB is very real, however. :)\n", "created_at": "2014-12-18T18:40:14Z" }, { "body": "Thanks for making the bug reproducible, it helped a lot!\n", "created_at": "2014-12-22T14:32:32Z" }, { "body": "I built your fix/ branch and can confirm this indeed fixes up my standalone test cases.\n\nI'm glad you wrapped ParentChildIndexFieldData.java in a threaded test as it definitely missed the concurrency boat. Makes me wonder how it ever worked?\n", "created_at": "2014-12-22T16:01:03Z" }, { "body": "This is a bug that I introduced in elasticsearch 1.4 since we almost completely rewrote fielddata to be more in-line with Lucene (this removed a lot of useless wrapping when using doc values, and helped improve the performance of doc values when used from elasticsearch).\n", "created_at": "2014-12-22T16:27:58Z" }, { "body": "That makes sense. I didn't dig through the commit history but I was thinking that the concurrency responsibly had to be happening elsewhere at some point in the past.\n\nThanks for taking care of this so quickly. Happy Holidays!\n", "created_at": "2014-12-23T21:21:42Z" }, { "body": "You're welcome, happy holidays to you too!\n", "created_at": "2014-12-24T09:11:21Z" } ], "number": 8396, "title": "ArrayIndexOutOfBoundsException in murmur hash with has_child" }
{ "body": "`_parent` field data mistakenly shared some stateful data-structures\n(`SortedDocValues`) across threads.\n\nClose #8396\n", "number": 9030, "review_comments": [ { "body": "Maybe just throw Exception? (the error variable generic type should then get changed as well)\n", "created_at": "2014-12-23T09:33:59Z" } ], "title": "Fix concurrency issues of the _parent field data." }
{ "commits": [ { "message": "Parent/child: Fix concurrency issues of the _parent field data.\n\n`_parent` field data mistakenly shared some stateful data-structures across\nthreads.\n\nClose #8396" }, { "message": "Review round 1" } ], "files": [ { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.hppc.ObjectObjectOpenHashMap;\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n import com.google.common.collect.ImmutableSortedSet;\n+\n import org.apache.lucene.index.*;\n import org.apache.lucene.index.MultiDocValues.OrdinalMap;\n import org.apache.lucene.util.Accountable;\n@@ -35,6 +36,8 @@\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.breaker.CircuitBreaker;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.lease.Releasable;\n+import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.unit.TimeValue;\n import org.elasticsearch.index.Index;\n@@ -271,71 +274,49 @@ public IndexParentChildFieldData loadGlobal(IndexReader indexReader) {\n }\n }\n \n+ private static OrdinalMap buildOrdinalMap(AtomicParentChildFieldData[] atomicFD, String parentType) throws IOException {\n+ final SortedDocValues[] ordinals = new SortedDocValues[atomicFD.length];\n+ for (int i = 0; i < ordinals.length; ++i) {\n+ ordinals[i] = atomicFD[i].getOrdinalsValues(parentType);\n+ }\n+ return OrdinalMap.build(null, ordinals, PackedInts.DEFAULT);\n+ }\n+\n+ private static class OrdinalMapAndAtomicFieldData {\n+ final OrdinalMap ordMap;\n+ final AtomicParentChildFieldData[] fieldData;\n+\n+ public OrdinalMapAndAtomicFieldData(OrdinalMap ordMap, AtomicParentChildFieldData[] fieldData) {\n+ this.ordMap = ordMap;\n+ this.fieldData = fieldData;\n+ }\n+ }\n+\n @Override\n public IndexParentChildFieldData localGlobalDirect(IndexReader indexReader) throws Exception {\n final long startTime = System.nanoTime();\n- final Map<String, SortedDocValues[]> types = new HashMap<>();\n+ final Set<String> parentTypes = new HashSet<>();\n synchronized (lock) {\n- for (BytesRef type : parentTypes) {\n- final SortedDocValues[] values = new SortedDocValues[indexReader.leaves().size()];\n- Arrays.fill(values, DocValues.emptySorted());\n- types.put(type.utf8ToString(), values);\n+ for (BytesRef type : this.parentTypes) {\n+ parentTypes.add(type.utf8ToString());\n }\n }\n \n- for (Map.Entry<String, SortedDocValues[]> entry : types.entrySet()) {\n- final String parentType = entry.getKey();\n- final SortedDocValues[] values = entry.getValue();\n+ long ramBytesUsed = 0;\n+ final Map<String, OrdinalMapAndAtomicFieldData> perType = new HashMap<>();\n+ for (String type : parentTypes) {\n+ final AtomicParentChildFieldData[] fieldData = new AtomicParentChildFieldData[indexReader.leaves().size()];\n for (LeafReaderContext context : indexReader.leaves()) {\n- SortedDocValues vals = load(context).getOrdinalsValues(parentType);\n- if (vals != null) {\n- values[context.ord] = vals;\n- }\n+ fieldData[context.ord] = load(context);\n }\n+ final OrdinalMap ordMap = buildOrdinalMap(fieldData, type);\n+ ramBytesUsed += ordMap.ramBytesUsed();\n+ perType.put(type, new OrdinalMapAndAtomicFieldData(ordMap, fieldData));\n }\n \n- long ramBytesUsed = 0;\n- @SuppressWarnings(\"unchecked\")\n- final Map<String, SortedDocValues>[] global = new Map[indexReader.leaves().size()];\n- for (Map.Entry<String, SortedDocValues[]> entry : types.entrySet()) {\n- final String parentType = entry.getKey();\n- final 
SortedDocValues[] values = entry.getValue();\n- final OrdinalMap ordinalMap = OrdinalMap.build(null, entry.getValue(), PackedInts.DEFAULT);\n- ramBytesUsed += ordinalMap.ramBytesUsed();\n- for (int i = 0; i < values.length; ++i) {\n- final SortedDocValues segmentValues = values[i];\n- final LongValues globalOrds = ordinalMap.getGlobalOrds(i);\n- final SortedDocValues globalSortedValues = new SortedDocValues() {\n- @Override\n- public BytesRef lookupOrd(int ord) {\n- final int segmentNum = ordinalMap.getFirstSegmentNumber(ord);\n- final int segmentOrd = (int) ordinalMap.getFirstSegmentOrd(ord);\n- return values[segmentNum].lookupOrd(segmentOrd);\n- }\n-\n- @Override\n- public int getValueCount() {\n- return (int) ordinalMap.getValueCount();\n- }\n-\n- @Override\n- public int getOrd(int docID) {\n- final int segmentOrd = segmentValues.getOrd(docID);\n- // TODO: is there a way we can get rid of this branch?\n- if (segmentOrd >= 0) {\n- return (int) globalOrds.get(segmentOrd);\n- } else {\n- return segmentOrd;\n- }\n- }\n- };\n- Map<String, SortedDocValues> perSegmentGlobal = global[i];\n- if (perSegmentGlobal == null) {\n- perSegmentGlobal = new HashMap<>(1);\n- global[i] = perSegmentGlobal;\n- }\n- perSegmentGlobal.put(parentType, globalSortedValues);\n- }\n+ final AtomicParentChildFieldData[] fielddata = new AtomicParentChildFieldData[indexReader.leaves().size()];\n+ for (int i = 0; i < fielddata.length; ++i) {\n+ fielddata[i] = new GlobalAtomicFieldData(parentTypes, perType, i);\n }\n \n breakerService.getBreaker(CircuitBreaker.Name.FIELDDATA).addWithoutBreaking(ramBytesUsed);\n@@ -346,59 +327,102 @@ public int getOrd(int docID) {\n );\n }\n \n- return new GlobalFieldData(indexReader, global, ramBytesUsed);\n+ return new GlobalFieldData(indexReader, fielddata, ramBytesUsed);\n }\n \n- private class GlobalFieldData implements IndexParentChildFieldData, Accountable {\n+ private static class GlobalAtomicFieldData extends AbstractAtomicParentChildFieldData {\n \n- private final AtomicParentChildFieldData[] atomicFDs;\n- private final IndexReader reader;\n- private final long ramBytesUsed;\n+ private final Set<String> types;\n+ private final Map<String, OrdinalMapAndAtomicFieldData> atomicFD;\n+ private final int segmentIndex;\n \n- GlobalFieldData(IndexReader reader, final Map<String, SortedDocValues>[] globalValues, long ramBytesUsed) {\n- this.reader = reader;\n- this.ramBytesUsed = ramBytesUsed;\n- this.atomicFDs = new AtomicParentChildFieldData[globalValues.length];\n- for (int i = 0; i < globalValues.length; ++i) {\n- final int ord = i;\n- atomicFDs[i] = new AbstractAtomicParentChildFieldData() {\n- @Override\n- public long ramBytesUsed() {\n- return 0;\n- }\n+ public GlobalAtomicFieldData(Set<String> types, Map<String, OrdinalMapAndAtomicFieldData> atomicFD, int segmentIndex) {\n+ this.types = types;\n+ this.atomicFD = atomicFD;\n+ this.segmentIndex = segmentIndex;\n+ }\n \n- @Override\n- public Iterable<Accountable> getChildResources() {\n- // TODO: is this really the best?\n- return Collections.emptyList();\n- }\n+ @Override\n+ public Set<String> types() {\n+ return types;\n+ }\n \n- @Override\n- public void close() {\n- }\n+ @Override\n+ public SortedDocValues getOrdinalsValues(String type) {\n+ final OrdinalMapAndAtomicFieldData atomicFD = this.atomicFD.get(type);\n+ final OrdinalMap ordMap = atomicFD.ordMap;\n+ final SortedDocValues[] allSegmentValues = new SortedDocValues[atomicFD.fieldData.length];\n+ for (int i = 0; i < allSegmentValues.length; ++i) {\n+ allSegmentValues[i] = 
atomicFD.fieldData[i].getOrdinalsValues(type);\n+ }\n+ final SortedDocValues segmentValues = allSegmentValues[segmentIndex];\n+ if (segmentValues.getValueCount() == ordMap.getValueCount()) {\n+ // ords are already global\n+ return segmentValues;\n+ }\n+ final LongValues globalOrds = ordMap.getGlobalOrds(segmentIndex);\n+ return new SortedDocValues() {\n+\n+ @Override\n+ public BytesRef lookupOrd(int ord) {\n+ final int segmentIndex = ordMap.getFirstSegmentNumber(ord);\n+ final int segmentOrd = (int) ordMap.getFirstSegmentOrd(ord);\n+ return allSegmentValues[segmentIndex].lookupOrd(segmentOrd);\n+ }\n \n- @Override\n- public Set<String> types() {\n- return Collections.unmodifiableSet(globalValues[ord].keySet());\n- }\n+ @Override\n+ public int getValueCount() {\n+ return (int) ordMap.getValueCount();\n+ }\n \n- @Override\n- public SortedDocValues getOrdinalsValues(String type) {\n- SortedDocValues dv = globalValues[ord].get(type);\n- if (dv == null) {\n- dv = DocValues.emptySorted();\n- }\n- return dv;\n+ @Override\n+ public int getOrd(int docID) {\n+ final int segmentOrd = segmentValues.getOrd(docID);\n+ // TODO: is there a way we can get rid of this branch?\n+ if (segmentOrd >= 0) {\n+ return (int) globalOrds.get(segmentOrd);\n+ } else {\n+ return segmentOrd;\n }\n- };\n- } \n+ }\n+ };\n+ }\n+\n+ @Override\n+ public long ramBytesUsed() {\n+ // this class does not take memory on its own, the index-level field data does\n+ // it through the use of ordinal maps\n+ return 0;\n }\n \n @Override\n public Iterable<Accountable> getChildResources() {\n return Collections.emptyList();\n }\n \n+ @Override\n+ public void close() throws ElasticsearchException {\n+ List<Releasable> closeables = new ArrayList<>();\n+ for (OrdinalMapAndAtomicFieldData fds : atomicFD.values()) {\n+ closeables.addAll(Arrays.asList(fds.fieldData));\n+ }\n+ Releasables.close(closeables);\n+ }\n+\n+ }\n+\n+ private class GlobalFieldData implements IndexParentChildFieldData, Accountable {\n+\n+ private final AtomicParentChildFieldData[] fielddata;\n+ private final IndexReader reader;\n+ private final long ramBytesUsed;\n+\n+ GlobalFieldData(IndexReader reader, AtomicParentChildFieldData[] fielddata, long ramBytesUsed) {\n+ this.reader = reader;\n+ this.ramBytesUsed = ramBytesUsed;\n+ this.fielddata = fielddata;\n+ }\n+\n @Override\n public Names getFieldNames() {\n return ParentChildIndexFieldData.this.getFieldNames();\n@@ -412,7 +436,7 @@ public FieldDataType getFieldDataType() {\n @Override\n public AtomicParentChildFieldData load(LeafReaderContext context) {\n assert context.reader().getCoreCacheKey() == reader.leaves().get(context.ord).reader().getCoreCacheKey();\n- return atomicFDs[context.ord];\n+ return fielddata[context.ord];\n }\n \n @Override\n@@ -445,6 +469,11 @@ public long ramBytesUsed() {\n return ramBytesUsed;\n }\n \n+ @Override\n+ public Iterable<Accountable> getChildResources() {\n+ return Collections.emptyList();\n+ }\n+\n @Override\n public IndexParentChildFieldData loadGlobal(IndexReader indexReader) {\n if (indexReader.getCoreCacheKey() == reader.getCoreCacheKey()) {", "filename": "src/main/java/org/elasticsearch/index/fielddata/plain/ParentChildIndexFieldData.java", "status": "modified" }, { "diff": "@@ -23,19 +23,34 @@\n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.StringField;\n import org.apache.lucene.index.DirectoryReader;\n-import org.apache.lucene.search.*;\n+import org.apache.lucene.index.LeafReaderContext;\n+import org.apache.lucene.index.SortedDocValues;\n+import 
org.apache.lucene.search.FieldDoc;\n+import org.apache.lucene.search.IndexSearcher;\n+import org.apache.lucene.search.MatchAllDocsQuery;\n+import org.apache.lucene.search.Sort;\n+import org.apache.lucene.search.SortField;\n+import org.apache.lucene.search.TopFieldDocs;\n import org.apache.lucene.util.BytesRef;\n import org.elasticsearch.action.admin.indices.mapping.put.PutMappingRequest;\n import org.elasticsearch.common.compress.CompressedString;\n import org.elasticsearch.index.fielddata.IndexFieldData.XFieldComparatorSource;\n+import org.elasticsearch.index.fielddata.plain.ParentChildIndexFieldData;\n import org.elasticsearch.index.mapper.Uid;\n import org.elasticsearch.index.mapper.internal.ParentFieldMapper;\n import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n import org.elasticsearch.search.MultiValueMode;\n import org.junit.Before;\n import org.junit.Test;\n \n-import static org.hamcrest.Matchers.*;\n+import java.util.HashMap;\n+import java.util.Map;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.atomic.AtomicReference;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.greaterThan;\n+import static org.hamcrest.Matchers.nullValue;\n \n /**\n */\n@@ -184,6 +199,62 @@ public void testSorting() throws Exception {\n assertThat(((FieldDoc) topDocs.scoreDocs[7]).fields[0], nullValue());\n }\n \n+ public void testThreads() throws Exception {\n+ final ParentChildIndexFieldData indexFieldData = getForField(childType);\n+ final DirectoryReader reader = DirectoryReader.open(writer, true);\n+ final IndexParentChildFieldData global = indexFieldData.loadGlobal(reader);\n+ final AtomicReference<Exception> error = new AtomicReference<>();\n+ final int numThreads = scaledRandomIntBetween(3, 8);\n+ final Thread[] threads = new Thread[numThreads];\n+ final CountDownLatch latch = new CountDownLatch(1);\n+\n+ final Map<Object, BytesRef[]> expected = new HashMap<>();\n+ for (LeafReaderContext context : reader.leaves()) {\n+ AtomicParentChildFieldData leafData = global.load(context);\n+ SortedDocValues parentIds = leafData.getOrdinalsValues(parentType);\n+ final BytesRef[] ids = new BytesRef[parentIds.getValueCount()];\n+ for (int j = 0; j < parentIds.getValueCount(); ++j) {\n+ final BytesRef id = parentIds.lookupOrd(j);\n+ if (id != null) {\n+ ids[j] = BytesRef.deepCopyOf(id);\n+ }\n+ }\n+ expected.put(context.reader().getCoreCacheKey(), ids);\n+ }\n+\n+ for (int i = 0; i < numThreads; ++i) {\n+ threads[i] = new Thread() {\n+ @Override\n+ public void run() {\n+ try {\n+ latch.await();\n+ for (int i = 0; i < 100000; ++i) {\n+ for (LeafReaderContext context : reader.leaves()) {\n+ AtomicParentChildFieldData leafData = global.load(context);\n+ SortedDocValues parentIds = leafData.getOrdinalsValues(parentType);\n+ final BytesRef[] expectedIds = expected.get(context.reader().getCoreCacheKey());\n+ for (int j = 0; j < parentIds.getValueCount(); ++j) {\n+ final BytesRef id = parentIds.lookupOrd(j);\n+ assertEquals(expectedIds[j], id);\n+ }\n+ }\n+ }\n+ } catch (Exception e) {\n+ error.compareAndSet(null, e);\n+ }\n+ }\n+ };\n+ threads[i].start();\n+ }\n+ latch.countDown();\n+ for (Thread thread : threads) {\n+ thread.join();\n+ }\n+ if (error.get() != null) {\n+ throw error.get();\n+ }\n+ }\n+\n @Override\n protected FieldDataType getFieldDataType() {\n return new FieldDataType(\"_parent\");", "filename": "src/test/java/org/elasticsearch/index/fielddata/ParentChildFieldDataTests.java", "status": "modified" } ] }
{ "body": "In version 1.4.2, we can't install a plugin when the bin/ and plugin/ directories are located on different filesystems.\n\nThe exception is:\n\n```\nFailed to install test, reason: Could not move [/tmp/elasticsearch/plugins/test/bin] to [/var/elasticsearch/elasticsearch-1.4.2/bin/test]\n```\n\nThe PluginManager uses a [renameTo()](https://github.com/elasticsearch/elasticsearch/blob/1.4/src/main/java/org/elasticsearch/plugins/PluginManager.java#L241) method but the `renameTo` operation might not be able to move a file from one filesystem to another.\n\nRelated to https://github.com/elasticsearch/elasticsearch/commit/921e028e99e332ab58d0956ef77264b9a28359f3 \n", "comments": [], "number": 8999, "title": "Plugin installation failed when bin/ and plugins/ directories are on different filessystems" }
{ "body": "Plugin installation failed when bin/, conf/ and plugins/ directories are on different file systems. The method File.move() can't be used to move a non-empty directory between different file systems.\n\nI didn't find a simple way to unittest that, even with in-memory filesystems like jimfs or with the Lucene test framework.\n\nCloses #8999\n", "number": 9011, "review_comments": [ { "body": "can we call this `move` and maybe just delegate to `Files.move` without throwing the exception? no need for the caller to know that the source is a directory?\n", "created_at": "2014-12-22T23:09:05Z" } ], "title": "Installation failed when directories are on different file systems" }
{ "commits": [ { "message": "Plugins: Installation failed when bin/ and plugins/ directories are on different filesystems\n\nPlugin installation failed when bin/, conf/ and plugins/ directories are on different file systems. The method File.move() can't be used to move a non-empty directory between different filesystems.\n\nI didn't find a simple way to unittest that, even with in-memory filesystems like jimfs or the Lucene test framework.\n\nCloses #8999" } ], "files": [ { "diff": "@@ -20,7 +20,6 @@\n package org.elasticsearch.common.io;\n \n import com.google.common.collect.Iterators;\n-import com.google.common.collect.Sets;\n import org.apache.lucene.util.IOUtils;\n import org.elasticsearch.common.logging.ESLogger;\n \n@@ -33,7 +32,6 @@\n import java.nio.charset.CharsetDecoder;\n import java.nio.file.*;\n import java.nio.file.attribute.BasicFileAttributes;\n-import java.util.List;\n import java.util.concurrent.atomic.AtomicBoolean;\n \n import static java.nio.file.FileVisitResult.CONTINUE;\n@@ -185,7 +183,7 @@ public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) th\n if (!Files.exists(path)) {\n // We just move the structure to new dir\n // we can't do atomic move here since src / dest might be on different mounts?\n- Files.move(dir, path);\n+ move(dir, path);\n // We just ignore sub files from here\n return FileVisitResult.SKIP_SUBTREE;\n }\n@@ -224,16 +222,34 @@ public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IO\n * @param destination destination dir\n */\n public static void copyDirectoryRecursively(Path source, Path destination) throws IOException {\n- Files.walkFileTree(source, new TreeCopier(source, destination));\n+ Files.walkFileTree(source, new TreeCopier(source, destination, false));\n+ }\n+\n+ /**\n+ * Move or rename a file to a target file. 
This method supports moving a file from\n+ * different filesystems (not supported by Files.move()).\n+ *\n+ * @param source source file\n+ * @param destination destination file\n+ */\n+ public static void move(Path source, Path destination) throws IOException {\n+ try {\n+ // We can't use atomic move here since source & target can be on different filesystems.\n+ Files.move(source, destination);\n+ } catch (DirectoryNotEmptyException e) {\n+ Files.walkFileTree(source, new TreeCopier(source, destination, true));\n+ }\n }\n \n static class TreeCopier extends SimpleFileVisitor<Path> {\n private final Path source;\n private final Path target;\n+ private final boolean delete;\n \n- TreeCopier(Path source, Path target) {\n+ TreeCopier(Path source, Path target, boolean delete) {\n this.source = source;\n this.target = target;\n+ this.delete = delete;\n }\n \n @Override\n@@ -249,11 +265,22 @@ public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) {\n return CONTINUE;\n }\n \n+ @Override\n+ public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {\n+ if (delete) {\n+ IOUtils.rm(dir);\n+ }\n+ return CONTINUE;\n+ }\n+\n @Override\n public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {\n Path newFile = target.resolve(source.relativize(file));\n try {\n Files.copy(file, newFile);\n+ if ((delete) && (Files.exists(newFile))) {\n+ Files.delete(file);\n+ }\n } catch (IOException x) {\n // We ignore this\n }", "filename": "src/main/java/org/elasticsearch/common/io/FileSystemUtils.java", "status": "modified" }, { "diff": "@@ -243,7 +243,11 @@ public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IO\n if (Files.exists(toLocation)) {\n IOUtils.rm(toLocation);\n }\n- Files.move(binFile, toLocation);\n+ try {\n+ FileSystemUtils.move(binFile, toLocation);\n+ } catch (IOException e) {\n+ throw new IOException(\"Could not move [\" + binFile + \"] to [\" + toLocation + \"]\", e);\n+ }\n if (Files.getFileStore(toLocation).supportsFileAttributeView(PosixFileAttributeView.class)) {\n final Set<PosixFilePermission> perms = new HashSet<>();\n perms.add(PosixFilePermission.OWNER_EXECUTE);", "filename": "src/main/java/org/elasticsearch/plugins/PluginManager.java", "status": "modified" } ] }
{ "body": "When a node tries to join a master, the master may not yet be ready to accept the join request. In such cases we retry sending the join request up to 3 times before going back to ping. To detect this the current logic uses ExceptionsHelper.unwrapCause(t) to unwrap the incoming RemoteTransportException and inspect it's source, looking for `ElasticsearchIllegalStateException`. However, local `ElasticsearchIllegalStateException` can also be thrown when the join process should be cancelled (i.e., node shut down). In this case we shouldn't retry.\n\nThe PR adds an explicit `NotMasterException` to indicate the remote node is not a master. A similarly named exception (but meaning something else) in the master fault detection code was given a better name. Also clean up some other exceptions while at it.\n\nSee http://build-us-00.elasticsearch.org/job/es_g1gc_master_metal/499/testReport/junit/org.elasticsearch.discovery.zen/ZenDiscoveryTests/testNodeFailuresAreProcessedOnce/ for a test that gets confused by the extra join\n", "comments": [ { "body": "@kimchy I updated the PR to use an explicit exception. Also cleaned up some unused exceptions. I'll update the PR description + commit msg once the review is done.\n", "created_at": "2014-12-16T21:58:51Z" }, { "body": "LGTM\n", "created_at": "2014-12-16T22:04:01Z" } ], "number": 8972, "title": "Only retry join when other node is not (yet) a master" }
{ "body": "When a node tries to join a master, the master may not yet be ready to accept the join request. In such cases we retry sending the join request up to 3 times before going back to ping. To detect this the current logic uses ExceptionsHelper.unwrapCause(t) to unwrap the incoming RemoteTransportException and inspect it's source, looking for `ElasticsearchIllegalStateException`. However, local `ElasticsearchIllegalStateException` can also be thrown when the join process should be cancelled (i.e., node shut down). In this case we shouldn't retry.\n\nSince we can't introduce new exceptions in a BWC manner, we are forced to check the message of the exception.\n\nRelates to #8972\n", "number": 8979, "review_comments": [], "title": "Discovery: only retry join when other node is not (yet) a master" }
{ "commits": [ { "message": "Discovery: only retry join when other node is not (yet) a master\n\nWhen a node tries to join a master, the master may not yet be ready to accept the join request. In such cases we retry sending the join request up to 3 times before going back to ping. To detect this the current logic uses ExceptionsHelper.unwrapCause(t) to unwrap the incoming RemoteTransportException and inspect it's source, looking for `ElasticsearchIllegalStateException`. However, local `ElasticsearchIllegalStateException` can also be thrown when the join process should be cancelled (i.e., node shut down). In this case we shouldn't retry.\n\nSince we can't introduce new exceptions in a BWC manner, we are forced to check the message of the exception.\n\nRelates to #8972" } ], "files": [ { "diff": "@@ -472,7 +472,9 @@ private boolean joinElectedMaster(DiscoveryNode masterNode) {\n return true;\n } catch (Throwable t) {\n Throwable unwrap = ExceptionsHelper.unwrapCause(t);\n- if (unwrap instanceof ElasticsearchIllegalStateException) {\n+ // With #8972 we add an explicit exception to indicate we should retry. We can't do this in a bwc manner\n+ // so we are forced to check for message text here.\n+ if (unwrap instanceof ElasticsearchIllegalStateException && unwrap.getMessage().contains(\"not master for join request\")) {\n if (++joinAttempt == this.joinRetryAttempts) {\n logger.info(\"failed to send join request to master [{}], reason [{}], tried [{}] times\", masterNode, ExceptionsHelper.detailedMessage(t), joinAttempt);\n return false;", "filename": "src/main/java/org/elasticsearch/discovery/zen/ZenDiscovery.java", "status": "modified" } ] }
{ "body": "In order to create a new snapshot or delete an existing snapshot, elasticsearch has to load all existing shard level snapshots to figure out which files need to be copied and which files can be cleaned. The number of files to be checked is equal to `number_of_shards * number_of_snapshots`, which on a large clusters and frequent snapshots can lead to very long operation times especially with non-filesystem repositories. See https://github.com/elasticsearch/elasticsearch-cloud-aws/issues/150 and [this group post](https://groups.google.com/d/msg/elasticsearch/cXl9UrqgwBI/gEX0gSj2yqgJ) for examples of issues that this behavior is causing.\n", "comments": [ { "body": "Just wanted to chime in, this issue has affected us a great deal as well. It made \"sense\" after I thought it through, how ES snapshotting works, but was an unpleasant surprise. \n", "created_at": "2015-01-23T18:26:44Z" }, { "body": "I seem to be seeing this behavior with azure blob storage after upgrading to 1.7.5\n", "created_at": "2016-03-16T19:24:54Z" }, { "body": "@niemyjski It was fixed in #8969 in 2.0.0 and above. The fix wasn't backported to 1.7.5. \n", "created_at": "2016-03-21T15:48:22Z" }, { "body": "And, if you've read this far and were wondering if the fix for this might ever get backported to 1.x... the answer is apparently not:\n\n@ https://github.com/elastic/elasticsearch/pull/8969#issuecomment-199348984 : imotov says\n\n> this was a significant change that required changing the snapshot file format and it was too big of a change for a patch level release. So we didn't port to 1.x and there are no current plans to do it.\n", "created_at": "2016-06-29T00:23:39Z" } ], "number": 8958, "title": "Snapshot deletion and creation slow down as number of snapshots in repository grows" }
{ "body": " Fixes #8958\n", "number": 8969, "review_comments": [ { "body": "Maybe use the method readSnapshot(stream) again?\n", "created_at": "2015-01-26T09:58:50Z" }, { "body": "I find this comment confusing now\n", "created_at": "2015-01-26T10:39:07Z" }, { "body": "Looks unused\n", "created_at": "2015-01-26T10:44:46Z" }, { "body": "Unused imports\n", "created_at": "2015-01-26T10:55:59Z" }, { "body": "The snapshot file can be compressed, do you think this snapshot index file should be compressed too?\n", "created_at": "2015-01-26T11:06:12Z" }, { "body": "Good point, will add compression to it.\n", "created_at": "2015-01-26T17:30:52Z" }, { "body": "I cannot reuse it here because this method reads multiple snapshots. But I can definitely add a new method `readSnapshots()` similiar to `readSnapshot()` and use it here.\n", "created_at": "2015-01-26T17:38:46Z" }, { "body": "Indeed, I inverted code but forgot to invert the comment. Will fix.\n", "created_at": "2015-01-26T18:41:07Z" }, { "body": "Do you need to close the stream after reading it here?\n", "created_at": "2015-02-05T19:20:45Z" }, { "body": "Same here for the stream?\n", "created_at": "2015-02-05T19:21:02Z" }, { "body": "can you add an explanation of `fileListGeneration` here?\n", "created_at": "2015-02-05T19:22:25Z" }, { "body": "Why +1? Because of the finalizing file?\n", "created_at": "2015-02-05T19:22:52Z" }, { "body": "It's responsibility of whoever opened it (caller).\n", "created_at": "2015-02-05T19:23:46Z" }, { "body": "Same here\n", "created_at": "2015-02-05T19:23:55Z" }, { "body": "Might be better to move this into the try-with-resources so it is cleaner to close?\n", "created_at": "2015-02-05T19:24:13Z" }, { "body": "Oh, or maybe you can't because of the compressor stream below?\n", "created_at": "2015-02-05T19:25:07Z" }, { "body": "Because it's next generation, which is previous generation + 1.\n", "created_at": "2015-02-05T19:26:03Z" }, { "body": "Maybe `\"__\"` should be a static var so it can be documented?\n", "created_at": "2015-02-05T19:26:59Z" }, { "body": "Indeed compressor is ruining it. \n", "created_at": "2015-02-05T19:27:13Z" }, { "body": "This javadoc needs to be updated now for the new return type\n", "created_at": "2015-02-05T19:27:32Z" }, { "body": "Again, maybe another comment about why the +1\n", "created_at": "2015-02-05T19:29:03Z" }, { "body": "This javadoc also needs to be updated\n", "created_at": "2015-02-05T19:31:30Z" }, { "body": "Would it be worth transforming the list of `FileInfo`s into a map when `SnapshotFiles` is created so that it doesn't have to look through the list every time for `containPhysicalIndexFile` and `findPhysicalIndexFile`?\n", "created_at": "2015-02-05T19:34:50Z" }, { "body": "Can use the '{}' syntax here for `DATA_BLOB_PREFIX`\n", "created_at": "2015-05-04T21:01:42Z" }, { "body": "double space here\n", "created_at": "2015-05-04T21:02:18Z" }, { "body": "Holy cow, this thing reads like House of Leaves, can we change it to something like:\n\n``` java\nif (ParseFields.SNAPSHOTS.match(currentFieldName) == false) {\n throw new WhateverException(...);\n}\nwhile ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token != XContentParser.Token.FIELD_NAME) {\n throw new WhateverException(...);\n }\n if (parser.nextToken != XContentParser.Token.START_OBJECT) {\n throw new WhateverException(...);\n }\n while ((token = parser.nextToken() != XContentToken.END_OBJECT) {\n ... 
etc ...\n }\n}\n```\n", "created_at": "2015-05-04T21:13:59Z" }, { "body": "Javadoc?\n", "created_at": "2015-05-04T21:14:38Z" }, { "body": "Or we can let poorly formatted values pass through and throw exceptions at the end if values are missing, similar to how we do for queries\n", "created_at": "2015-05-04T21:16:34Z" }, { "body": "This is fallback right? For older repos that don't have the new snapshot index file?\n", "created_at": "2015-06-02T16:52:03Z" }, { "body": "We shouldn't ignore this exception right? This could mean that the Directory couldn't be listed, and as a result none of the files that should be removed on restore are being removed. Should we at least warn here?\n", "created_at": "2015-06-02T16:57:33Z" } ], "title": "Improve snapshot creation and deletion performance on repositories with large number of snapshots" }
{ "commits": [ { "message": "Improve snapshot creation and deletion performance on repositories with large number of snapshots\n\nEach shard repository consists of snapshot file for each snapshot - this file contains a map between original physical file that is snapshotted and its representation in repository. This data includes original filename, checksum and length. When a new snapshot is created, elasticsearch needs to read all these snapshot files to figure which file are already present in the repository and which files still have to be copied there. This change adds a new index file that contains all this information combined into a single file. So, if a repository has 1000 snapshots with 1000 shards elasticsearch will only need to read 1000 blobs (one per shard) instead of 1,000,000 to delete a snapshot. This change should also improve snapshot creation speed on repositories with large number of snapshot and high latency.\n\nFixes #8958" } ], "files": [ { "diff": "@@ -19,9 +19,11 @@\n \n package org.elasticsearch.index.snapshots.blobstore;\n \n+import com.google.common.collect.ImmutableList;\n import com.google.common.collect.ImmutableMap;\n import com.google.common.collect.Iterables;\n import com.google.common.collect.Lists;\n+import com.google.common.io.ByteStreams;\n import org.apache.lucene.index.CorruptIndexException;\n import org.apache.lucene.index.IndexFormatTooNewException;\n import org.apache.lucene.index.IndexFormatTooOldException;\n@@ -41,8 +43,11 @@\n import org.elasticsearch.common.blobstore.BlobMetaData;\n import org.elasticsearch.common.blobstore.BlobPath;\n import org.elasticsearch.common.blobstore.BlobStore;\n+import org.elasticsearch.common.bytes.BytesArray;\n+import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.component.AbstractComponent;\n import org.elasticsearch.common.inject.Inject;\n+import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.lucene.store.InputStreamIndexInput;\n import org.elasticsearch.common.settings.Settings;\n@@ -65,10 +70,10 @@\n import java.io.InputStream;\n import java.io.OutputStream;\n import java.util.*;\n-import java.util.concurrent.CopyOnWriteArrayList;\n \n import static com.google.common.collect.Lists.newArrayList;\n import static org.elasticsearch.repositories.blobstore.BlobStoreRepository.testBlobPrefix;\n+import static org.elasticsearch.repositories.blobstore.BlobStoreRepository.toStreamOutput;\n \n /**\n * Blob store based implementation of IndexShardRepository\n@@ -96,7 +101,15 @@ public class BlobStoreIndexShardRepository extends AbstractComponent implements\n \n private RateLimitingInputStream.Listener snapshotThrottleListener;\n \n- private static final String SNAPSHOT_PREFIX = \"snapshot-\";\n+ private boolean compress;\n+\n+ protected static final String SNAPSHOT_PREFIX = \"snapshot-\";\n+\n+ protected static final String SNAPSHOT_INDEX_PREFIX = \"index-\";\n+\n+ protected static final String SNAPSHOT_TEMP_PREFIX = \"pending-\";\n+\n+ protected static final String DATA_BLOB_PREFIX = \"__\";\n \n @Inject\n public BlobStoreIndexShardRepository(Settings settings, RepositoryName repositoryName, IndicesService indicesService, ClusterService clusterService) {\n@@ -115,7 +128,7 @@ public BlobStoreIndexShardRepository(Settings settings, RepositoryName repositor\n */\n public void initialize(BlobStore blobStore, BlobPath basePath, ByteSizeValue chunkSize,\n RateLimiter snapshotRateLimiter, RateLimiter 
restoreRateLimiter,\n- final RateLimiterListener rateLimiterListener) {\n+ final RateLimiterListener rateLimiterListener, boolean compress) {\n this.blobStore = blobStore;\n this.basePath = basePath;\n this.chunkSize = chunkSize;\n@@ -128,6 +141,7 @@ public void onPause(long nanos) {\n rateLimiterListener.onSnapshotPause(nanos);\n }\n };\n+ this.compress = compress;\n }\n \n /**\n@@ -232,11 +246,11 @@ private String snapshotBlobName(SnapshotId snapshotId) {\n * Serializes snapshot to JSON\n *\n * @param snapshot snapshot\n- * @param stream the stream to output the snapshot JSON represetation to\n+ * @param output the stream to output the snapshot JSON representation to\n * @throws IOException if an IOException occurs\n */\n- public static void writeSnapshot(BlobStoreIndexShardSnapshot snapshot, OutputStream stream) throws IOException {\n- XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON, stream).prettyPrint();\n+ public void writeSnapshot(BlobStoreIndexShardSnapshot snapshot, StreamOutput output) throws IOException {\n+ XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON, output).prettyPrint();\n BlobStoreIndexShardSnapshot.toXContent(snapshot, builder, ToXContent.EMPTY_PARAMS);\n builder.flush();\n }\n@@ -249,12 +263,36 @@ public static void writeSnapshot(BlobStoreIndexShardSnapshot snapshot, OutputStr\n * @throws IOException if an IOException occurs\n */\n public static BlobStoreIndexShardSnapshot readSnapshot(InputStream stream) throws IOException {\n- try (XContentParser parser = XContentFactory.xContent(XContentType.JSON).createParser(stream)) {\n+ byte[] data = ByteStreams.toByteArray(stream);\n+ try (XContentParser parser = XContentHelper.createParser(new BytesArray(data))) {\n parser.nextToken();\n return BlobStoreIndexShardSnapshot.fromXContent(parser);\n }\n }\n \n+ /**\n+ * Parses JSON representation of a snapshot\n+ *\n+ * @param stream JSON\n+ * @return snapshot\n+ * @throws IOException if an IOException occurs\n+ * */\n+ public static BlobStoreIndexShardSnapshots readSnapshots(InputStream stream) throws IOException {\n+ byte[] data = ByteStreams.toByteArray(stream);\n+ try (XContentParser parser = XContentHelper.createParser(new BytesArray(data))) {\n+ parser.nextToken();\n+ return BlobStoreIndexShardSnapshots.fromXContent(parser);\n+ }\n+ }\n+ /**\n+ * Returns true if metadata files should be compressed\n+ *\n+ * @return true if compression is needed\n+ */\n+ protected boolean isCompress() {\n+ return compress;\n+ }\n+\n /**\n * Context for snapshot/restore operations\n */\n@@ -287,7 +325,9 @@ public void delete() {\n throw new IndexShardSnapshotException(shardId, \"Failed to list content of gateway\", e);\n }\n \n- BlobStoreIndexShardSnapshots snapshots = buildBlobStoreIndexShardSnapshots(blobs);\n+ Tuple<BlobStoreIndexShardSnapshots, Integer> tuple = buildBlobStoreIndexShardSnapshots(blobs);\n+ BlobStoreIndexShardSnapshots snapshots = tuple.v1();\n+ int fileListGeneration = tuple.v2();\n \n String commitPointName = snapshotBlobName(snapshotId);\n \n@@ -297,15 +337,15 @@ public void delete() {\n logger.debug(\"[{}] [{}] failed to delete shard snapshot file\", shardId, snapshotId);\n }\n \n- // delete all files that are not referenced by any commit point\n- // build a new BlobStoreIndexShardSnapshot, that includes this one and all the saved ones\n- List<BlobStoreIndexShardSnapshot> newSnapshotsList = Lists.newArrayList();\n- for (BlobStoreIndexShardSnapshot point : snapshots) {\n+ // Build a list of snapshots that should be 
preserved\n+ List<SnapshotFiles> newSnapshotsList = Lists.newArrayList();\n+ for (SnapshotFiles point : snapshots) {\n if (!point.snapshot().equals(snapshotId.getSnapshot())) {\n newSnapshotsList.add(point);\n }\n }\n- cleanup(newSnapshotsList, blobs);\n+ // finalize the snapshot and rewrite the snapshot index with the next sequential snapshot index\n+ finalize(newSnapshotsList, fileListGeneration + 1, blobs);\n }\n \n /**\n@@ -322,26 +362,63 @@ public BlobStoreIndexShardSnapshot loadSnapshot() {\n }\n \n /**\n- * Removes all unreferenced files from the repository\n+ * Removes all unreferenced files from the repository and writes new index file\n+ *\n+ * We need to be really careful in handling index files in case of failures to make sure we have index file that\n+ * points to files that were deleted.\n+ *\n *\n * @param snapshots list of active snapshots in the container\n+ * @param fileListGeneration the generation number of the snapshot index file\n * @param blobs list of blobs in the container\n */\n- protected void cleanup(List<BlobStoreIndexShardSnapshot> snapshots, ImmutableMap<String, BlobMetaData> blobs) {\n+ protected void finalize(List<SnapshotFiles> snapshots, int fileListGeneration, ImmutableMap<String, BlobMetaData> blobs) {\n BlobStoreIndexShardSnapshots newSnapshots = new BlobStoreIndexShardSnapshots(snapshots);\n- // now go over all the blobs, and if they don't exists in a snapshot, delete them\n+ // delete old index files first\n for (String blobName : blobs.keySet()) {\n- if (!blobName.startsWith(\"__\")) {\n- continue;\n- }\n- if (newSnapshots.findNameFile(FileInfo.canonicalName(blobName)) == null) {\n+ // delete old file lists\n+ if (blobName.startsWith(SNAPSHOT_TEMP_PREFIX) || blobName.startsWith(SNAPSHOT_INDEX_PREFIX)) {\n try {\n blobContainer.deleteBlob(blobName);\n } catch (IOException e) {\n- logger.debug(\"[{}] [{}] error deleting blob [{}] during cleanup\", e, snapshotId, shardId, blobName);\n+ // We cannot delete index file - this is fatal, we cannot continue, otherwise we might end up\n+ // with references to non-existing files\n+ throw new IndexShardSnapshotFailedException(shardId, \"error deleting index file [{}] during cleanup\", e);\n+ }\n+ }\n+ }\n+\n+ // now go over all the blobs, and if they don't exists in a snapshot, delete them\n+ for (String blobName : blobs.keySet()) {\n+ // delete old file lists\n+ if (blobName.startsWith(DATA_BLOB_PREFIX)) {\n+ if (newSnapshots.findNameFile(FileInfo.canonicalName(blobName)) == null) {\n+ try {\n+ blobContainer.deleteBlob(blobName);\n+ } catch (IOException e) {\n+ logger.debug(\"[{}] [{}] error deleting blob [{}] during cleanup\", e, snapshotId, shardId, blobName);\n+ }\n }\n }\n }\n+\n+ // If we deleted all snapshots - we don't need to create the index file\n+ if (snapshots.size() > 0) {\n+ String newSnapshotIndexName = SNAPSHOT_INDEX_PREFIX + fileListGeneration;\n+ try (OutputStream output = blobContainer.createOutput(SNAPSHOT_TEMP_PREFIX + fileListGeneration)) {\n+ StreamOutput stream = compressIfNeeded(output);\n+ XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON, stream);\n+ newSnapshots.toXContent(builder, ToXContent.EMPTY_PARAMS);\n+ builder.flush();\n+ } catch (IOException e) {\n+ throw new IndexShardSnapshotFailedException(shardId, \"Failed to write file list\", e);\n+ }\n+ try {\n+ blobContainer.move(SNAPSHOT_TEMP_PREFIX + fileListGeneration, newSnapshotIndexName);\n+ } catch (IOException e) {\n+ throw new IndexShardSnapshotFailedException(shardId, \"Failed to rename file 
list\", e);\n+ }\n+ }\n }\n \n /**\n@@ -351,7 +428,7 @@ protected void cleanup(List<BlobStoreIndexShardSnapshot> snapshots, ImmutableMap\n * @return the blob name\n */\n protected String fileNameFromGeneration(long generation) {\n- return \"__\" + Long.toString(generation, Character.MAX_RADIX);\n+ return DATA_BLOB_PREFIX + Long.toString(generation, Character.MAX_RADIX);\n }\n \n /**\n@@ -363,17 +440,17 @@ protected String fileNameFromGeneration(long generation) {\n protected long findLatestFileNameGeneration(ImmutableMap<String, BlobMetaData> blobs) {\n long generation = -1;\n for (String name : blobs.keySet()) {\n- if (!name.startsWith(\"__\")) {\n+ if (!name.startsWith(DATA_BLOB_PREFIX)) {\n continue;\n }\n name = FileInfo.canonicalName(name);\n try {\n- long currentGen = Long.parseLong(name.substring(2) /*__*/, Character.MAX_RADIX);\n+ long currentGen = Long.parseLong(name.substring(DATA_BLOB_PREFIX.length()), Character.MAX_RADIX);\n if (currentGen > generation) {\n generation = currentGen;\n }\n } catch (NumberFormatException e) {\n- logger.warn(\"file [{}] does not conform to the '__' schema\");\n+ logger.warn(\"file [{}] does not conform to the '{}' schema\", name, DATA_BLOB_PREFIX);\n }\n }\n return generation;\n@@ -383,20 +460,47 @@ protected long findLatestFileNameGeneration(ImmutableMap<String, BlobMetaData> b\n * Loads all available snapshots in the repository\n *\n * @param blobs list of blobs in repository\n- * @return BlobStoreIndexShardSnapshots\n+ * @return tuple of BlobStoreIndexShardSnapshots and the last snapshot index generation\n */\n- protected BlobStoreIndexShardSnapshots buildBlobStoreIndexShardSnapshots(ImmutableMap<String, BlobMetaData> blobs) {\n- List<BlobStoreIndexShardSnapshot> snapshots = Lists.newArrayList();\n+ protected Tuple<BlobStoreIndexShardSnapshots, Integer> buildBlobStoreIndexShardSnapshots(ImmutableMap<String, BlobMetaData> blobs) {\n+ int latest = -1;\n+ for (String name : blobs.keySet()) {\n+ if (name.startsWith(SNAPSHOT_INDEX_PREFIX)) {\n+ try {\n+ int gen = Integer.parseInt(name.substring(SNAPSHOT_INDEX_PREFIX.length()));\n+ if (gen > latest) {\n+ latest = gen;\n+ }\n+ } catch (NumberFormatException ex) {\n+ logger.warn(\"failed to parse index file name [{}]\", name);\n+ }\n+ }\n+ }\n+ if (latest >= 0) {\n+ try (InputStream stream = blobContainer.openInput(SNAPSHOT_INDEX_PREFIX + latest)) {\n+ return new Tuple<>(readSnapshots(stream), latest);\n+ } catch (IOException e) {\n+ logger.warn(\"failed to read index file [{}]\", e, SNAPSHOT_INDEX_PREFIX + latest);\n+ }\n+ }\n+\n+ // We couldn't load the index file - falling back to loading individual snapshots\n+ List<SnapshotFiles> snapshots = Lists.newArrayList();\n for (String name : blobs.keySet()) {\n if (name.startsWith(SNAPSHOT_PREFIX)) {\n try (InputStream stream = blobContainer.openInput(name)) {\n- snapshots.add(readSnapshot(stream));\n+ BlobStoreIndexShardSnapshot snapshot = readSnapshot(stream);\n+ snapshots.add(new SnapshotFiles(snapshot.snapshot(), snapshot.indexFiles()));\n } catch (IOException e) {\n logger.warn(\"failed to read commit point [{}]\", e, name);\n }\n }\n }\n- return new BlobStoreIndexShardSnapshots(snapshots);\n+ return new Tuple<>(new BlobStoreIndexShardSnapshots(snapshots), -1);\n+ }\n+\n+ protected StreamOutput compressIfNeeded(OutputStream output) throws IOException {\n+ return toStreamOutput(output, isCompress());\n }\n \n }\n@@ -442,9 +546,10 @@ public void snapshot(SnapshotIndexCommit snapshotIndexCommit) {\n }\n \n long generation = 
findLatestFileNameGeneration(blobs);\n- BlobStoreIndexShardSnapshots snapshots = buildBlobStoreIndexShardSnapshots(blobs);\n+ Tuple<BlobStoreIndexShardSnapshots, Integer> tuple = buildBlobStoreIndexShardSnapshots(blobs);\n+ BlobStoreIndexShardSnapshots snapshots = tuple.v1();\n+ int fileListGeneration = tuple.v2();\n \n- final CopyOnWriteArrayList<Throwable> failures = new CopyOnWriteArrayList<>();\n final List<BlobStoreIndexShardSnapshot.FileInfo> indexCommitPointFiles = newArrayList();\n \n int indexNumberOfFiles = 0;\n@@ -464,31 +569,36 @@ public void snapshot(SnapshotIndexCommit snapshotIndexCommit) {\n }\n logger.trace(\"[{}] [{}] Processing [{}]\", shardId, snapshotId, fileName);\n final StoreFileMetaData md = metadata.get(fileName);\n- boolean snapshotRequired = false;\n- BlobStoreIndexShardSnapshot.FileInfo fileInfo = snapshots.findPhysicalIndexFile(fileName);\n- try {\n- // in 1.3.3 we added additional hashes for .si / segments_N files\n- // to ensure we don't double the space in the repo since old snapshots\n- // don't have this hash we try to read that hash from the blob store\n- // in a bwc compatible way.\n- maybeRecalculateMetadataHash(blobContainer, fileInfo, metadata);\n- } catch (Throwable e) {\n- logger.warn(\"{} Can't calculate hash from blob for file [{}] [{}]\", e, shardId, fileInfo.physicalName(), fileInfo.metadata());\n- }\n- if (fileInfo == null || !fileInfo.isSame(md) || !snapshotFileExistsInBlobs(fileInfo, blobs)) {\n- // commit point file does not exists in any commit point, or has different length, or does not fully exists in the listed blobs\n- snapshotRequired = true;\n+ FileInfo existingFileInfo = null;\n+ ImmutableList<FileInfo> filesInfo = snapshots.findPhysicalIndexFiles(fileName);\n+ if (filesInfo != null) {\n+ for (FileInfo fileInfo : filesInfo) {\n+ try {\n+ // in 1.3.3 we added additional hashes for .si / segments_N files\n+ // to ensure we don't double the space in the repo since old snapshots\n+ // don't have this hash we try to read that hash from the blob store\n+ // in a bwc compatible way.\n+ maybeRecalculateMetadataHash(blobContainer, fileInfo, metadata);\n+ } catch (Throwable e) {\n+ logger.warn(\"{} Can't calculate hash from blob for file [{}] [{}]\", e, shardId, fileInfo.physicalName(), fileInfo.metadata());\n+ }\n+ if (fileInfo.isSame(md) && snapshotFileExistsInBlobs(fileInfo, blobs)) {\n+ // a commit point file with the same name, size and checksum was already copied to repository\n+ // we will reuse it for this snapshot\n+ existingFileInfo = fileInfo;\n+ break;\n+ }\n+ }\n }\n-\n- if (snapshotRequired) {\n+ if (existingFileInfo == null) {\n indexNumberOfFiles++;\n indexTotalFilesSize += md.length();\n // create a new FileInfo\n BlobStoreIndexShardSnapshot.FileInfo snapshotFileInfo = new BlobStoreIndexShardSnapshot.FileInfo(fileNameFromGeneration(++generation), md, chunkSize);\n indexCommitPointFiles.add(snapshotFileInfo);\n filesToSnapshot.add(snapshotFileInfo);\n } else {\n- indexCommitPointFiles.add(fileInfo);\n+ indexCommitPointFiles.add(existingFileInfo);\n }\n }\n \n@@ -515,20 +625,21 @@ public void snapshot(SnapshotIndexCommit snapshotIndexCommit) {\n System.currentTimeMillis() - snapshotStatus.startTime(), indexNumberOfFiles, indexTotalFilesSize);\n //TODO: The time stored in snapshot doesn't include cleanup time.\n logger.trace(\"[{}] [{}] writing shard snapshot file\", shardId, snapshotId);\n- try (OutputStream output = blobContainer.createOutput(snapshotBlobName)) {\n+ try (StreamOutput output = 
compressIfNeeded(blobContainer.createOutput(snapshotBlobName))) {\n writeSnapshot(snapshot, output);\n } catch (IOException e) {\n throw new IndexShardSnapshotFailedException(shardId, \"Failed to write commit point\", e);\n }\n \n // delete all files that are not referenced by any commit point\n // build a new BlobStoreIndexShardSnapshot, that includes this one and all the saved ones\n- List<BlobStoreIndexShardSnapshot> newSnapshotsList = Lists.newArrayList();\n- newSnapshotsList.add(snapshot);\n- for (BlobStoreIndexShardSnapshot point : snapshots) {\n+ List<SnapshotFiles> newSnapshotsList = Lists.newArrayList();\n+ newSnapshotsList.add(new SnapshotFiles(snapshot.snapshot(), snapshot.indexFiles()));\n+ for (SnapshotFiles point : snapshots) {\n newSnapshotsList.add(point);\n }\n- cleanup(newSnapshotsList, blobs);\n+ // finalize the snapshot and rewrite the snapshot index with the next sequential snapshot index\n+ finalize(newSnapshotsList, fileListGeneration + 1, blobs);\n snapshotStatus.updateStage(IndexShardSnapshotStatus.Stage.DONE);\n } finally {\n store.decRef();\n@@ -709,6 +820,7 @@ public void restore() throws IOException {\n try {\n logger.debug(\"[{}] [{}] restoring to [{}] ...\", snapshotId, repositoryName, shardId);\n BlobStoreIndexShardSnapshot snapshot = loadSnapshot();\n+ SnapshotFiles snapshotFiles = new SnapshotFiles(snapshot.snapshot(), snapshot.indexFiles());\n final Store.MetadataSnapshot recoveryTargetMetadata;\n try {\n recoveryTargetMetadata = store.getMetadataOrEmpty();\n@@ -786,6 +898,22 @@ public void restore() throws IOException {\n throw new IndexShardRestoreFailedException(shardId, \"Failed to fetch index version after copying it over\", e);\n }\n recoveryState.getIndex().updateVersion(segmentCommitInfos.getVersion());\n+\n+ /// now, go over and clean files that are in the store, but were not in the snapshot\n+ try {\n+ for (String storeFile : store.directory().listAll()) {\n+ if (!Store.isChecksum(storeFile) && !snapshotFiles.containPhysicalIndexFile(storeFile)) {\n+ try {\n+ store.deleteQuiet(\"restore\", storeFile);\n+ store.directory().deleteFile(storeFile);\n+ } catch (IOException e) {\n+ logger.warn(\"[{}] failed to delete file [{}] during snapshot cleanup\", snapshotId, storeFile);\n+ }\n+ }\n+ }\n+ } catch (IOException e) {\n+ logger.warn(\"[{}] failed to list directory - some of files might not be deleted\", snapshotId);\n+ }\n } finally {\n store.decRef();\n }", "filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardRepository.java", "status": "modified" }, { "diff": "@@ -190,6 +190,24 @@ public boolean isSame(StoreFileMetaData md) {\n return metadata.isSame(md);\n }\n \n+ /**\n+ * Checks if a file in a store is the same file\n+ *\n+ * @param fileInfo file in a store\n+ * @return true if file in a store this this file have the same checksum and length\n+ */\n+ public boolean isSame(FileInfo fileInfo) {\n+ if (numberOfParts != fileInfo.numberOfParts) return false;\n+ if (partBytes != fileInfo.partBytes) return false;\n+ if (!name.equals(fileInfo.name)) return false;\n+ if (partSize != null) {\n+ if (!partSize.equals(fileInfo.partSize)) return false;\n+ } else {\n+ if (fileInfo.partSize != null) return false;\n+ }\n+ return metadata.isSame(fileInfo.metadata);\n+ }\n+\n static final class Fields {\n static final XContentBuilderString NAME = new XContentBuilderString(\"name\");\n static final XContentBuilderString PHYSICAL_NAME = new XContentBuilderString(\"physical_name\");\n@@ -484,38 +502,4 @@ public static 
BlobStoreIndexShardSnapshot fromXContent(XContentParser parser) th\n startTime, time, numberOfFiles, totalSize);\n }\n \n- /**\n- * Returns true if this snapshot contains a file with a given original name\n- *\n- * @param physicalName original file name\n- * @return true if the file was found, false otherwise\n- */\n- public boolean containPhysicalIndexFile(String physicalName) {\n- return findPhysicalIndexFile(physicalName) != null;\n- }\n-\n- public FileInfo findPhysicalIndexFile(String physicalName) {\n- for (FileInfo file : indexFiles) {\n- if (file.physicalName().equals(physicalName)) {\n- return file;\n- }\n- }\n- return null;\n- }\n-\n- /**\n- * Returns true if this snapshot contains a file with a given name\n- *\n- * @param name file name\n- * @return true if file was found, false otherwise\n- */\n- public FileInfo findNameFile(String name) {\n- for (FileInfo file : indexFiles) {\n- if (file.name().equals(name)) {\n- return file;\n- }\n- }\n- return null;\n- }\n-\n }", "filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshot.java", "status": "modified" }, { "diff": "@@ -20,47 +20,106 @@\n package org.elasticsearch.index.snapshots.blobstore;\n \n import com.google.common.collect.ImmutableList;\n+import com.google.common.collect.ImmutableMap;\n+import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.common.ParseField;\n+import org.elasticsearch.common.xcontent.ToXContent;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.XContentBuilderString;\n+import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo;\n \n+import java.io.IOException;\n import java.util.Iterator;\n import java.util.List;\n+import java.util.Map;\n+\n+import static com.google.common.collect.Lists.newArrayList;\n+import static com.google.common.collect.Maps.newHashMap;\n \n /**\n * Contains information about all snapshot for the given shard in repository\n * <p/>\n * This class is used to find files that were already snapshoted and clear out files that no longer referenced by any\n * snapshots\n */\n-public class BlobStoreIndexShardSnapshots implements Iterable<BlobStoreIndexShardSnapshot> {\n- private final ImmutableList<BlobStoreIndexShardSnapshot> shardSnapshots;\n+public class BlobStoreIndexShardSnapshots implements Iterable<SnapshotFiles>, ToXContent {\n+ private final ImmutableList<SnapshotFiles> shardSnapshots;\n+ private final ImmutableMap<String, FileInfo> files;\n+ private final ImmutableMap<String, ImmutableList<FileInfo>> physicalFiles;\n \n- public BlobStoreIndexShardSnapshots(List<BlobStoreIndexShardSnapshot> shardSnapshots) {\n+ public BlobStoreIndexShardSnapshots(List<SnapshotFiles> shardSnapshots) {\n this.shardSnapshots = ImmutableList.copyOf(shardSnapshots);\n+ // Map between blob names and file info\n+ Map<String, FileInfo> newFiles = newHashMap();\n+ // Map between original physical names and file info\n+ Map<String, List<FileInfo>> physicalFiles = newHashMap();\n+ for (SnapshotFiles snapshot : shardSnapshots) {\n+ // First we build map between filenames in the repo and their original file info\n+ // this map will be used in the next loop\n+ for (FileInfo fileInfo : snapshot.indexFiles()) {\n+ FileInfo oldFile = newFiles.put(fileInfo.name(), fileInfo);\n+ assert oldFile == null || oldFile.isSame(fileInfo);\n+ }\n+ // We are doing it in two loops here so we keep only one copy of the fileInfo 
per blob\n+ // the first loop de-duplicates fileInfo objects that were loaded from different snapshots but refer to\n+ // the same blob\n+ for (FileInfo fileInfo : snapshot.indexFiles()) {\n+ List<FileInfo> physicalFileList = physicalFiles.get(fileInfo.physicalName());\n+ if (physicalFileList == null) {\n+ physicalFileList = newArrayList();\n+ physicalFiles.put(fileInfo.physicalName(), physicalFileList);\n+ }\n+ physicalFileList.add(newFiles.get(fileInfo.name()));\n+ }\n+ }\n+ ImmutableMap.Builder<String, ImmutableList<FileInfo>> mapBuilder = ImmutableMap.builder();\n+ for (Map.Entry<String, List<FileInfo>> entry : physicalFiles.entrySet()) {\n+ mapBuilder.put(entry.getKey(), ImmutableList.copyOf(entry.getValue()));\n+ }\n+ this.physicalFiles = mapBuilder.build();\n+ this.files = ImmutableMap.copyOf(newFiles);\n+ }\n+\n+ private BlobStoreIndexShardSnapshots(ImmutableMap<String, FileInfo> files, ImmutableList<SnapshotFiles> shardSnapshots) {\n+ this.shardSnapshots = shardSnapshots;\n+ this.files = files;\n+ Map<String, List<FileInfo>> physicalFiles = newHashMap();\n+ for (SnapshotFiles snapshot : shardSnapshots) {\n+ for (FileInfo fileInfo : snapshot.indexFiles()) {\n+ List<FileInfo> physicalFileList = physicalFiles.get(fileInfo.physicalName());\n+ if (physicalFileList == null) {\n+ physicalFileList = newArrayList();\n+ physicalFiles.put(fileInfo.physicalName(), physicalFileList);\n+ }\n+ physicalFileList.add(files.get(fileInfo.name()));\n+ }\n+ }\n+ ImmutableMap.Builder<String, ImmutableList<FileInfo>> mapBuilder = ImmutableMap.builder();\n+ for (Map.Entry<String, List<FileInfo>> entry : physicalFiles.entrySet()) {\n+ mapBuilder.put(entry.getKey(), ImmutableList.copyOf(entry.getValue()));\n+ }\n+ this.physicalFiles = mapBuilder.build();\n }\n \n+\n /**\n * Returns list of snapshots\n *\n * @return list of snapshots\n */\n- public ImmutableList<BlobStoreIndexShardSnapshot> snapshots() {\n+ public ImmutableList<SnapshotFiles> snapshots() {\n return this.shardSnapshots;\n }\n \n /**\n * Finds reference to a snapshotted file by its original name\n *\n * @param physicalName original name\n- * @return file info or null if file is not present in any of snapshots\n+ * @return a list of file infos that match specified physical file or null if the file is not present in any of snapshots\n */\n- public FileInfo findPhysicalIndexFile(String physicalName) {\n- for (BlobStoreIndexShardSnapshot snapshot : shardSnapshots) {\n- FileInfo fileInfo = snapshot.findPhysicalIndexFile(physicalName);\n- if (fileInfo != null) {\n- return fileInfo;\n- }\n- }\n- return null;\n+ public ImmutableList<FileInfo> findPhysicalIndexFiles(String physicalName) {\n+ return physicalFiles.get(physicalName);\n }\n \n /**\n@@ -70,17 +129,166 @@ public FileInfo findPhysicalIndexFile(String physicalName) {\n * @return file info or null if file is not present in any of snapshots\n */\n public FileInfo findNameFile(String name) {\n- for (BlobStoreIndexShardSnapshot snapshot : shardSnapshots) {\n- FileInfo fileInfo = snapshot.findNameFile(name);\n- if (fileInfo != null) {\n- return fileInfo;\n- }\n- }\n- return null;\n+ return files.get(name);\n }\n \n @Override\n- public Iterator<BlobStoreIndexShardSnapshot> iterator() {\n+ public Iterator<SnapshotFiles> iterator() {\n return shardSnapshots.iterator();\n }\n+\n+ static final class Fields {\n+ static final XContentBuilderString FILES = new XContentBuilderString(\"files\");\n+ static final XContentBuilderString SNAPSHOTS = new XContentBuilderString(\"snapshots\");\n+ }\n+\n+ static 
final class ParseFields {\n+ static final ParseField FILES = new ParseField(\"files\");\n+ static final ParseField SNAPSHOTS = new ParseField(\"snapshots\");\n+ }\n+\n+ /**\n+ * Writes index file for the shard in the following format.\n+ * <pre>\n+ * {@code\n+ * {\n+ * \"files\": [{\n+ * \"name\": \"__3\",\n+ * \"physical_name\": \"_0.si\",\n+ * \"length\": 310,\n+ * \"checksum\": \"1tpsg3p\",\n+ * \"written_by\": \"5.1.0\",\n+ * \"meta_hash\": \"P9dsFxNMdWNlb......\"\n+ * }, {\n+ * \"name\": \"__2\",\n+ * \"physical_name\": \"segments_2\",\n+ * \"length\": 150,\n+ * \"checksum\": \"11qjpz6\",\n+ * \"written_by\": \"5.1.0\",\n+ * \"meta_hash\": \"P9dsFwhzZWdtZ.......\"\n+ * }, {\n+ * \"name\": \"__1\",\n+ * \"physical_name\": \"_0.cfe\",\n+ * \"length\": 363,\n+ * \"checksum\": \"er9r9g\",\n+ * \"written_by\": \"5.1.0\"\n+ * }, {\n+ * \"name\": \"__0\",\n+ * \"physical_name\": \"_0.cfs\",\n+ * \"length\": 3354,\n+ * \"checksum\": \"491liz\",\n+ * \"written_by\": \"5.1.0\"\n+ * }, {\n+ * \"name\": \"__4\",\n+ * \"physical_name\": \"segments_3\",\n+ * \"length\": 150,\n+ * \"checksum\": \"134567\",\n+ * \"written_by\": \"5.1.0\",\n+ * \"meta_hash\": \"P9dsFwhzZWdtZ.......\"\n+ * }],\n+ * \"snapshots\": {\n+ * \"snapshot_1\": {\n+ * \"files\": [\"__0\", \"__1\", \"__2\", \"__3\"]\n+ * },\n+ * \"snapshot_2\": {\n+ * \"files\": [\"__0\", \"__1\", \"__2\", \"__4\"]\n+ * }\n+ * }\n+ * }\n+ * }\n+ * </pre>\n+ */\n+ @Override\n+ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n+ builder.startObject();\n+ // First we list all blobs with their file infos:\n+ builder.startArray(Fields.FILES);\n+ for (Map.Entry<String, FileInfo> entry : files.entrySet()) {\n+ FileInfo.toXContent(entry.getValue(), builder, params);\n+ }\n+ builder.endArray();\n+ // Then we list all snapshots with list of all blobs that are used by the snapshot\n+ builder.startObject(Fields.SNAPSHOTS);\n+ for (SnapshotFiles snapshot : shardSnapshots) {\n+ builder.startObject(snapshot.snapshot(), XContentBuilder.FieldCaseConversion.NONE);\n+ builder.startArray(Fields.FILES);\n+ for (FileInfo fileInfo : snapshot.indexFiles()) {\n+ builder.value(fileInfo.name());\n+ }\n+ builder.endArray();\n+ builder.endObject();\n+ }\n+ builder.endObject();\n+\n+ builder.endObject();\n+ return builder;\n+ }\n+\n+ public static BlobStoreIndexShardSnapshots fromXContent(XContentParser parser) throws IOException {\n+ XContentParser.Token token = parser.currentToken();\n+ Map<String, List<String>> snapshotsMap = newHashMap();\n+ ImmutableMap.Builder<String, FileInfo> filesBuilder = ImmutableMap.builder();\n+ if (token == XContentParser.Token.START_OBJECT) {\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token != XContentParser.Token.FIELD_NAME) {\n+ throw new ElasticsearchParseException(\"unexpected token [\" + token + \"]\");\n+ }\n+ String currentFieldName = parser.currentName();\n+ token = parser.nextToken();\n+ if (token == XContentParser.Token.START_ARRAY) {\n+ if (ParseFields.FILES.match(currentFieldName) == false) {\n+ throw new ElasticsearchParseException(\"unknown array [\" + currentFieldName + \"]\");\n+ }\n+ while (parser.nextToken() != XContentParser.Token.END_ARRAY) {\n+ FileInfo fileInfo = FileInfo.fromXContent(parser);\n+ filesBuilder.put(fileInfo.name(), fileInfo);\n+ }\n+ } else if (token == XContentParser.Token.START_OBJECT) {\n+ if (ParseFields.SNAPSHOTS.match(currentFieldName) == false) {\n+ throw new ElasticsearchParseException(\"unknown object [\" + 
currentFieldName + \"]\");\n+ }\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token != XContentParser.Token.FIELD_NAME) {\n+ throw new ElasticsearchParseException(\"unknown object [\" + currentFieldName + \"]\");\n+ }\n+ String snapshot = parser.currentName();\n+ if (parser.nextToken() != XContentParser.Token.START_OBJECT) {\n+ throw new ElasticsearchParseException(\"unknown object [\" + currentFieldName + \"]\");\n+ }\n+ while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n+ if (token == XContentParser.Token.FIELD_NAME) {\n+ currentFieldName = parser.currentName();\n+ if (parser.nextToken() == XContentParser.Token.START_ARRAY) {\n+ if (ParseFields.FILES.match(currentFieldName) == false) {\n+ throw new ElasticsearchParseException(\"unknown array [\" + currentFieldName + \"]\");\n+ }\n+ List<String> fileNames = newArrayList();\n+ while (parser.nextToken() != XContentParser.Token.END_ARRAY) {\n+ fileNames.add(parser.text());\n+ }\n+ snapshotsMap.put(snapshot, fileNames);\n+ }\n+ }\n+ }\n+ }\n+ } else {\n+ throw new ElasticsearchParseException(\"unexpected token [\" + token + \"]\");\n+ }\n+ }\n+ }\n+\n+ ImmutableMap<String, FileInfo> files = filesBuilder.build();\n+ ImmutableList.Builder<SnapshotFiles> snapshots = ImmutableList.builder();\n+ for (Map.Entry<String, List<String>> entry : snapshotsMap.entrySet()) {\n+ ImmutableList.Builder<FileInfo> fileInfosBuilder = ImmutableList.builder();\n+ for (String file : entry.getValue()) {\n+ FileInfo fileInfo = files.get(file);\n+ assert fileInfo != null;\n+ fileInfosBuilder.add(fileInfo);\n+ }\n+ snapshots.add(new SnapshotFiles(entry.getKey(), fileInfosBuilder.build()));\n+ }\n+ return new BlobStoreIndexShardSnapshots(files, snapshots.build());\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/BlobStoreIndexShardSnapshots.java", "status": "modified" }, { "diff": "@@ -0,0 +1,81 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.snapshots.blobstore;\n+\n+import com.google.common.collect.ImmutableList;\n+import org.elasticsearch.index.snapshots.blobstore.BlobStoreIndexShardSnapshot.FileInfo;\n+\n+import java.util.Map;\n+\n+import static com.google.common.collect.Maps.newHashMap;\n+\n+/**\n+ * Contains a list of files participating in a snapshot\n+ */\n+public class SnapshotFiles {\n+\n+ private final String snapshot;\n+\n+ private final ImmutableList<FileInfo> indexFiles;\n+\n+ private Map<String, FileInfo> physicalFiles = null;\n+\n+ public String snapshot() {\n+ return snapshot;\n+ }\n+\n+ public SnapshotFiles(String snapshot, ImmutableList<FileInfo> indexFiles ) {\n+ this.snapshot = snapshot;\n+ this.indexFiles = indexFiles;\n+ }\n+\n+ /**\n+ * Returns a list of file in the snapshot\n+ */\n+ public ImmutableList<FileInfo> indexFiles() {\n+ return indexFiles;\n+ }\n+\n+ /**\n+ * Returns true if this snapshot contains a file with a given original name\n+ *\n+ * @param physicalName original file name\n+ * @return true if the file was found, false otherwise\n+ */\n+ public boolean containPhysicalIndexFile(String physicalName) {\n+ return findPhysicalIndexFile(physicalName) != null;\n+ }\n+\n+ /**\n+ * Returns information about a physical file with the given name\n+ * @param physicalName the original file name\n+ * @return information about this file\n+ */\n+ public FileInfo findPhysicalIndexFile(String physicalName) {\n+ if (physicalFiles == null) {\n+ Map<String, FileInfo> files = newHashMap();\n+ for(FileInfo fileInfo : indexFiles) {\n+ files.put(fileInfo.physicalName(), fileInfo);\n+ }\n+ this.physicalFiles = files;\n+ }\n+ return physicalFiles.get(physicalName);\n+ }\n+\n+}", "filename": "src/main/java/org/elasticsearch/index/snapshots/blobstore/SnapshotFiles.java", "status": "added" }, { "diff": "@@ -174,9 +174,8 @@ protected BlobStoreRepository(String repositoryName, RepositorySettings reposito\n */\n @Override\n protected void doStart() {\n-\n this.snapshotsBlobContainer = blobStore().blobContainer(basePath());\n- indexShardRepository.initialize(blobStore(), basePath(), chunkSize(), snapshotRateLimiter, restoreRateLimiter, this);\n+ indexShardRepository.initialize(blobStore(), basePath(), chunkSize(), snapshotRateLimiter, restoreRateLimiter, this, isCompress());\n }\n \n /**\n@@ -329,11 +328,15 @@ public void deleteSnapshot(SnapshotId snapshotId) {\n }\n \n private StreamOutput compressIfNeeded(OutputStream output) throws IOException {\n+ return toStreamOutput(output, isCompress());\n+ }\n+\n+ public static StreamOutput toStreamOutput(OutputStream output, boolean compress) throws IOException {\n StreamOutput out = null;\n boolean success = false;\n try {\n out = new OutputStreamStreamOutput(output);\n- if (isCompress()) {\n+ if (compress) {\n out = CompressorFactory.defaultCompressor().streamOutput(out);\n }\n success = true;\n@@ -596,10 +599,7 @@ private void writeGlobalMetaData(MetaData metaData, StreamOutput outputStream) t\n */\n protected void writeSnapshotList(ImmutableList<SnapshotId> snapshots) throws IOException {\n BytesStreamOutput bStream = new BytesStreamOutput();\n- StreamOutput stream = bStream;\n- if (isCompress()) {\n- stream = CompressorFactory.defaultCompressor().streamOutput(stream);\n- }\n+ StreamOutput stream = compressIfNeeded(bStream);\n XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON, stream);\n 
builder.startObject();\n builder.startArray(\"snapshots\");", "filename": "src/main/java/org/elasticsearch/repositories/blobstore/BlobStoreRepository.java", "status": "modified" } ] }
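The record above replaces a scan of every per-shard `snapshot-` blob with a single generation-numbered `index-N` blob per shard, written first as a `pending-N` blob and then moved into place. As a rough, self-contained illustration of how the latest generation is picked (class and method names below are invented for illustration and are not Elasticsearch code), the highest parseable `index-N` suffix wins and `-1` tells the caller to fall back to reading the individual snapshot blobs, mirroring `buildBlobStoreIndexShardSnapshots` in the diff:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Standalone sketch, not Elasticsearch code: pick the newest "index-<generation>" blob.
public class LatestIndexGenerationSketch {

    private static final String SNAPSHOT_INDEX_PREFIX = "index-";

    static int latestGeneration(Set<String> blobNames) {
        int latest = -1;
        for (String name : blobNames) {
            if (name.startsWith(SNAPSHOT_INDEX_PREFIX)) {
                try {
                    int gen = Integer.parseInt(name.substring(SNAPSHOT_INDEX_PREFIX.length()));
                    if (gen > latest) {
                        latest = gen;
                    }
                } catch (NumberFormatException e) {
                    // malformed index file name: skip it, as the PR does with a warning
                }
            }
        }
        return latest; // -1 means: no usable index file, read the per-snapshot blobs instead
    }

    public static void main(String[] args) {
        Set<String> blobs = new HashSet<>(Arrays.asList(
                "snapshot-my_snapshot", "__0", "__1", "index-0", "index-3", "pending-4"));
        System.out.println(latestGeneration(blobs)); // prints 3
    }
}
```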
{ "body": "`TimestampFieldMapper` extends `DateFieldMapper` and is supposed to support doc values but it overrides `toXContent` and forgets to serialize doc values settings.\n", "comments": [ { "body": "For the record, the bug report comes from the mailing-list: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/elasticsearch/-ibkkI7dIro/X2KfTjKKt88J\n", "created_at": "2014-12-11T10:38:33Z" } ], "number": 8893, "title": "Mappings: _timestamp does not serialize its doc_values settings" }
{ "body": "This change fixes _timestamp's serialization method to write out\n`doc_values` and `doc_values_format`, which could already be set,\nbut would not be written out.\n\ncloses #8893\n", "number": 8967, "review_comments": [ { "body": "There is also a line above that checks if all options have the default value and returns early. I think we should add the doc-values-related settings to this line as well?\n", "created_at": "2014-12-17T12:17:27Z" }, { "body": "All metadata mappers have this kind of behaviour which is a pita to maintain... I'm wondering if we could come up with a better solution.\n", "created_at": "2014-12-17T12:21:28Z" }, { "body": "@jpountz I added the default value check in the block you were referring to. I also created #9093 to address the repetitive and error prone pattern here.\n", "created_at": "2014-12-29T23:01:49Z" } ], "title": "Serialize doc values settings for _timestamp" }
{ "commits": [ { "message": "Mapping: serialize doc values settings for _timestamp\n\nThis change fixes _timestamp's serialization method to write out\n`doc_values` and `doc_values_format`, which could already be set,\nbut would not be written out.\n\ncloses #8893\ncloses #8967" } ], "files": [ { "diff": "@@ -282,7 +282,7 @@ protected List<Field> initialValue() {\n protected final Names names;\n protected float boost;\n protected FieldType fieldType;\n- private final boolean docValues;\n+ protected final boolean docValues;\n protected final NamedAnalyzer indexAnalyzer;\n protected NamedAnalyzer searchAnalyzer;\n protected PostingsFormatProvider postingsFormat;", "filename": "src/main/java/org/elasticsearch/index/mapper/core/AbstractFieldMapper.java", "status": "modified" }, { "diff": "@@ -32,11 +32,13 @@\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.index.codec.docvaluesformat.DocValuesFormatProvider;\n+import org.elasticsearch.index.codec.docvaluesformat.DocValuesFormatService;\n import org.elasticsearch.index.codec.postingsformat.PostingsFormatProvider;\n import org.elasticsearch.index.mapper.*;\n import org.elasticsearch.index.mapper.core.DateFieldMapper;\n import org.elasticsearch.index.mapper.core.LongFieldMapper;\n import org.elasticsearch.index.mapper.core.NumberFieldMapper;\n+import org.elasticsearch.index.mapper.core.TypeParsers;\n \n import java.io.IOException;\n import java.util.Iterator;\n@@ -264,7 +266,8 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (!includeDefaults && indexed == indexedDefault && customFieldDataSettings == null &&\n fieldType.stored() == Defaults.FIELD_TYPE.stored() && enabledState == Defaults.ENABLED && path == Defaults.PATH\n && dateTimeFormatter.format().equals(Defaults.DATE_TIME_FORMATTER.format())\n- && Defaults.DEFAULT_TIMESTAMP.equals(defaultTimestamp)) {\n+ && Defaults.DEFAULT_TIMESTAMP.equals(defaultTimestamp)\n+ && Defaults.DOC_VALUES == hasDocValues()) {\n return builder;\n }\n builder.startObject(CONTENT_TYPE);\n@@ -277,6 +280,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n if (includeDefaults || fieldType.stored() != Defaults.FIELD_TYPE.stored()) {\n builder.field(\"store\", fieldType.stored());\n }\n+ if (includeDefaults || hasDocValues() != Defaults.DOC_VALUES) {\n+ builder.field(TypeParsers.DOC_VALUES, docValues);\n+ }\n if (includeDefaults || path != Defaults.PATH) {\n builder.field(\"path\", path);\n }\n@@ -292,6 +298,18 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n builder.field(\"fielddata\", (Map) fieldDataType.getSettings().getAsMap());\n }\n \n+ if (docValuesFormat != null) {\n+ if (includeDefaults || !docValuesFormat.name().equals(defaultDocValuesFormat())) {\n+ builder.field(DOC_VALUES_FORMAT, docValuesFormat.name());\n+ }\n+ } else if (includeDefaults) {\n+ String format = defaultDocValuesFormat();\n+ if (format == null) {\n+ format = DocValuesFormatService.DEFAULT_FORMAT;\n+ }\n+ builder.field(DOC_VALUES_FORMAT, format);\n+ }\n+\n builder.endObject();\n return builder;\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java", "status": "modified" }, { "diff": "@@ -32,11 +32,13 @@\n import org.elasticsearch.common.io.stream.BytesStreamInput;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.joda.Joda;\n+import 
org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.json.JsonXContent;\n+import org.elasticsearch.index.codec.docvaluesformat.DocValuesFormatService;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.DocumentMapperParser;\n import org.elasticsearch.index.mapper.FieldMapper;\n@@ -111,6 +113,8 @@ public void testDefaultValues() throws Exception {\n assertThat(docMapper.timestampFieldMapper().fieldType().indexOptions(), equalTo(TimestampFieldMapper.Defaults.FIELD_TYPE.indexOptions()));\n assertThat(docMapper.timestampFieldMapper().path(), equalTo(TimestampFieldMapper.Defaults.PATH));\n assertThat(docMapper.timestampFieldMapper().dateTimeFormatter().format(), equalTo(TimestampFieldMapper.DEFAULT_DATE_TIME_FORMAT));\n+ assertThat(docMapper.timestampFieldMapper().hasDocValues(), equalTo(false));\n+ assertThat(docMapper.timestampFieldMapper().docValuesFormatProvider(), equalTo(null));\n assertAcked(client().admin().indices().prepareDelete(\"test\").execute().get());\n }\n }\n@@ -123,6 +127,8 @@ public void testSetValues() throws Exception {\n .startObject(\"_timestamp\")\n .field(\"enabled\", \"yes\").field(\"store\", \"no\").field(\"index\", \"no\")\n .field(\"path\", \"timestamp\").field(\"format\", \"year\")\n+ .field(\"doc_values\", true)\n+ .field(\"doc_values_format\", Lucene.LATEST_DOC_VALUES_FORMAT)\n .endObject()\n .endObject().endObject().string();\n DocumentMapper docMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n@@ -131,6 +137,8 @@ public void testSetValues() throws Exception {\n assertEquals(IndexOptions.NONE, docMapper.timestampFieldMapper().fieldType().indexOptions());\n assertThat(docMapper.timestampFieldMapper().path(), equalTo(\"timestamp\"));\n assertThat(docMapper.timestampFieldMapper().dateTimeFormatter().format(), equalTo(\"year\"));\n+ assertThat(docMapper.timestampFieldMapper().hasDocValues(), equalTo(true));\n+ assertThat(docMapper.timestampFieldMapper().docValuesFormatProvider().name(), equalTo(Lucene.LATEST_DOC_VALUES_FORMAT));\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/index/mapper/timestamp/TimestampMappingTests.java", "status": "modified" } ] }
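For the `_timestamp` fix in the two records above, the essence is the usual `toXContent` discipline: count `doc_values` in the "everything is default" early return and emit it when it is non-default or defaults are requested. A minimal sketch of that pattern, using invented names and plain Java maps instead of `XContentBuilder`:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only (names and defaults simplified, no Elasticsearch dependency).
public class TimestampSerializationSketch {

    private static final boolean DEFAULT_ENABLED = false;
    private static final boolean DEFAULT_DOC_VALUES = false;

    private final boolean enabled;
    private final boolean docValues;

    TimestampSerializationSketch(boolean enabled, boolean docValues) {
        this.enabled = enabled;
        this.docValues = docValues;
    }

    Map<String, Object> toMapping(boolean includeDefaults) {
        Map<String, Object> mapping = new LinkedHashMap<>();
        if (!includeDefaults && enabled == DEFAULT_ENABLED && docValues == DEFAULT_DOC_VALUES) {
            return mapping; // all settings are defaults, serialize nothing
        }
        if (includeDefaults || enabled != DEFAULT_ENABLED) {
            mapping.put("enabled", enabled);
        }
        if (includeDefaults || docValues != DEFAULT_DOC_VALUES) {
            mapping.put("doc_values", docValues); // the setting that was silently dropped before the fix
        }
        return mapping;
    }

    public static void main(String[] args) {
        System.out.println(new TimestampSerializationSketch(true, true).toMapping(false));
        // prints {enabled=true, doc_values=true}
    }
}
```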
{ "body": "I have a single ES 1.5-SNAPSHOT node (embedded ES in servlet container) and use the node's client.\nWhen I do \n\n```\nclient.prepareSearch(INDEX_NAME)\n .setSearchType(SearchType.DEFAULT).setScroll(SCROLL_KEEP_ALIVE)\n .setExtraSource(query).execute().actionGet() \n...\nsearchResponse = client.prepareSearchScroll(searchResponse.getScrollId())\n .setScroll(SCROLL_KEEP_ALIVE).execute().actionGet();\n```\n\nand the stop my servlet container it gets stuck and I get\n\"Not all shards are closed yet, waited 30sec - stopping service\" in the logs\n\nif I wait after running the search a bit a minute or so the shutdown is instantaneous. The time to wait seem to be related to specified keep alive time \n\n_If I do not use scroll I do not have issue_\n\nI might be doing something wrong but ES examples do not indicate that I need to somehow close my search \n\nI do close the client prior to letting ES NodeServlet close the node (even thoug samles do not seem to imply the need for closing node client) but it ma\n", "comments": [ { "body": "@roytmana can you give it a try if your problem is solved? thanks for reporting\n", "created_at": "2014-12-15T09:34:30Z" }, { "body": "@s1monw \n\npulled from 1.x branch and made a build. The issue is still present for me.\n\nThis is from property file in the JAR file:\nversion=1.5.0-SNAPSHOT\nhash=c98e07a9ac19b7eca412ac6f00e178a348064923\ntimestamp=1418655613243\n", "created_at": "2014-12-15T15:31:36Z" }, { "body": "@roytmana can you reproduce this in a standalone testcase? I wonder if I can also see your shutdown code? are you using any plugins etc?\n", "created_at": "2014-12-15T15:34:43Z" }, { "body": "@s1monw \n\nI am afraid this week I won't be able to work on a standalone recreation (I will just have to try to write it from scratch as it is intermingled within the app (startup/shutdown activities as well as app logic for exporting queries to excel via scrolls) but if you'd like me to try anything in particular, I will try to do it.\n\nthe reason that I think is not so much the shutdown sequence in itself is that removing .setScroll(SCROLL_KEEP_ALIVE) call resolves the issue in my test immediately\n\nI use transport-wares NodeServlet 2.4.1 with Tomcat 7\n\nTomcat calls its destroy() on shutdown prior to any of my other shutdown activities which are triggered by JAX-RS Jersey shutdown later than NodeServlet is done\n\n```\n NodeServlet:\n\n public void destroy() {\n if (node != null) {\n getServletContext().removeAttribute(NODE_KEY);\n node.close();\n }\n }\n```\n\nPrior to call to destroy in my subclass of the NodeServlet i close the Client obtained from the node via node.client()\n\n```\n @Override public void init() throws ServletException {\n ...\n super.init();\n client = node.client();\n ... //here I grab node client so I can publish it in my application as a @Singleton. 
Same single instance of the client is used by all threads to access ES\n\n }\n```\n\n```\nINFO 15/12/14 10:30:54 reports.ElasticSearchNodeServlet - Destroying ...\nINFO 15/12/14 10:30:54 reports.ElasticSearchNodeServlet - ElasticSearchNodeServlet Close ES Client\nINFO 15/12/14 10:30:54 elasticsearch.node - [es_alexr_gctrack_n1] stopping ...\nWARN 15/12/14 10:31:24 elasticsearch.indices - [es_alexr_gctrack_n1] Not all shards are closed yet, waited 30sec - stopping service\nINFO 15/12/14 10:31:24 elasticsearch.node - [es_alexr_gctrack_n1] stopped\nINFO 15/12/14 10:31:24 elasticsearch.node - [es_alexr_gctrack_n1] closing ...\nINFO 15/12/14 10:31:24 elasticsearch.node - [es_alexr_gctrack_n1] closed\nINFO 15/12/14 10:31:24 reports.ElasticSearchNodeServlet - Destroyed\n```\n", "created_at": "2014-12-15T16:12:29Z" }, { "body": "I can reproduce it now locally - I will open a new PR to fix it\n", "created_at": "2014-12-15T20:41:06Z" }, { "body": "Thanks @s1monw it took care of the issue.\n\nI have not tested 1.4.x but wonder if it is present there as well. What's approximate release timeline for 1.5? If it is within a month maximum two I will wait for ES 1.5 for our next production release \n\nWe use 1.3.x in production, developing against 1.4.x and doing some future stuff that needs aggs filtering against 1.5\n", "created_at": "2014-12-15T23:36:35Z" }, { "body": "@roytmana no this is not an issue in `1.4.x` since we added the shard locking only in 1.5 and 2.0 ie. 1.x & master branches. I can't tell about the release timeline as usual. I think a 1.4 upgrade is the safest option at this point.\n", "created_at": "2014-12-16T09:03:57Z" }, { "body": "Thanks @s1monw !\n", "created_at": "2014-12-16T14:50:37Z" }, { "body": "I have exactly the same problem during closing & opening an index.\n\nES-Version: 1.6.0\n\nSay my scroll keep alive time is set to 1 minute.\n1. I'm executing a search with that scroll settings\n2. Immediately after receiving my search result I'm closing and reopening the index\n3. For exactly one minute I get the following error message and my cluster is red\n4. 
after 1 minute my cluster is green.\n\n```\n[2015-06-22 14:48:13,846][INFO ][cluster.metadata ] [Grim Hunter] closing indices [[de_v4]]\n[2015-06-22 14:48:22,263][INFO ][cluster.metadata ] [Grim Hunter] opening indices [[de_v4]]\n[2015-06-22 14:48:27,308][WARN ][indices.cluster ] [Grim Hunter] [[de_v4][2]] marking and sending shard failed due to [failed to create shard]\norg.elasticsearch.index.shard.IndexShardCreationException: [de_v4][2] failed to create shard\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:357)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:704)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:605)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:480)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [de_v4][2], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:576)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:504)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:310)\n ... 
9 more\n[2015-06-22 14:48:27,309][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][2] received shard failed for [de_v4][2], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [shard failure [failed to create shard][IndexShardCreationException[[de_v4][2] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [de_v4][2], timed out after 5000ms]; ]]\n[2015-06-22 14:48:32,309][WARN ][indices.cluster ] [Grim Hunter] [[de_v4][0]] marking and sending shard failed due to [failed to create shard]\norg.elasticsearch.index.shard.IndexShardCreationException: [de_v4][0] failed to create shard\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:357)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:704)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:605)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:480)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [de_v4][0], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:576)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:504)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:310)\n ... 9 more\n\n[... trimmed ...]\n\n[2015-06-22 14:49:17,414][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][0] received shard failed for [de_v4][0], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [shard failure [failed to create shard][IndexShardCreationException[[de_v4][0] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [de_v4][0], timed out after 5000ms]; ]]\n[2015-06-22 14:49:17,414][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][3] received shard failed for [de_v4][3], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [master [Grim Hunter][iTOuyuGSRLab6ZZxwnhb3g][u-excus][inet[/192.168.1.181:9300]] marked shard as initializing, but shard is marked as failed, resend shard failure]\n```\n\n_Update:_\nToday I adjusted the logging settings and briefly before the cluster is getting green:\n\n```\n[2015-06-23 12:21:00,100][DEBUG][search ] [Prime Mover] freeing search context [1], time [1435054860015], lastAccessTime [1435054776777], keepAlive [60000]\n[2015-06-23 12:21:00,101][DEBUG][search ] [Prime Mover] freeing search context [2], time [1435054860015], lastAccessTime [1435054776777], keepAlive [60000]\n```\n\nI hope this is the correct issue and it is okay to comment it again. 
\nOtherwise please let me know and I will open a new issue.\n", "created_at": "2015-06-22T13:04:39Z" }, { "body": "btw: [clearing](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html#_clear_scroll_api) (manually) all scroll ids during closing / opening process solves the problem (as temporary workaround); cluster state will be green immediately after reopening the index.\n\n```\ncurl -XDELETE localhost:9200/_search/scroll/_all\n```\n", "created_at": "2015-06-23T09:53:55Z" }, { "body": "@cleemansen please open as a new ticket\n", "created_at": "2015-06-23T19:06:18Z" } ], "number": 8940, "title": "Using Search Scroll with keep alive prevent shards from closing" }
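The report above comes down to an open scroll context holding a store reference for the full keep-alive. Independently of the server-side fix in the next records, the application can release the context as soon as the export finishes; the sketch below reuses the same 1.x Java client calls as the report and adds an explicit clear-scroll in a finally block (class name, index name, and keep-alive value are placeholders):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.unit.TimeValue;

// Sketch of a defensive variant of the reporter's loop: clear the scroll when done so the
// search context does not pin the shard store until the keep-alive expires.
public class ScrollExportSketch {

    private static final String INDEX_NAME = "my_index";                        // placeholder
    private static final TimeValue SCROLL_KEEP_ALIVE = TimeValue.timeValueMinutes(1);

    public void export(Client client, String query) {
        SearchResponse response = client.prepareSearch(INDEX_NAME)
                .setSearchType(SearchType.DEFAULT)
                .setScroll(SCROLL_KEEP_ALIVE)
                .setExtraSource(query)
                .execute().actionGet();
        try {
            while (response.getHits().getHits().length > 0) {
                // ... write the current page to the export ...
                response = client.prepareSearchScroll(response.getScrollId())
                        .setScroll(SCROLL_KEEP_ALIVE)
                        .execute().actionGet();
            }
        } finally {
            // free the server-side search context immediately instead of waiting for expiry
            client.prepareClearScroll()
                    .addScrollId(response.getScrollId())
                    .execute().actionGet();
        }
    }
}
```

This is the per-scroll equivalent of the `DELETE /_search/scroll/_all` workaround mentioned in the comments, scoped to the one scroll the export created.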
{ "body": "We do wait for shards to be closed in IndicesService for 30 second.\nYet, if somebody holds on to a store reference ie. an open scroll request\nthe 30 seconds time-out and node shutdown takes very long. We should\nrelease all other resources first before we shutdown IndicesService.\n\nCloses #8940\n", "number": 8966, "review_comments": [], "title": "Shutdown indices service last" }
{ "commits": [ { "message": "Shutdown indices service last\n\nWe do wait for shards to be closed in IndicesService for 30 second.\nYet, if somebody holds on to a store reference ie. an open scroll request\nthe 30 seconds time-out and node shutdown takes very long. We should\nrelease all other resources first before we shutdown IndicesService.\n\nCloses #8940" } ], "files": [ { "diff": "@@ -294,13 +294,6 @@ public Node stop() {\n // we close indices first, so operations won't be allowed on it\n injector.getInstance(IndexingMemoryController.class).stop();\n injector.getInstance(IndicesTTLService.class).stop();\n- injector.getInstance(IndicesService.class).stop();\n- // sleep a bit to let operations finish with indices service\n-// try {\n-// Thread.sleep(500);\n-// } catch (InterruptedException e) {\n-// // ignore\n-// }\n injector.getInstance(RoutingService.class).stop();\n injector.getInstance(ClusterService.class).stop();\n injector.getInstance(DiscoveryService.class).stop();\n@@ -313,7 +306,9 @@ public Node stop() {\n for (Class<? extends LifecycleComponent> plugin : pluginsService.services()) {\n injector.getInstance(plugin).stop();\n }\n-\n+ // we should stop this last since it waits for resources to get released\n+ // if we had scroll searchers etc or recovery going on we wait for to finish.\n+ injector.getInstance(IndicesService.class).stop();\n logger.info(\"stopped\");\n \n return this;", "filename": "src/main/java/org/elasticsearch/node/internal/InternalNode.java", "status": "modified" } ] }
{ "body": "I have a single ES 1.5-SNAPSHOT node (embedded ES in servlet container) and use the node's client.\nWhen I do \n\n```\nclient.prepareSearch(INDEX_NAME)\n .setSearchType(SearchType.DEFAULT).setScroll(SCROLL_KEEP_ALIVE)\n .setExtraSource(query).execute().actionGet() \n...\nsearchResponse = client.prepareSearchScroll(searchResponse.getScrollId())\n .setScroll(SCROLL_KEEP_ALIVE).execute().actionGet();\n```\n\nand the stop my servlet container it gets stuck and I get\n\"Not all shards are closed yet, waited 30sec - stopping service\" in the logs\n\nif I wait after running the search a bit a minute or so the shutdown is instantaneous. The time to wait seem to be related to specified keep alive time \n\n_If I do not use scroll I do not have issue_\n\nI might be doing something wrong but ES examples do not indicate that I need to somehow close my search \n\nI do close the client prior to letting ES NodeServlet close the node (even thoug samles do not seem to imply the need for closing node client) but it ma\n", "comments": [ { "body": "@roytmana can you give it a try if your problem is solved? thanks for reporting\n", "created_at": "2014-12-15T09:34:30Z" }, { "body": "@s1monw \n\npulled from 1.x branch and made a build. The issue is still present for me.\n\nThis is from property file in the JAR file:\nversion=1.5.0-SNAPSHOT\nhash=c98e07a9ac19b7eca412ac6f00e178a348064923\ntimestamp=1418655613243\n", "created_at": "2014-12-15T15:31:36Z" }, { "body": "@roytmana can you reproduce this in a standalone testcase? I wonder if I can also see your shutdown code? are you using any plugins etc?\n", "created_at": "2014-12-15T15:34:43Z" }, { "body": "@s1monw \n\nI am afraid this week I won't be able to work on a standalone recreation (I will just have to try to write it from scratch as it is intermingled within the app (startup/shutdown activities as well as app logic for exporting queries to excel via scrolls) but if you'd like me to try anything in particular, I will try to do it.\n\nthe reason that I think is not so much the shutdown sequence in itself is that removing .setScroll(SCROLL_KEEP_ALIVE) call resolves the issue in my test immediately\n\nI use transport-wares NodeServlet 2.4.1 with Tomcat 7\n\nTomcat calls its destroy() on shutdown prior to any of my other shutdown activities which are triggered by JAX-RS Jersey shutdown later than NodeServlet is done\n\n```\n NodeServlet:\n\n public void destroy() {\n if (node != null) {\n getServletContext().removeAttribute(NODE_KEY);\n node.close();\n }\n }\n```\n\nPrior to call to destroy in my subclass of the NodeServlet i close the Client obtained from the node via node.client()\n\n```\n @Override public void init() throws ServletException {\n ...\n super.init();\n client = node.client();\n ... //here I grab node client so I can publish it in my application as a @Singleton. 
Same single instance of the client is used by all threads to access ES\n\n }\n```\n\n```\nINFO 15/12/14 10:30:54 reports.ElasticSearchNodeServlet - Destroying ...\nINFO 15/12/14 10:30:54 reports.ElasticSearchNodeServlet - ElasticSearchNodeServlet Close ES Client\nINFO 15/12/14 10:30:54 elasticsearch.node - [es_alexr_gctrack_n1] stopping ...\nWARN 15/12/14 10:31:24 elasticsearch.indices - [es_alexr_gctrack_n1] Not all shards are closed yet, waited 30sec - stopping service\nINFO 15/12/14 10:31:24 elasticsearch.node - [es_alexr_gctrack_n1] stopped\nINFO 15/12/14 10:31:24 elasticsearch.node - [es_alexr_gctrack_n1] closing ...\nINFO 15/12/14 10:31:24 elasticsearch.node - [es_alexr_gctrack_n1] closed\nINFO 15/12/14 10:31:24 reports.ElasticSearchNodeServlet - Destroyed\n```\n", "created_at": "2014-12-15T16:12:29Z" }, { "body": "I can reproduce it now locally - I will open a new PR to fix it\n", "created_at": "2014-12-15T20:41:06Z" }, { "body": "Thanks @s1monw it took care of the issue.\n\nI have not tested 1.4.x but wonder if it is present there as well. What's approximate release timeline for 1.5? If it is within a month maximum two I will wait for ES 1.5 for our next production release \n\nWe use 1.3.x in production, developing against 1.4.x and doing some future stuff that needs aggs filtering against 1.5\n", "created_at": "2014-12-15T23:36:35Z" }, { "body": "@roytmana no this is not an issue in `1.4.x` since we added the shard locking only in 1.5 and 2.0 ie. 1.x & master branches. I can't tell about the release timeline as usual. I think a 1.4 upgrade is the safest option at this point.\n", "created_at": "2014-12-16T09:03:57Z" }, { "body": "Thanks @s1monw !\n", "created_at": "2014-12-16T14:50:37Z" }, { "body": "I have exactly the same problem during closing & opening an index.\n\nES-Version: 1.6.0\n\nSay my scroll keep alive time is set to 1 minute.\n1. I'm executing a search with that scroll settings\n2. Immediately after receiving my search result I'm closing and reopening the index\n3. For exactly one minute I get the following error message and my cluster is red\n4. 
after 1 minute my cluster is green.\n\n```\n[2015-06-22 14:48:13,846][INFO ][cluster.metadata ] [Grim Hunter] closing indices [[de_v4]]\n[2015-06-22 14:48:22,263][INFO ][cluster.metadata ] [Grim Hunter] opening indices [[de_v4]]\n[2015-06-22 14:48:27,308][WARN ][indices.cluster ] [Grim Hunter] [[de_v4][2]] marking and sending shard failed due to [failed to create shard]\norg.elasticsearch.index.shard.IndexShardCreationException: [de_v4][2] failed to create shard\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:357)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:704)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:605)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:480)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [de_v4][2], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:576)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:504)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:310)\n ... 
9 more\n[2015-06-22 14:48:27,309][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][2] received shard failed for [de_v4][2], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [shard failure [failed to create shard][IndexShardCreationException[[de_v4][2] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [de_v4][2], timed out after 5000ms]; ]]\n[2015-06-22 14:48:32,309][WARN ][indices.cluster ] [Grim Hunter] [[de_v4][0]] marking and sending shard failed due to [failed to create shard]\norg.elasticsearch.index.shard.IndexShardCreationException: [de_v4][0] failed to create shard\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:357)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:704)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:605)\n at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:185)\n at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:480)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)\n at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [de_v4][0], timed out after 5000ms\n at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:576)\n at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:504)\n at org.elasticsearch.index.IndexService.createShard(IndexService.java:310)\n ... 9 more\n\n[... trimmed ...]\n\n[2015-06-22 14:49:17,414][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][0] received shard failed for [de_v4][0], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [shard failure [failed to create shard][IndexShardCreationException[[de_v4][0] failed to create shard]; nested: LockObtainFailedException[Can't lock shard [de_v4][0], timed out after 5000ms]; ]]\n[2015-06-22 14:49:17,414][WARN ][cluster.action.shard ] [Grim Hunter] [de_v4][3] received shard failed for [de_v4][3], node[iTOuyuGSRLab6ZZxwnhb3g], [P], s[INITIALIZING], indexUUID [jsOkLa-_Q8GSPkKAXXfsoQ], reason [master [Grim Hunter][iTOuyuGSRLab6ZZxwnhb3g][u-excus][inet[/192.168.1.181:9300]] marked shard as initializing, but shard is marked as failed, resend shard failure]\n```\n\n_Update:_\nToday I adjusted the logging settings and briefly before the cluster is getting green:\n\n```\n[2015-06-23 12:21:00,100][DEBUG][search ] [Prime Mover] freeing search context [1], time [1435054860015], lastAccessTime [1435054776777], keepAlive [60000]\n[2015-06-23 12:21:00,101][DEBUG][search ] [Prime Mover] freeing search context [2], time [1435054860015], lastAccessTime [1435054776777], keepAlive [60000]\n```\n\nI hope this is the correct issue and it is okay to comment it again. 
\nOtherwise please let me know and I will open a new issue.\n", "created_at": "2015-06-22T13:04:39Z" }, { "body": "btw: [clearing](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html#_clear_scroll_api) (manually) all scroll ids during closing / opening process solves the problem (as temporary workaround); cluster state will be green immediately after reopening the index.\n\n```\ncurl -XDELETE localhost:9200/_search/scroll/_all\n```\n", "created_at": "2015-06-23T09:53:55Z" }, { "body": "@cleemansen please open as a new ticket\n", "created_at": "2015-06-23T19:06:18Z" } ], "number": 8940, "title": "Using Search Scroll with keep alive prevent shards from closing" }
{ "body": "When we close a node all pending / active search requests need to be\ncleared otherwise a node will wait up to 30 sec for shutdown sicne there\ncould be open scroll requests. This behavior was introduces in 1.5 such that\nversions <= 1.4.x are not affected.\n\nCloses #8940\n", "number": 8947, "review_comments": [], "title": "Close active search contexts on SearchService#close()" }
{ "commits": [ { "message": "[SEARCH] close active contexts on SearchService#close()\n\nWhen we close a node all pending / active search requests need to be\ncleared otherwise a node will wait up to 30 sec for shutdown sicne there\ncould be open scroll requests. This behavior was introduces in 1.5 such that\nversions <= 1.4.x are not affected.\n\nCloses #8940" } ], "files": [ { "diff": "@@ -139,7 +139,7 @@ public class SearchService extends AbstractLifecycleComponent<SearchService> {\n private final ImmutableMap<String, SearchParseElement> elementParsers;\n \n @Inject\n- public SearchService(Settings settings, ClusterService clusterService, IndicesService indicesService, IndicesLifecycle indicesLifecycle, IndicesWarmer indicesWarmer, ThreadPool threadPool,\n+ public SearchService(Settings settings, ClusterService clusterService, IndicesService indicesService,IndicesWarmer indicesWarmer, ThreadPool threadPool,\n ScriptService scriptService, PageCacheRecycler pageCacheRecycler, BigArrays bigArrays, DfsPhase dfsPhase, QueryPhase queryPhase, FetchPhase fetchPhase,\n IndicesQueryCache indicesQueryCache) {\n super(settings);\n@@ -196,6 +196,7 @@ protected void doStop() throws ElasticsearchException {\n \n @Override\n protected void doClose() throws ElasticsearchException {\n+ doStop();\n FutureUtils.cancel(keepAliveReaper);\n }\n \n@@ -774,6 +775,14 @@ private void processScroll(InternalScrollSearchRequest request, SearchContext co\n }\n }\n \n+ /**\n+ * Returns the number of active contexts in this\n+ * SearchService\n+ */\n+ public int getActiveContexts() {\n+ return this.activeContexts.size();\n+ }\n+\n static class NormsWarmer extends IndicesWarmer.Listener {\n \n @Override", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -0,0 +1,73 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.search;\n+\n+\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.test.ElasticsearchSingleNodeTest;\n+\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.notNullValue;\n+\n+public class SearchServiceTests extends ElasticsearchSingleNodeTest {\n+\n+ @Override\n+ protected boolean resetNodeAfterTest() {\n+ return true;\n+ }\n+\n+ public void testClearOnClose() throws ExecutionException, InterruptedException {\n+ createIndex(\"index\");\n+ client().prepareIndex(\"index\", \"type\", \"1\").setSource(\"field\", \"value\").setRefresh(true).get();\n+ SearchResponse searchResponse = client().prepareSearch(\"index\").setSize(1).setScroll(\"1m\").get();\n+ assertThat(searchResponse.getScrollId(), is(notNullValue()));\n+ SearchService service = getInstanceFromNode(SearchService.class);\n+\n+ assertEquals(1, service.getActiveContexts());\n+ service.doClose(); // this kills the keep-alive reaper we have to reset the node after this test\n+ assertEquals(0, service.getActiveContexts());\n+ }\n+\n+ public void testClearOnStop() throws ExecutionException, InterruptedException {\n+ createIndex(\"index\");\n+ client().prepareIndex(\"index\", \"type\", \"1\").setSource(\"field\", \"value\").setRefresh(true).get();\n+ SearchResponse searchResponse = client().prepareSearch(\"index\").setSize(1).setScroll(\"1m\").get();\n+ assertThat(searchResponse.getScrollId(), is(notNullValue()));\n+ SearchService service = getInstanceFromNode(SearchService.class);\n+\n+ assertEquals(1, service.getActiveContexts());\n+ service.doStop();\n+ assertEquals(0, service.getActiveContexts());\n+ }\n+\n+ public void testClearIndexDelete() throws ExecutionException, InterruptedException {\n+ createIndex(\"index\");\n+ client().prepareIndex(\"index\", \"type\", \"1\").setSource(\"field\", \"value\").setRefresh(true).get();\n+ SearchResponse searchResponse = client().prepareSearch(\"index\").setSize(1).setScroll(\"1m\").get();\n+ assertThat(searchResponse.getScrollId(), is(notNullValue()));\n+ SearchService service = getInstanceFromNode(SearchService.class);\n+\n+ assertEquals(1, service.getActiveContexts());\n+ assertAcked(client().admin().indices().prepareDelete(\"index\"));\n+ assertEquals(0, service.getActiveContexts());\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/search/SearchServiceTests.java", "status": "added" } ] }
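For deployments that cannot take the fix immediately, the workaround mentioned later in the issue discussion — explicitly clearing scroll ids before closing and reopening an index — can also be applied from the Java client. This is a hedged sketch: the index name, scroll id handling and helper class are illustrative, and the client calls used (`prepareSearch`/`setScroll`, `prepareClearScroll`, `prepareClose`/`prepareOpen`) are ones I believe exist in the 1.x Java API.

```
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;

// Sketch of the workaround from the issue discussion: release scroll
// contexts explicitly so shard locks are freed before a close/open cycle.
public class ScrollCleanupExample {

    static String openScroll(Client client, String index) {
        SearchResponse response = client.prepareSearch(index)
                .setSize(1)
                .setScroll("1m")      // same keep-alive as in the issue report
                .get();
        return response.getScrollId();
    }

    static void closeAndReopen(Client client, String index, String scrollId) {
        // free the scroll context instead of waiting for the keep-alive to expire
        client.prepareClearScroll().addScrollId(scrollId).get();

        client.admin().indices().prepareClose(index).get();
        client.admin().indices().prepareOpen(index).get();
    }
}
```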
{ "body": "Date math rounding currently works by rounding the date up or down based\non the scope of the rounding. For example, if you have the date\n`2009-12-24||/d` it will round down to the inclusive lower end\n`2009-12-24T00:00:00.000` and round up to the non-inclusive date\n`2009-12-25T00:00:00.000`.\n\nThe range endpoint semantics work as follows:\n- `gt` - round D down, and use > that value\n- `gte` - round D down, and use >= that value\n- `lt` - round D down, and use <\n- `lte` - round D up, and use <=\n\nThere are 2 problems with these semantics:\n- `lte` ends up including the upper value, which should be non-inclusive\n- `gt` only excludes the beginning of the date, not the entire rounding scope\n\nThis change makes the range endpoint semantics symmetrical. First, it\nchanges the parser to round up and down using the first (same as before)\nand last (1 ms less than before) values of the rounding scope. This\nmakes both rounded endpoints inclusive. The range endpoint semantics\nare then as follows:\n- `gt` - round D up, and use > that value\n- `gte` - round D down, and use >= that value\n- `lt` - round D down, and use < that value\n- `lte` - round D up, and use <= that value\n\ncloses #8424\n", "comments": [ { "body": "LGTM\n", "created_at": "2014-11-20T15:53:23Z" }, { "body": "LGTM^2\n", "created_at": "2014-11-21T09:12:04Z" } ], "number": 8556, "title": "DateMath: Fix semantics of rounding with inclusive/exclusive ranges." }
{ "body": "The setting `mapping.date.round_ceil` (and the undocumented setting\n`index.mapping.date.parse_upper_inclusive`) affect how date ranges using\n`lte` are parsed. In #8556 the semantics of date rounding were\nsolidified, eliminating the need to have different parsing functions\nwhether the date is inclusive or exclusive.\n\nThis change removes these legacy settings and improves the tests\nfor the date math parser (now at 100% coverage!). It also removes the\nunnecessary function `DateMathParser.parseTimeZone` for which\nthe existing `DateTimeZone.forID` handles all use cases. The one caveat\nis handling malformed `H:MM` values, but this is a recently added\nfeature, and I don't think we should have our own special format here,\nbut instead enforce the standard `[+-]HH:MM`.\n\nAny user previously using these settings can refer to the changed\nsemantics and change their query accordingly. This is a breaking change\nbecause even dates without datemath previously used the different\nparsing functions depending on context.\n\ncloses #8598\n", "number": 8889, "review_comments": [], "title": "Remove `mapping.date.round_ceil` setting for date math parsing" }
{ "commits": [ { "message": "Settings: Remove `mapping.date.round_ceil` setting for date math parsing\n\nThe setting `mapping.date.round_ceil` (and the undocumented setting\n`index.mapping.date.parse_upper_inclusive`) affect how date ranges using\n`lte` are parsed. In #8556 the semantics of date rounding were\nsolidified, eliminating the need to have different parsing functions\nwhether the date is inclusive or exclusive.\n\nThis change removes these legacy settings and improves the tests\nfor the date math parser (now at 100% coverage!). It also removes the\nunnecessary function `DateMathParser.parseTimeZone` for which\nthe existing `DateTimeZone.forID` handles all use cases.\n\nAny user previously using these settings can refer to the changed\nsemantics and change their query accordingly. This is a breaking change\nbecause even dates without datemath previously used the different\nparsing functions depending on context.\n\ncloses #8598\ncloses #8889" } ], "files": [ { "diff": "@@ -35,13 +35,15 @@ then follow by a math expression, supporting `+`, `-` and `/`\n Here are some samples: `now+1h`, `now+1h+1m`, `now+1h/d`,\n `2012-01-01||+1M/d`.\n \n-Note, when doing `range` type searches, and the upper value is\n-inclusive, the rounding will properly be rounded to the ceiling instead\n-of flooring it.\n-\n-To change this behavior, set \n-`\"mapping.date.round_ceil\": false`.\n-\n+When doing `range` type searches with rounding, the value parsed\n+depends on whether the end of the range is inclusive or exclusive, and\n+whether the beginning or end of the range. Rounding up moves to the\n+last millisecond of the rounding scope, and rounding down to the\n+first millisecond of the rounding scope. The semantics work as follows:\n+* `gt` - round up, and use > that value (`2014-11-18||/M` becomes `2014-11-30T23:59:59.999`, ie excluding the entire month)\n+* `gte` - round D down, and use >= that value (`2014-11-18||/M` becomes `2014-11-01`, ie including the entire month)\n+* `lt` - round D down, and use < that value (`2014-11-18||/M` becomes `2014-11-01`, ie excluding the entire month)\n+* `lte` - round D up, and use <= that value(`2014-11-18||/M` becomes `2014-11-30T23:59:59.999`, ie including the entire month)\n \n [float]\n [[built-in]]", "filename": "docs/reference/mapping/date-format.asciidoc", "status": "modified" }, { "diff": "@@ -19,23 +19,29 @@\n \n package org.elasticsearch.common.joda;\n \n+import org.apache.commons.lang3.StringUtils;\n import org.elasticsearch.ElasticsearchParseException;\n import org.joda.time.DateTimeZone;\n import org.joda.time.MutableDateTime;\n import org.joda.time.format.DateTimeFormatter;\n \n-import java.io.IOException;\n import java.util.concurrent.TimeUnit;\n \n /**\n+ * A parser for date/time formatted text with optional date math.\n+ * \n+ * The format of the datetime is configurable, and unix timestamps can also be used. 
Datemath\n+ * is appended to a datetime with the following syntax:\n+ * <code>||[+-/](\\d+)?[yMwdhHms]</code>.\n */\n public class DateMathParser {\n \n private final FormatDateTimeFormatter dateTimeFormatter;\n-\n private final TimeUnit timeUnit;\n \n public DateMathParser(FormatDateTimeFormatter dateTimeFormatter, TimeUnit timeUnit) {\n+ if (dateTimeFormatter == null) throw new NullPointerException();\n+ if (timeUnit == null) throw new NullPointerException();\n this.dateTimeFormatter = dateTimeFormatter;\n this.timeUnit = timeUnit;\n }\n@@ -44,34 +50,25 @@ public long parse(String text, long now) {\n return parse(text, now, false, null);\n }\n \n- public long parse(String text, long now, boolean roundCeil, DateTimeZone timeZone) {\n+ public long parse(String text, long now, boolean roundUp, DateTimeZone timeZone) {\n long time;\n String mathString;\n if (text.startsWith(\"now\")) {\n time = now;\n mathString = text.substring(\"now\".length());\n } else {\n int index = text.indexOf(\"||\");\n- String parseString;\n if (index == -1) {\n- parseString = text;\n- mathString = \"\"; // nothing else\n- } else {\n- parseString = text.substring(0, index);\n- mathString = text.substring(index + 2);\n+ return parseDateTime(text, timeZone);\n }\n- if (roundCeil) {\n- time = parseRoundCeilStringValue(parseString, timeZone);\n- } else {\n- time = parseStringValue(parseString, timeZone);\n+ time = parseDateTime(text.substring(0, index), timeZone);\n+ mathString = text.substring(index + 2);\n+ if (mathString.isEmpty()) {\n+ return time;\n }\n }\n \n- if (mathString.isEmpty()) {\n- return time;\n- }\n-\n- return parseMath(mathString, time, roundCeil);\n+ return parseMath(mathString, time, roundUp);\n }\n \n private long parseMath(String mathString, long time, boolean roundUp) throws ElasticsearchParseException {\n@@ -174,7 +171,9 @@ private long parseMath(String mathString, long time, boolean roundUp) throws Ela\n }\n if (propertyToRound != null) {\n if (roundUp) {\n- propertyToRound.roundCeiling();\n+ // we want to go up to the next whole value, even if we are already on a rounded value\n+ propertyToRound.add(1);\n+ propertyToRound.roundFloor();\n dateTime.addMillis(-1); // subtract 1 millisecond to get the largest inclusive value\n } else {\n propertyToRound.roundFloor();\n@@ -184,83 +183,27 @@ private long parseMath(String mathString, long time, boolean roundUp) throws Ela\n return dateTime.getMillis();\n }\n \n- /**\n- * Get a DateTimeFormatter parser applying timezone if any.\n- */\n- public static DateTimeFormatter getDateTimeFormatterParser(FormatDateTimeFormatter dateTimeFormatter, DateTimeZone timeZone) {\n- if (dateTimeFormatter == null) {\n- return null;\n+ private long parseDateTime(String value, DateTimeZone timeZone) {\n+ \n+ // first check for timestamp\n+ if (value.length() > 4 && StringUtils.isNumeric(value)) {\n+ try {\n+ long time = Long.parseLong(value);\n+ return timeUnit.toMillis(time);\n+ } catch (NumberFormatException e) {\n+ throw new ElasticsearchParseException(\"failed to parse date field [\" + value + \"] as timestamp\", e);\n+ }\n }\n-\n+ \n DateTimeFormatter parser = dateTimeFormatter.parser();\n if (timeZone != null) {\n parser = parser.withZone(timeZone);\n }\n- return parser;\n- }\n-\n- private long parseStringValue(String value, DateTimeZone timeZone) {\n try {\n- DateTimeFormatter parser = getDateTimeFormatterParser(dateTimeFormatter, timeZone);\n return parser.parseMillis(value);\n- } catch (RuntimeException e) {\n- try {\n- // When date is given as a numeric value, 
it's a date in ms since epoch\n- // By definition, it's a UTC date.\n- long time = Long.parseLong(value);\n- return timeUnit.toMillis(time);\n- } catch (NumberFormatException e1) {\n- throw new ElasticsearchParseException(\"failed to parse date field [\" + value + \"], tried both date format [\" + dateTimeFormatter.format() + \"], and timestamp number\", e);\n- }\n- }\n- }\n-\n- private long parseRoundCeilStringValue(String value, DateTimeZone timeZone) {\n- try {\n- // we create a date time for inclusive upper range, we \"include\" by default the day level data\n- // so something like 2011-01-01 will include the full first day of 2011.\n- // we also use 1970-01-01 as the base for it so we can handle searches like 10:12:55 (just time)\n- // since when we index those, the base is 1970-01-01\n- MutableDateTime dateTime = new MutableDateTime(1970, 1, 1, 23, 59, 59, 999, DateTimeZone.UTC);\n- DateTimeFormatter parser = getDateTimeFormatterParser(dateTimeFormatter, timeZone);\n- int location = parser.parseInto(dateTime, value, 0);\n- // if we parsed all the string value, we are good\n- if (location == value.length()) {\n- return dateTime.getMillis();\n- }\n- // if we did not manage to parse, or the year is really high year which is unreasonable\n- // see if its a number\n- if (location <= 0 || dateTime.getYear() > 5000) {\n- try {\n- long time = Long.parseLong(value);\n- return timeUnit.toMillis(time);\n- } catch (NumberFormatException e1) {\n- throw new ElasticsearchParseException(\"failed to parse date field [\" + value + \"], tried both date format [\" + dateTimeFormatter.format() + \"], and timestamp number\");\n- }\n- }\n- return dateTime.getMillis();\n- } catch (RuntimeException e) {\n- try {\n- long time = Long.parseLong(value);\n- return timeUnit.toMillis(time);\n- } catch (NumberFormatException e1) {\n- throw new ElasticsearchParseException(\"failed to parse date field [\" + value + \"], tried both date format [\" + dateTimeFormatter.format() + \"], and timestamp number\", e);\n- }\n- }\n- }\n-\n- public static DateTimeZone parseZone(String text) throws IOException {\n- int index = text.indexOf(':');\n- if (index != -1) {\n- int beginIndex = text.charAt(0) == '+' ? 
1 : 0;\n- // format like -02:30\n- return DateTimeZone.forOffsetHoursMinutes(\n- Integer.parseInt(text.substring(beginIndex, index)),\n- Integer.parseInt(text.substring(index + 1))\n- );\n- } else {\n- // id, listed here: http://joda-time.sourceforge.net/timezones.html\n- return DateTimeZone.forID(text);\n+ } catch (IllegalArgumentException e) {\n+ \n+ throw new ElasticsearchParseException(\"failed to parse date field [\" + value + \"] with format [\" + dateTimeFormatter.format() + \"]\", e);\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/common/joda/DateMathParser.java", "status": "modified" }, { "diff": "@@ -71,9 +71,6 @@\n import static org.elasticsearch.index.mapper.core.TypeParsers.parseDateTimeFormatter;\n import static org.elasticsearch.index.mapper.core.TypeParsers.parseNumberField;\n \n-/**\n- *\n- */\n public class DateFieldMapper extends NumberFieldMapper<Long> {\n \n public static final String CONTENT_TYPE = \"date\";\n@@ -90,7 +87,6 @@ public static class Defaults extends NumberFieldMapper.Defaults {\n public static final String NULL_VALUE = null;\n \n public static final TimeUnit TIME_UNIT = TimeUnit.MILLISECONDS;\n- public static final boolean ROUND_CEIL = true;\n }\n \n public static class Builder extends NumberFieldMapper.Builder<Builder, DateFieldMapper> {\n@@ -127,17 +123,12 @@ public Builder dateTimeFormatter(FormatDateTimeFormatter dateTimeFormatter) {\n \n @Override\n public DateFieldMapper build(BuilderContext context) {\n- boolean roundCeil = Defaults.ROUND_CEIL;\n- if (context.indexSettings() != null) {\n- Settings settings = context.indexSettings();\n- roundCeil = settings.getAsBoolean(\"index.mapping.date.round_ceil\", settings.getAsBoolean(\"index.mapping.date.parse_upper_inclusive\", Defaults.ROUND_CEIL));\n- }\n fieldType.setOmitNorms(fieldType.omitNorms() && boost == 1.0f);\n if (!locale.equals(dateTimeFormatter.locale())) {\n dateTimeFormatter = new FormatDateTimeFormatter(dateTimeFormatter.format(), dateTimeFormatter.parser(), dateTimeFormatter.printer(), locale);\n }\n DateFieldMapper fieldMapper = new DateFieldMapper(buildNames(context), dateTimeFormatter,\n- fieldType.numericPrecisionStep(), boost, fieldType, docValues, nullValue, timeUnit, roundCeil, ignoreMalformed(context), coerce(context),\n+ fieldType.numericPrecisionStep(), boost, fieldType, docValues, nullValue, timeUnit, ignoreMalformed(context), coerce(context),\n postingsProvider, docValuesProvider, similarity, normsLoading, fieldDataSettings, context.indexSettings(),\n multiFieldsBuilder.build(this, context), copyTo);\n fieldMapper.includeInAll(includeInAll);\n@@ -182,23 +173,14 @@ public static class TypeParser implements Mapper.TypeParser {\n \n protected FormatDateTimeFormatter dateTimeFormatter;\n \n- // Triggers rounding up of the upper bound for range queries and filters if\n- // set to true.\n- // Rounding up a date here has the following meaning: If a date is not\n- // defined with full precision, for example, no milliseconds given, the date\n- // will be filled up to the next larger date with that precision.\n- // Example: An upper bound given as \"2000-01-01\", will be converted to\n- // \"2000-01-01T23.59.59.999\"\n- private final boolean roundCeil;\n-\n private final DateMathParser dateMathParser;\n \n private String nullValue;\n \n protected final TimeUnit timeUnit;\n \n protected DateFieldMapper(Names names, FormatDateTimeFormatter dateTimeFormatter, int precisionStep, float boost, FieldType fieldType, Boolean docValues,\n- String nullValue, TimeUnit timeUnit, boolean 
roundCeil, Explicit<Boolean> ignoreMalformed,Explicit<Boolean> coerce,\n+ String nullValue, TimeUnit timeUnit, Explicit<Boolean> ignoreMalformed,Explicit<Boolean> coerce,\n PostingsFormatProvider postingsProvider, DocValuesFormatProvider docValuesProvider, SimilarityProvider similarity,\n \n Loading normsLoading, @Nullable Settings fieldDataSettings, Settings indexSettings, MultiFields multiFields, CopyTo copyTo) {\n@@ -208,7 +190,6 @@ protected DateFieldMapper(Names names, FormatDateTimeFormatter dateTimeFormatter\n this.dateTimeFormatter = dateTimeFormatter;\n this.nullValue = nullValue;\n this.timeUnit = timeUnit;\n- this.roundCeil = roundCeil;\n this.dateMathParser = new DateMathParser(dateTimeFormatter, timeUnit);\n }\n \n@@ -328,8 +309,7 @@ public long parseToMilliseconds(String value, boolean inclusive, @Nullable DateT\n if (forcedDateParser != null) {\n dateParser = forcedDateParser;\n }\n- boolean roundUp = inclusive && roundCeil;\n- return dateParser.parse(value, now, roundUp, zone);\n+ return dateParser.parse(value, now, inclusive, zone);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -49,8 +49,6 @@\n import static org.elasticsearch.index.mapper.core.TypeParsers.parseDateTimeFormatter;\n import static org.elasticsearch.index.mapper.core.TypeParsers.parseField;\n \n-/**\n- */\n public class TimestampFieldMapper extends DateFieldMapper implements InternalMapper, RootMapper {\n \n public static final String NAME = \"_timestamp\";\n@@ -123,12 +121,7 @@ public TimestampFieldMapper build(BuilderContext context) {\n assert fieldType.stored();\n fieldType.setStored(false);\n }\n- boolean roundCeil = Defaults.ROUND_CEIL;\n- if (context.indexSettings() != null) {\n- Settings settings = context.indexSettings();\n- roundCeil = settings.getAsBoolean(\"index.mapping.date.round_ceil\", settings.getAsBoolean(\"index.mapping.date.parse_upper_inclusive\", Defaults.ROUND_CEIL));\n- }\n- return new TimestampFieldMapper(fieldType, docValues, enabledState, path, dateTimeFormatter, defaultTimestamp, roundCeil,\n+ return new TimestampFieldMapper(fieldType, docValues, enabledState, path, dateTimeFormatter, defaultTimestamp,\n ignoreMalformed(context), coerce(context), postingsProvider, docValuesProvider, normsLoading, fieldDataSettings, context.indexSettings());\n }\n }\n@@ -173,18 +166,18 @@ private static FieldType defaultFieldType(Settings settings) {\n \n public TimestampFieldMapper(Settings indexSettings) {\n this(new FieldType(defaultFieldType(indexSettings)), null, Defaults.ENABLED, Defaults.PATH, Defaults.DATE_TIME_FORMATTER, Defaults.DEFAULT_TIMESTAMP,\n- Defaults.ROUND_CEIL, Defaults.IGNORE_MALFORMED, Defaults.COERCE, null, null, null, null, indexSettings);\n+ Defaults.IGNORE_MALFORMED, Defaults.COERCE, null, null, null, null, indexSettings);\n }\n \n protected TimestampFieldMapper(FieldType fieldType, Boolean docValues, EnabledAttributeMapper enabledState, String path,\n- FormatDateTimeFormatter dateTimeFormatter, String defaultTimestamp, boolean roundCeil,\n+ FormatDateTimeFormatter dateTimeFormatter, String defaultTimestamp,\n Explicit<Boolean> ignoreMalformed, Explicit<Boolean> coerce, PostingsFormatProvider postingsProvider,\n DocValuesFormatProvider docValuesProvider, Loading normsLoading,\n @Nullable Settings fieldDataSettings, Settings indexSettings) {\n super(new Names(Defaults.NAME, Defaults.NAME, Defaults.NAME, Defaults.NAME), dateTimeFormatter,\n Defaults.PRECISION_STEP_64_BIT, 
Defaults.BOOST, fieldType, docValues,\n Defaults.NULL_VALUE, TimeUnit.MILLISECONDS /*always milliseconds*/,\n- roundCeil, ignoreMalformed, coerce, postingsProvider, docValuesProvider, null, normsLoading, fieldDataSettings, \n+ ignoreMalformed, coerce, postingsProvider, docValuesProvider, null, normsLoading, fieldDataSettings, \n indexSettings, MultiFields.empty(), null);\n this.enabledState = enabledState;\n this.path = path;", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/TimestampFieldMapper.java", "status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.common.ParseField;\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.joda.DateMathParser;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.Settings;\n@@ -38,6 +37,7 @@\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.query.support.QueryParsers;\n+import org.joda.time.DateTimeZone;\n \n import java.io.IOException;\n import java.util.Locale;\n@@ -197,7 +197,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n qpSettings.locale(LocaleUtils.parse(localeStr));\n } else if (\"time_zone\".equals(currentFieldName)) {\n try {\n- qpSettings.timeZone(DateMathParser.parseZone(parser.text()));\n+ qpSettings.timeZone(DateTimeZone.forID(parser.text()));\n } catch (IllegalArgumentException e) {\n throw new QueryParsingException(parseContext.index(), \"[query_string] time_zone [\" + parser.text() + \"] is unknown\");\n }", "filename": "src/main/java/org/elasticsearch/index/query/QueryStringQueryParser.java", "status": "modified" }, { "diff": "@@ -101,7 +101,7 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n to = parser.objectBytes();\n includeUpper = true;\n } else if (\"time_zone\".equals(currentFieldName) || \"timeZone\".equals(currentFieldName)) {\n- timeZone = DateMathParser.parseZone(parser.text());\n+ timeZone = DateTimeZone.forID(parser.text());\n } else if (\"format\".equals(currentFieldName)) {\n forcedDateParser = new DateMathParser(Joda.forPattern(parser.text()), DateFieldMapper.Defaults.TIME_UNIT);\n } else {", "filename": "src/main/java/org/elasticsearch/index/query/RangeFilterParser.java", "status": "modified" }, { "diff": "@@ -102,7 +102,7 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n to = parser.objectBytes();\n includeUpper = true;\n } else if (\"time_zone\".equals(currentFieldName) || \"timeZone\".equals(currentFieldName)) {\n- timeZone = DateMathParser.parseZone(parser.text());\n+ timeZone = DateTimeZone.forID(parser.text());\n } else if (\"_name\".equals(currentFieldName)) {\n queryName = parser.text();\n } else if (\"format\".equals(currentFieldName)) {", "filename": "src/main/java/org/elasticsearch/index/query/RangeQueryParser.java", "status": "modified" }, { "diff": "@@ -100,11 +100,11 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n continue;\n } else if (token == XContentParser.Token.VALUE_STRING) {\n if (\"time_zone\".equals(currentFieldName) || \"timeZone\".equals(currentFieldName)) {\n- preZone = DateMathParser.parseZone(parser.text());\n+ preZone = DateTimeZone.forID(parser.text());\n } else if (\"pre_zone\".equals(currentFieldName) || 
\"preZone\".equals(currentFieldName)) {\n- preZone = DateMathParser.parseZone(parser.text());\n+ preZone = DateTimeZone.forID(parser.text());\n } else if (\"post_zone\".equals(currentFieldName) || \"postZone\".equals(currentFieldName)) {\n- postZone = DateMathParser.parseZone(parser.text());\n+ postZone = DateTimeZone.forID(parser.text());\n } else if (\"pre_offset\".equals(currentFieldName) || \"preOffset\".equals(currentFieldName)) {\n preOffset = parseOffset(parser.text());\n } else if (\"post_offset\".equals(currentFieldName) || \"postOffset\".equals(currentFieldName)) {", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/histogram/DateHistogramParser.java", "status": "modified" }, { "diff": "@@ -20,11 +20,14 @@\n package org.elasticsearch.common.joda;\n \n import org.elasticsearch.ElasticsearchParseException;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.joda.time.DateTimeZone;\n \n import java.util.concurrent.TimeUnit;\n \n+import static org.hamcrest.Matchers.equalTo;\n+\n public class DateMathParserTests extends ElasticsearchTestCase {\n FormatDateTimeFormatter formatter = Joda.forPattern(\"dateOptionalTime\");\n DateMathParser parser = new DateMathParser(formatter, TimeUnit.MILLISECONDS);\n@@ -34,16 +37,19 @@ void assertDateMathEquals(String toTest, String expected) {\n }\n \n void assertDateMathEquals(String toTest, String expected, long now, boolean roundUp, DateTimeZone timeZone) {\n- DateMathParser parser = new DateMathParser(Joda.forPattern(\"dateOptionalTime\"), TimeUnit.MILLISECONDS);\n- long gotMillis = parser.parse(toTest, now, roundUp, null);\n+ long gotMillis = parser.parse(toTest, now, roundUp, timeZone);\n+ assertDateEquals(gotMillis, toTest, expected);\n+ }\n+ \n+ void assertDateEquals(long gotMillis, String original, String expected) {\n long expectedMillis = parser.parse(expected, 0);\n if (gotMillis != expectedMillis) {\n fail(\"Date math not equal\\n\" +\n- \"Original : \" + toTest + \"\\n\" +\n- \"Parsed : \" + formatter.printer().print(gotMillis) + \"\\n\" +\n- \"Expected : \" + expected + \"\\n\" +\n- \"Expected milliseconds : \" + expectedMillis + \"\\n\" +\n- \"Actual milliseconds : \" + gotMillis + \"\\n\");\n+ \"Original : \" + original + \"\\n\" +\n+ \"Parsed : \" + formatter.printer().print(gotMillis) + \"\\n\" +\n+ \"Expected : \" + expected + \"\\n\" +\n+ \"Expected milliseconds : \" + expectedMillis + \"\\n\" +\n+ \"Actual milliseconds : \" + gotMillis + \"\\n\");\n }\n }\n \n@@ -56,6 +62,23 @@ public void testBasicDates() {\n assertDateMathEquals(\"2014-05-30T20:21:35\", \"2014-05-30T20:21:35.000\");\n assertDateMathEquals(\"2014-05-30T20:21:35.123\", \"2014-05-30T20:21:35.123\");\n }\n+\n+ public void testRoundingDoesNotAffectExactDate() {\n+ assertDateMathEquals(\"2014-11-12T22:55:00Z\", \"2014-11-12T22:55:00Z\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-12T22:55:00Z\", \"2014-11-12T22:55:00Z\", 0, false, null);\n+ }\n+\n+ public void testTimezone() {\n+ // timezone works within date format\n+ assertDateMathEquals(\"2014-05-30T20:21+02:00\", \"2014-05-30T18:21:00.000\");\n+\n+ // but also externally\n+ assertDateMathEquals(\"2014-05-30T20:21\", \"2014-05-30T18:21:00.000\", 0, false, DateTimeZone.forID(\"+02:00\"));\n+\n+ // and timezone in the date has priority\n+ assertDateMathEquals(\"2014-05-30T20:21+03:00\", \"2014-05-30T17:21:00.000\", 0, false, DateTimeZone.forID(\"-08:00\"));\n+ assertDateMathEquals(\"2014-05-30T20:21Z\", 
\"2014-05-30T20:21:00.000\", 0, false, DateTimeZone.forID(\"-08:00\"));\n+ }\n \n public void testBasicMath() {\n assertDateMathEquals(\"2014-11-18||+y\", \"2015-11-18\");\n@@ -81,6 +104,10 @@ public void testBasicMath() {\n assertDateMathEquals(\"2014-11-18T14:27:32||+60s\", \"2014-11-18T14:28:32\");\n assertDateMathEquals(\"2014-11-18T14:27:32||-3600s\", \"2014-11-18T13:27:32\");\n }\n+ \n+ public void testLenientEmptyMath() {\n+ assertDateMathEquals(\"2014-05-30T20:21||\", \"2014-05-30T20:21:00.000\");\n+ }\n \n public void testMultipleAdjustments() {\n assertDateMathEquals(\"2014-11-18||+1M-1M\", \"2014-11-18\");\n@@ -97,13 +124,16 @@ public void testNow() {\n assertDateMathEquals(\"now+M\", \"2014-12-18T14:27:32\", now, false, null);\n assertDateMathEquals(\"now-2d\", \"2014-11-16T14:27:32\", now, false, null);\n assertDateMathEquals(\"now/m\", \"2014-11-18T14:27\", now, false, null);\n+ \n+ // timezone does not affect now\n+ assertDateMathEquals(\"now/m\", \"2014-11-18T14:27\", now, false, DateTimeZone.forID(\"+02:00\"));\n }\n \n public void testRounding() {\n assertDateMathEquals(\"2014-11-18||/y\", \"2014-01-01\", 0, false, null);\n assertDateMathEquals(\"2014-11-18||/y\", \"2014-12-31T23:59:59.999\", 0, true, null);\n assertDateMathEquals(\"2014||/y\", \"2014-01-01\", 0, false, null);\n- assertDateMathEquals(\"2014||/y\", \"2014-12-31T23:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-01-01T00:00:00.001||/y\", \"2014-12-31T23:59:59.999\", 0, true, null);\n \n assertDateMathEquals(\"2014-11-18||/M\", \"2014-11-01\", 0, false, null);\n assertDateMathEquals(\"2014-11-18||/M\", \"2014-11-30T23:59:59.999\", 0, true, null);\n@@ -140,20 +170,47 @@ public void testRounding() {\n assertDateMathEquals(\"2014-11-18T14:27:32||/s\", \"2014-11-18T14:27:32.999\", 0, true, null);\n }\n \n- void assertParseException(String msg, String date) {\n+ public void testTimestamps() {\n+ assertDateMathEquals(\"1418248078000\", \"2014-12-10T21:47:58.000\");\n+\n+ // timezone does not affect timestamps\n+ assertDateMathEquals(\"1418248078000\", \"2014-12-10T21:47:58.000\", 0, false, DateTimeZone.forID(\"-08:00\"));\n+\n+ // datemath still works on timestamps\n+ assertDateMathEquals(\"1418248078000||/m\", \"2014-12-10T21:47:00.000\");\n+ \n+ // also check other time units\n+ DateMathParser parser = new DateMathParser(Joda.forPattern(\"dateOptionalTime\"), TimeUnit.SECONDS);\n+ long datetime = parser.parse(\"1418248078\", 0);\n+ assertDateEquals(datetime, \"1418248078\", \"2014-12-10T21:47:58.000\");\n+ \n+ // a timestamp before 10000 is a year\n+ assertDateMathEquals(\"9999\", \"9999-01-01T00:00:00.000\");\n+ // 10000 is the first timestamp\n+ assertDateMathEquals(\"10000\", \"1970-01-01T00:00:10.000\");\n+ // but 10000 with T is still a date format\n+ assertDateMathEquals(\"10000T\", \"10000-01-01T00:00:00.000\");\n+ }\n+ \n+ void assertParseException(String msg, String date, String exc) {\n try {\n parser.parse(date, 0);\n fail(\"Date: \" + date + \"\\n\" + msg);\n } catch (ElasticsearchParseException e) {\n- // expected\n+ assertThat(ExceptionsHelper.detailedMessage(e).contains(exc), equalTo(true));\n }\n }\n \n public void testIllegalMathFormat() {\n- assertParseException(\"Expected date math unsupported operator exception\", \"2014-11-18||*5\");\n- assertParseException(\"Expected date math incompatible rounding exception\", \"2014-11-18||/2m\");\n- assertParseException(\"Expected date math illegal unit type exception\", \"2014-11-18||+2a\");\n- assertParseException(\"Expected date math 
truncation exception\", \"2014-11-18||+12\");\n- assertParseException(\"Expected date math truncation exception\", \"2014-11-18||-\");\n+ assertParseException(\"Expected date math unsupported operator exception\", \"2014-11-18||*5\", \"operator not supported\");\n+ assertParseException(\"Expected date math incompatible rounding exception\", \"2014-11-18||/2m\", \"rounding\");\n+ assertParseException(\"Expected date math illegal unit type exception\", \"2014-11-18||+2a\", \"unit [a] not supported\");\n+ assertParseException(\"Expected date math truncation exception\", \"2014-11-18||+12\", \"truncated\");\n+ assertParseException(\"Expected date math truncation exception\", \"2014-11-18||-\", \"truncated\");\n+ }\n+ \n+ public void testIllegalDateFormat() {\n+ assertParseException(\"Expected bad timestamp exception\", Long.toString(Long.MAX_VALUE) + \"0\", \"timestamp\");\n+ assertParseException(\"Expected bad date format exception\", \"123bogus\", \"with format\");\n }\n }", "filename": "src/test/java/org/elasticsearch/common/joda/DateMathParserTests.java", "status": "modified" }, { "diff": "@@ -104,20 +104,6 @@ public void simpleIdTests() {\n assertHitCount(countResponse, 1l);\n }\n \n- @Test\n- public void simpleDateMathTests() throws Exception {\n- createIndex(\"test\");\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", \"2010-01-05T02:00\").execute().actionGet();\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", \"2010-01-06T02:00\").execute().actionGet();\n- ensureGreen();\n- refresh();\n- CountResponse countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-03||+2d\").lte(\"2010-01-04||+2d\")).execute().actionGet();\n- assertHitCount(countResponse, 2l);\n-\n- countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.queryStringQuery(\"field:[2010-01-03||+2d TO 2010-01-04||+2d]\")).execute().actionGet();\n- assertHitCount(countResponse, 2l);\n- }\n-\n @Test\n public void simpleCountEarlyTerminationTests() throws Exception {\n // set up one shard only to test early termination\n@@ -130,29 +116,26 @@ public void simpleCountEarlyTerminationTests() throws Exception {\n \n for (int i = 1; i <= max; i++) {\n String id = String.valueOf(i);\n- docbuilders.add(client().prepareIndex(\"test\", \"type1\", id).setSource(\"field\", \"2010-01-\"+ id +\"T02:00\"));\n+ docbuilders.add(client().prepareIndex(\"test\", \"type1\", id).setSource(\"field\", i));\n }\n \n indexRandom(true, docbuilders);\n ensureGreen();\n refresh();\n \n- String upperBound = \"2010-01-\" + String.valueOf(max+1) + \"||+2d\";\n- String lowerBound = \"2009-12-01||+2d\";\n-\n // sanity check\n- CountResponse countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(lowerBound).lte(upperBound)).execute().actionGet();\n+ CountResponse countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(1).lte(max)).execute().actionGet();\n assertHitCount(countResponse, max);\n \n // threshold <= actual count\n for (int i = 1; i <= max; i++) {\n- countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(lowerBound).lte(upperBound)).setTerminateAfter(i).execute().actionGet();\n+ countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(1).lte(max)).setTerminateAfter(i).execute().actionGet();\n assertHitCount(countResponse, i);\n assertTrue(countResponse.terminatedEarly());\n }\n \n 
// threshold > actual count\n- countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(lowerBound).lte(upperBound)).setTerminateAfter(max + randomIntBetween(1, max)).execute().actionGet();\n+ countResponse = client().prepareCount(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(1).lte(max)).setTerminateAfter(max + randomIntBetween(1, max)).execute().actionGet();\n assertHitCount(countResponse, max);\n assertFalse(countResponse.terminatedEarly());\n }", "filename": "src/test/java/org/elasticsearch/count/simple/SimpleCountTests.java", "status": "modified" }, { "diff": "@@ -99,29 +99,15 @@ public void simpleIdTests() {\n assertExists(existsResponse, true);\n }\n \n- @Test\n- public void simpleDateMathTests() throws Exception {\n- createIndex(\"test\");\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", \"2010-01-05T02:00\").execute().actionGet();\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", \"2010-01-06T02:00\").execute().actionGet();\n- ensureGreen();\n- refresh();\n- ExistsResponse existsResponse = client().prepareExists(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-03||+2d\").lte(\"2010-01-04||+2d\")).execute().actionGet();\n- assertExists(existsResponse, true);\n-\n- existsResponse = client().prepareExists(\"test\").setQuery(QueryBuilders.queryStringQuery(\"field:[2010-01-03||+2d TO 2010-01-04||+2d]\")).execute().actionGet();\n- assertExists(existsResponse, true);\n- }\n-\n @Test\n public void simpleNonExistenceTests() throws Exception {\n createIndex(\"test\");\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", \"2010-01-05T02:00\").execute().actionGet();\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", \"2010-01-06T02:00\").execute().actionGet();\n+ client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", 2).execute().actionGet();\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", 5).execute().actionGet();\n client().prepareIndex(\"test\", \"type\", \"XXX1\").setSource(\"field\", \"value\").execute().actionGet();\n ensureGreen();\n refresh();\n- ExistsResponse existsResponse = client().prepareExists(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-07||+2d\").lte(\"2010-01-21||+2d\")).execute().actionGet();\n+ ExistsResponse existsResponse = client().prepareExists(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(6).lte(8)).execute().actionGet();\n assertExists(existsResponse, false);\n \n existsResponse = client().prepareExists(\"test\").setQuery(QueryBuilders.queryStringQuery(\"_id:XXY*\").lowercaseExpandedTerms(false)).execute().actionGet();", "filename": "src/test/java/org/elasticsearch/exists/SimpleExistsTests.java", "status": "modified" }, { "diff": "@@ -232,7 +232,7 @@ public void testHourFormat() throws Exception {\n }\n assertThat(filter, instanceOf(NumericRangeFilter.class));\n NumericRangeFilter<Long> rangeFilter = (NumericRangeFilter<Long>) filter;\n- assertThat(rangeFilter.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(11).millis() + 999).getMillis())); // +999 to include the 00-01 minute\n+ assertThat(rangeFilter.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(11).millis()).getMillis()));\n assertThat(rangeFilter.getMin(), equalTo(new DateTime(TimeValue.timeValueHours(10).millis()).getMillis()));\n }\n \n@@ -262,7 +262,7 @@ public void testDayWithoutYearFormat() throws Exception {\n }\n assertThat(filter, 
instanceOf(NumericRangeFilter.class));\n NumericRangeFilter<Long> rangeFilter = (NumericRangeFilter<Long>) filter;\n- assertThat(rangeFilter.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(35).millis() + 999).getMillis())); // +999 to include the 00-01 minute\n+ assertThat(rangeFilter.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(35).millis()).getMillis()));\n assertThat(rangeFilter.getMin(), equalTo(new DateTime(TimeValue.timeValueHours(34).millis()).getMillis()));\n }\n ", "filename": "src/test/java/org/elasticsearch/index/mapper/date/SimpleDateMappingTests.java", "status": "modified" }, { "diff": "@@ -5,7 +5,7 @@\n \"born\" : {\n \"gte\": \"2012-01-01\",\n \"lte\": \"now\",\n- \"time_zone\": \"+1:00\"\n+ \"time_zone\": \"+01:00\"\n }\n }\n }", "filename": "src/test/java/org/elasticsearch/index/query/date_range_filter_timezone.json", "status": "modified" }, { "diff": "@@ -5,7 +5,7 @@\n \"age\" : {\n \"gte\": \"0\",\n \"lte\": \"100\",\n- \"time_zone\": \"-1:00\"\n+ \"time_zone\": \"-01:00\"\n }\n }\n }", "filename": "src/test/java/org/elasticsearch/index/query/date_range_filter_timezone_numeric_field.json", "status": "modified" }, { "diff": "@@ -3,7 +3,7 @@\n \"born\" : {\n \"gte\": \"2012-01-01\",\n \"lte\": \"now\",\n- \"time_zone\": \"+1:00\"\n+ \"time_zone\": \"+01:00\"\n }\n }\n }", "filename": "src/test/java/org/elasticsearch/index/query/date_range_query_timezone.json", "status": "modified" }, { "diff": "@@ -3,7 +3,7 @@\n \"age\" : {\n \"gte\": \"0\",\n \"lte\": \"100\",\n- \"time_zone\": \"-1:00\"\n+ \"time_zone\": \"-01:00\"\n }\n }\n }", "filename": "src/test/java/org/elasticsearch/index/query/date_range_query_timezone_numeric_field.json", "status": "modified" }, { "diff": "@@ -1047,7 +1047,7 @@ public void singleValue_WithPreZone() throws Exception {\n .setQuery(matchAllQuery())\n .addAggregation(dateHistogram(\"date_histo\")\n .field(\"date\")\n- .preZone(\"-2:00\")\n+ .preZone(\"-02:00\")\n .interval(DateHistogram.Interval.DAY)\n .format(\"yyyy-MM-dd\"))\n .execute().actionGet();\n@@ -1082,7 +1082,7 @@ public void singleValue_WithPreZone_WithAadjustLargeInterval() throws Exception\n .setQuery(matchAllQuery())\n .addAggregation(dateHistogram(\"date_histo\")\n .field(\"date\")\n- .preZone(\"-2:00\")\n+ .preZone(\"-02:00\")\n .interval(DateHistogram.Interval.DAY)\n .preZoneAdjustLargeInterval(true)\n .format(\"yyyy-MM-dd'T'HH:mm:ss\"))", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/DateHistogramTests.java", "status": "modified" }, { "diff": "@@ -2252,47 +2252,47 @@ public void testRangeFilterWithTimeZone() throws Exception {\n \n // We define a time zone to be applied to the filter and from/to have no time zone\n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"2014-01-01T03:00:00\").to(\"2014-01-01T03:59:00\").timeZone(\"+3:00\")))\n+ .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"2014-01-01T03:00:00\").to(\"2014-01-01T03:59:00\").timeZone(\"+03:00\")))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"1\"));\n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"2014-01-01T02:00:00\").to(\"2014-01-01T02:59:00\").timeZone(\"+3:00\")))\n+ .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), 
FilterBuilders.rangeFilter(\"date\").from(\"2014-01-01T02:00:00\").to(\"2014-01-01T02:59:00\").timeZone(\"+03:00\")))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"2\"));\n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"2014-01-01T04:00:00\").to(\"2014-01-01T04:59:00\").timeZone(\"+3:00\")))\n+ .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"2014-01-01T04:00:00\").to(\"2014-01-01T04:59:00\").timeZone(\"+03:00\")))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"3\"));\n \n // When we use long values, it means we have ms since epoch UTC based so we don't apply any transformation\n try {\n client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(1388534400000L).to(1388537940999L).timeZone(\"+1:00\")))\n+ .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(1388534400000L).to(1388537940999L).timeZone(\"+01:00\")))\n .get();\n fail(\"A Range Filter using ms since epoch with a TimeZone should raise a QueryParsingException\");\n } catch (SearchPhaseExecutionException e) {\n // We expect it\n }\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"2014-01-01\").to(\"2014-01-01T00:59:00\").timeZone(\"-1:00\")))\n+ .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"2014-01-01\").to(\"2014-01-01T00:59:00\").timeZone(\"-01:00\")))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"3\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"now/d-1d\").timeZone(\"+1:00\")))\n+ .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"date\").from(\"now/d-1d\").timeZone(\"+01:00\")))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"4\"));\n \n // A Range Filter on a numeric field with a TimeZone should raise an exception\n try {\n client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"num\").from(\"0\").to(\"4\").timeZone(\"-1:00\")))\n+ .setQuery(QueryBuilders.filteredQuery(matchAllQuery(), FilterBuilders.rangeFilter(\"num\").from(\"0\").to(\"4\").timeZone(\"-01:00\")))\n .get();\n fail(\"A Range Filter on a numeric field with a TimeZone should raise a QueryParsingException\");\n } catch (SearchPhaseExecutionException e) {\n@@ -2348,47 +2348,47 @@ public void testRangeQueryWithTimeZone() throws Exception {\n \n // We define a time zone to be applied to the filter and from/to have no time zone\n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01T03:00:00\").to(\"2014-01-01T03:59:00\").timeZone(\"+3:00\"))\n+ .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01T03:00:00\").to(\"2014-01-01T03:59:00\").timeZone(\"+03:00\"))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"1\"));\n searchResponse = 
client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01T02:00:00\").to(\"2014-01-01T02:59:00\").timeZone(\"+3:00\"))\n+ .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01T02:00:00\").to(\"2014-01-01T02:59:00\").timeZone(\"+03:00\"))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"2\"));\n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01T04:00:00\").to(\"2014-01-01T04:59:00\").timeZone(\"+3:00\"))\n+ .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01T04:00:00\").to(\"2014-01-01T04:59:00\").timeZone(\"+03:00\"))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"3\"));\n \n // When we use long values, it means we have ms since epoch UTC based so we don't apply any transformation\n try {\n client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date\").from(1388534400000L).to(1388537940999L).timeZone(\"+1:00\"))\n+ .setQuery(QueryBuilders.rangeQuery(\"date\").from(1388534400000L).to(1388537940999L).timeZone(\"+01:00\"))\n .get();\n fail(\"A Range Filter using ms since epoch with a TimeZone should raise a QueryParsingException\");\n } catch (SearchPhaseExecutionException e) {\n // We expect it\n }\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01\").to(\"2014-01-01T00:59:00\").timeZone(\"-1:00\"))\n+ .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"2014-01-01\").to(\"2014-01-01T00:59:00\").timeZone(\"-01:00\"))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"3\"));\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"now/d-1d\").timeZone(\"+1:00\"))\n+ .setQuery(QueryBuilders.rangeQuery(\"date\").from(\"now/d-1d\").timeZone(\"+01:00\"))\n .get();\n assertHitCount(searchResponse, 1l);\n assertThat(searchResponse.getHits().getAt(0).getId(), is(\"4\"));\n \n // A Range Filter on a numeric field with a TimeZone should raise an exception\n try {\n client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"num\").from(\"0\").to(\"4\").timeZone(\"-1:00\"))\n+ .setQuery(QueryBuilders.rangeQuery(\"num\").from(\"0\").to(\"4\").timeZone(\"-01:00\"))\n .get();\n fail(\"A Range Filter on a numeric field with a TimeZone should raise a QueryParsingException\");\n } catch (SearchPhaseExecutionException e) {", "filename": "src/test/java/org/elasticsearch/search/query/SimpleQueryTests.java", "status": "modified" }, { "diff": "@@ -126,53 +126,34 @@ public void simpleIdTests() {\n }\n \n @Test\n- public void simpleDateRangeWithUpperInclusiveEnabledTests() throws Exception {\n+ public void simpleDateRangeTests() throws Exception {\n createIndex(\"test\");\n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", \"2010-01-05T02:00\").execute().actionGet();\n client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", \"2010-01-06T02:00\").execute().actionGet();\n+ ensureGreen();\n refresh();\n+ SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-03||+2d\").lte(\"2010-01-04||+2d/d\")).execute().actionGet();\n+ assertNoFailures(searchResponse);\n+ assertHitCount(searchResponse, 2l);\n \n- // test include upper on ranges to include the full day on the 
upper bound\n- SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-05\").lte(\"2010-01-06\")).execute().actionGet();\n+ searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-05T02:00\").lte(\"2010-01-06T02:00\")).execute().actionGet();\n+ assertNoFailures(searchResponse);\n assertHitCount(searchResponse, 2l);\n- searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-05\").lt(\"2010-01-06\")).execute().actionGet();\n- assertHitCount(searchResponse, 1l);\n- }\n \n- @Test\n- public void simpleDateRangeWithUpperInclusiveDisabledTests() throws Exception {\n- assertAcked(prepareCreate(\"test\").setSettings(ImmutableSettings.settingsBuilder()\n- .put(indexSettings())\n- .put(\"index.mapping.date.round_ceil\", false)));\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", \"2010-01-05T02:00\").execute().actionGet();\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", \"2010-01-06T02:00\").execute().actionGet();\n- ensureGreen();\n- refresh();\n- // test include upper on ranges to include the full day on the upper bound (disabled here though...)\n- SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-05\").lte(\"2010-01-06\")).execute().actionGet();\n+ searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-05T02:00\").lt(\"2010-01-06T02:00\")).execute().actionGet();\n assertNoFailures(searchResponse);\n assertHitCount(searchResponse, 1l);\n- searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-05\").lt(\"2010-01-06\")).execute().actionGet();\n- assertHitCount(searchResponse, 1l);\n- }\n \n- @Test @TestLogging(\"action.search.type:TRACE,action.admin.indices.refresh:TRACE\")\n- public void simpleDateMathTests() throws Exception {\n- createIndex(\"test\");\n- client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", \"2010-01-05T02:00\").execute().actionGet();\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", \"2010-01-06T02:00\").execute().actionGet();\n- ensureGreen();\n- refresh();\n- SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-03||+2d\").lte(\"2010-01-04||+2d\")).execute().actionGet();\n+ searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gt(\"2010-01-05T02:00\").lt(\"2010-01-06T02:00\")).execute().actionGet();\n assertNoFailures(searchResponse);\n- assertHitCount(searchResponse, 2l);\n+ assertHitCount(searchResponse, 0l);\n \n- searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.queryStringQuery(\"field:[2010-01-03||+2d TO 2010-01-04||+2d]\")).execute().actionGet();\n+ searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.queryStringQuery(\"field:[2010-01-03||+2d TO 2010-01-04||+2d/d]\")).execute().actionGet();\n assertHitCount(searchResponse, 2l);\n }\n \n @Test\n- public void localDependentDateTests() throws Exception {\n+ public void localeDependentDateTests() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .addMapping(\"type1\",\n jsonBuilder().startObject()\n@@ -219,28 +200,25 @@ public void simpleTerminateAfterCountTests() throws Exception {\n \n for (int i = 1; i <= max; i++) {\n 
String id = String.valueOf(i);\n- docbuilders.add(client().prepareIndex(\"test\", \"type1\", id).setSource(\"field\", \"2010-01-\"+ id +\"T02:00\"));\n+ docbuilders.add(client().prepareIndex(\"test\", \"type1\", id).setSource(\"field\", i));\n }\n \n indexRandom(true, docbuilders);\n ensureGreen();\n refresh();\n \n- String upperBound = \"2010-01-\" + String.valueOf(max+1) + \"||+2d\";\n- String lowerBound = \"2009-12-01||+2d\";\n-\n SearchResponse searchResponse;\n \n for (int i = 1; i <= max; i++) {\n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"field\").gte(lowerBound).lte(upperBound))\n+ .setQuery(QueryBuilders.rangeQuery(\"field\").gte(1).lte(max))\n .setTerminateAfter(i).execute().actionGet();\n assertHitCount(searchResponse, (long)i);\n assertTrue(searchResponse.isTerminatedEarly());\n }\n \n searchResponse = client().prepareSearch(\"test\")\n- .setQuery(QueryBuilders.rangeQuery(\"field\").gte(lowerBound).lte(upperBound))\n+ .setQuery(QueryBuilders.rangeQuery(\"field\").gte(1).lte(max))\n .setTerminateAfter(2 * max).execute().actionGet();\n \n assertHitCount(searchResponse, max);", "filename": "src/test/java/org/elasticsearch/search/simple/SimpleSearchTests.java", "status": "modified" } ] }
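The rewritten simpleDateRangeTests above hinge on how date-math rounding interacts with range bounds. Below is a minimal sketch of the same two cases through the Java API; index name, field name, documents and expected hit counts are taken from the test diff, and the Client is assumed to be already connected.

```
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;

public class DateMathRangeSketch {
    // Assumes the two documents indexed by the test above:
    // "2010-01-05T02:00" and "2010-01-06T02:00" in field "field" of index "test".
    static void run(Client client) {
        // "2010-01-04||+2d/d": the /d rounding stretches the lte bound across the whole
        // of 2010-01-06, so both documents match (the test asserts 2 hits).
        SearchResponse rounded = client.prepareSearch("test")
                .setQuery(QueryBuilders.rangeQuery("field").gte("2010-01-03||+2d").lte("2010-01-04||+2d/d"))
                .get();

        // Exact timestamps are taken literally: lt on 2010-01-06T02:00 excludes the
        // second document (the test asserts 1 hit).
        SearchResponse exact = client.prepareSearch("test")
                .setQuery(QueryBuilders.rangeQuery("field").gte("2010-01-05T02:00").lt("2010-01-06T02:00"))
                .get();

        System.out.println(rounded.getHits().totalHits() + " / " + exact.getHits().totalHits());
    }
}
```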
{ "body": "If I run the query :\n\n```\n{\n \"size\": 14,\n \"_source\": \"context.data.eventTime\",\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"range\": {\n \"context.data.eventTime\": {\n \"gte\": \"2014-11-12T14:54:59.000Z\",\n \"lte\": \"2014-11-12T14:55:00.000Z\"\n }\n }\n }\n ]\n }\n }\n }\n }\n}\n```\n\nI get :\n\n```\n{\n \"took\": 0,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 1,\n \"successful\": 1,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 3,\n \"max_score\": 1,\n \"hits\": [\n {\n \"_index\": \"test\",\n \"_type\": \"request\",\n \"_id\": \"4a5acb28-e6e8-4dfc-b86f-80926ed82fd7\",\n \"_score\": 1,\n \"_source\": {\n \"context\": {\n \"data\": {\n \"eventTime\": \"2014-11-12T14:55:00.1458607Z\"\n }\n }\n }\n },\n {\n \"_index\": \"test\",\n \"_type\": \"request\",\n \"_id\": \"22a6d539-06a7-4dd9-860c-2ea5ed0956cf\",\n \"_score\": 1,\n \"_source\": {\n \"context\": {\n \"data\": {\n \"eventTime\": \"2014-11-12T14:55:00.5976447Z\"\n }\n }\n }\n },\n {\n \"_index\": \"test\",\n \"_type\": \"request\",\n \"_id\": \"554b25f3-36a9-4f84-b2a3-69f257cf1ac0\",\n \"_score\": 1,\n \"_source\": {\n \"context\": {\n \"data\": {\n \"eventTime\": \"2014-11-12T14:55:00.9979586Z\"\n }\n }\n }\n }\n ]\n }\n}\n```\n\neventTime has mappings :\n\n```\n\"eventTime\": {\n \"type\": \"date\",\n \"format\": \"dateOptionalTime\"\n }\n```\n\nShouldn't no records match ?\n\nInterestingly if I run the same query with the range filter using 'lt' instead of a 'lte' I get no records matched.\n", "comments": [ { "body": "Hi @aaneja \n\nClosing in favour of #8424\n", "created_at": "2014-11-17T11:30:16Z" }, { "body": "I read the related issue and the documentation for date rounding (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-date-format.html#date-math)\n\nIt appears that date rounding is to the nearest second - if this is so, and I want millisecond precision is my only option passing range values as UNIX millisecond timestamps ?\n(If I use : \n\n```\n{\n \"range\": {\n \"context.data.eventTime\": {\n \"gte\": \"1415804099000\",\n \"lte\": \"1415804100000\"\n }\n }\n}\n```\n\nI get right results - no records match)\n", "created_at": "2014-11-17T17:50:14Z" }, { "body": "Rounding only happens if you specify it, eg \"some_date/d\"\n\nWhat's happening is that the `date_format` that you're using (`dateOptionalTime`) doesn't include milliseconds. So milliseconds are being ignored both in the values that you index AND in the range clause. Try using `basic_date_time`. See http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-date-format.html#built-in\n", "created_at": "2014-11-17T18:00:33Z" }, { "body": "I think `dateOptionalTime` does include milliseconds on indexing. As you see from above one of the records has timestamp : \"2014-11-12T14:55:00.1458607Z\" which in UNIX milliseconds is 1415804100145.\n\nI ran a query with \n\n```\n{\n \"range\": {\n \"context.data.eventTime\": {\n \"gte\": \"1415804100145\",\n \"lte\": \"1415804100145\"\n }\n }\n}\n```\n\nand this record matched. If I try the same range filter with 1415804100146 - 1415804100149 no records match \n", "created_at": "2014-11-17T18:32:21Z" }, { "body": "Hmm you may be right - I'll have to take another look at this one tomorrow\n", "created_at": "2014-11-17T18:34:42Z" }, { "body": "Ping. 
Any updates on what the expected behavior of `dateOptionalTime` should be ?\n", "created_at": "2014-11-19T17:50:45Z" }, { "body": "Hi @aaneja \n\nSo you're correct: `dateOptionalTime` does include milliseconds. But I can't replicate your findings. Your query correctly returns no results when I run it. I tried on 1.2.1, 1.3.4, and 1.4.0.\n\nDo you perhaps have another field with the same name but a different mapping? Can you provide a working recreation of this issue? What version of Elasticsearch are you running?\n", "created_at": "2014-11-24T11:51:26Z" }, { "body": "Here's a repro -\n`GET /`\nResponse -\n\n```\n{\n \"status\": 200,\n \"name\": \"Terror\",\n \"version\": {\n \"number\": \"1.3.2\",\n \"build_hash\": \"dee175dbe2f254f3f26992f5d7591939aaefd12f\",\n \"build_timestamp\": \"2014-08-13T14:29:30Z\",\n \"build_snapshot\": false,\n \"lucene_version\": \"4.9\"\n },\n \"tagline\": \"You Know, for Search\"\n}\n```\n\nThen -\n\n```\nDELETE /megacorp/tweet\n\nPUT /megacorp/_mapping/tweet\n{\n \"tweet\" : {\n \"properties\": {\n \"eventTime\": {\n \"type\": \"date\",\n \"format\": \"dateOptionalTime\"\n }\n }\n }\n}\n\nPUT /megacorp/tweet/1?pretty\n{\n \"eventTime\": 1415832900146\n}\n\nPUT /megacorp/tweet/2?pretty\n{\n \"eventTime\": 1415832900597\n}\n\n\nPOST /megacorp/tweet/_search\n{\n \"size\": 14,\n \"_source\": \"eventTime\",\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"range\": {\n \"eventTime\": {\n \"gte\": \"2014-11-12T22:54:00Z\",\n \"lte\": \"2014-11-12T22:55:00Z\"\n }\n }\n }\n ]\n }\n }\n }\n }\n}\n\n\nPOST /megacorp/tweet/_search\n{\n \"size\": 14,\n \"_source\": \"eventTime\",\n \"query\": {\n \"filtered\": {\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"range\": {\n \"eventTime\": {\n \"gte\": 1415832840000,\n \"lte\": 1415832900000\n }\n }\n }\n ]\n }\n }\n }\n }\n}\n```\n\nThe last two queries should be, IMO, equivalent.\nHowever I get incorrect results in the first one; the second one correctly matches no records\n", "created_at": "2014-12-05T00:50:34Z" }, { "body": "I've tracked this down to a bug in the date parser and am working on a fix.\n", "created_at": "2014-12-05T22:08:58Z" }, { "body": "I have a fix in #8834. Note that this is really a first step towards eliminating the two different parsing functions in the date math parser (see #8598).\n", "created_at": "2014-12-08T23:24:26Z" }, { "body": "After discussing the further, I think the best immediate fix here is to set `mapping.date.round_ceil = false`. This should fix the problem described above (essentially doing via configuration what my patch in #8834 does). In order to not introduce breaking behavior, I am going to drop #8834 entirely and work on the full removal of `mapping.date.round_ceil` for #8598.\n", "created_at": "2014-12-09T22:16:14Z" }, { "body": "I'm closing this in favor of #8598 which will be fixed for 2.0.\n", "created_at": "2014-12-11T01:18:01Z" } ], "number": 8490, "title": "Date Range Filter unclear rounding behavior" }
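For readers who prefer the Java API over the curl session above, here is a sketch of the same reproduction. Index, type, field, document values and range bounds are taken from the report; the bool wrapper is dropped in favour of a bare range filter, and the eventTime mapping (date, dateOptionalTime) is assumed to already be in place.

```
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.FilterBuilders;
import org.elasticsearch.index.query.QueryBuilders;

public class EventTimeRangeRepro {
    static void run(Client client) {
        client.prepareIndex("megacorp", "tweet", "1").setSource("eventTime", 1415832900146L).get();
        client.prepareIndex("megacorp", "tweet", "2").setSource("eventTime", 1415832900597L).get();
        client.admin().indices().prepareRefresh("megacorp").get();

        // String bounds: before the fix this matched both documents, because the exact
        // upper bound was still being rounded up to the end of its implicit unit.
        SearchResponse strings = client.prepareSearch("megacorp")
                .setQuery(QueryBuilders.filteredQuery(QueryBuilders.matchAllQuery(),
                        FilterBuilders.rangeFilter("eventTime")
                                .gte("2014-11-12T22:54:00Z")
                                .lte("2014-11-12T22:55:00Z")))
                .get();

        // Millisecond bounds: correctly matched nothing; the two forms should agree.
        SearchResponse millis = client.prepareSearch("megacorp")
                .setQuery(QueryBuilders.filteredQuery(QueryBuilders.matchAllQuery(),
                        FilterBuilders.rangeFilter("eventTime")
                                .gte(1415832840000L)
                                .lte(1415832900000L)))
                .get();

        System.out.println(strings.getHits().totalHits() + " vs " + millis.getHits().totalHits());
    }
}
```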
{ "body": "Date parsing uses a flag to indicate whether the rounding should be\ninclusive or exclusive. This change fixes the parsing to not use this\nlogic in the case of exact dates that do not have rounding syntax.\n\ncloses #8490\n", "number": 8834, "review_comments": [ { "body": "ok, but I get three test failures when I run the full suite:\n\n```\nTests with failures:\n - org.elasticsearch.search.simple.SimpleSearchTests.simpleDateRangeWithUpperInclusiveEnabledTests\n - org.elasticsearch.index.mapper.date.SimpleDateMappingTests.testDayWithoutYearFormat\n - org.elasticsearch.index.mapper.date.SimpleDateMappingTests.testHourFormat\n```\n\nalso, should we have the example from the original issue as test? \n", "created_at": "2014-12-09T18:43:32Z" }, { "body": "Yeah I just fixed those test failures. I will change this test to use the date from the original issue.\n", "created_at": "2014-12-09T18:46:43Z" } ], "title": "Dates: Fixed parsing issue with exact dates when using lte." }
{ "commits": [ { "message": "Dates: Fixed parsing issue with exact dates when using lte.\n\nDate parsing uses a flag to indicate whether the rounding should be\ninclusive or exclusive. This change fixes the parsing to not use this\nlogic in the case of exact dates that do not have rounding syntax.\n\ncloses #8490" }, { "message": "Fix tests assuming old behavior" }, { "message": "Change new test to use date from original issue." } ], "files": [ { "diff": "@@ -52,14 +52,11 @@ public long parse(String text, long now, boolean roundCeil, DateTimeZone timeZon\n mathString = text.substring(\"now\".length());\n } else {\n int index = text.indexOf(\"||\");\n- String parseString;\n if (index == -1) {\n- parseString = text;\n- mathString = \"\"; // nothing else\n- } else {\n- parseString = text.substring(0, index);\n- mathString = text.substring(index + 2);\n+ return parseStringValue(text, timeZone);\n }\n+ String parseString = text.substring(0, index);\n+ mathString = text.substring(index + 2);\n if (roundCeil) {\n time = parseRoundCeilStringValue(parseString, timeZone);\n } else {", "filename": "src/main/java/org/elasticsearch/common/joda/DateMathParser.java", "status": "modified" }, { "diff": "@@ -57,6 +57,11 @@ public void testBasicDates() {\n assertDateMathEquals(\"2014-05-30T20:21:35.123\", \"2014-05-30T20:21:35.123\");\n }\n \n+ public void testRoundingDoesNotAffectExactDate() {\n+ assertDateMathEquals(\"2014-11-12T22:55:00Z\", \"2014-11-12T22:55:00Z\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-12T22:55:00Z\", \"2014-11-12T22:55:00Z\", 0, false, null);\n+ }\n+ \n public void testBasicMath() {\n assertDateMathEquals(\"2014-11-18||+y\", \"2015-11-18\");\n assertDateMathEquals(\"2014-11-18||-2y\", \"2012-11-18\");", "filename": "src/test/java/org/elasticsearch/common/joda/DateMathParserTests.java", "status": "modified" }, { "diff": "@@ -232,7 +232,7 @@ public void testHourFormat() throws Exception {\n }\n assertThat(filter, instanceOf(NumericRangeFilter.class));\n NumericRangeFilter<Long> rangeFilter = (NumericRangeFilter<Long>) filter;\n- assertThat(rangeFilter.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(11).millis() + 999).getMillis())); // +999 to include the 00-01 minute\n+ assertThat(rangeFilter.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(11).millis()).getMillis()));\n assertThat(rangeFilter.getMin(), equalTo(new DateTime(TimeValue.timeValueHours(10).millis()).getMillis()));\n }\n \n@@ -262,7 +262,7 @@ public void testDayWithoutYearFormat() throws Exception {\n }\n assertThat(filter, instanceOf(NumericRangeFilter.class));\n NumericRangeFilter<Long> rangeFilter = (NumericRangeFilter<Long>) filter;\n- assertThat(rangeFilter.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(35).millis() + 999).getMillis())); // +999 to include the 00-01 minute\n+ assertThat(rangeFilter.getMax(), equalTo(new DateTime(TimeValue.timeValueHours(35).millis()).getMillis()));\n assertThat(rangeFilter.getMin(), equalTo(new DateTime(TimeValue.timeValueHours(34).millis()).getMillis()));\n }\n ", "filename": "src/test/java/org/elasticsearch/index/mapper/date/SimpleDateMappingTests.java", "status": "modified" }, { "diff": "@@ -129,10 +129,9 @@ public void simpleIdTests() {\n public void simpleDateRangeWithUpperInclusiveEnabledTests() throws Exception {\n createIndex(\"test\");\n client().prepareIndex(\"test\", \"type1\", \"1\").setSource(\"field\", \"2010-01-05T02:00\").execute().actionGet();\n- client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", 
\"2010-01-06T02:00\").execute().actionGet();\n+ client().prepareIndex(\"test\", \"type1\", \"2\").setSource(\"field\", \"2010-01-06T00:00\").execute().actionGet();\n refresh();\n \n- // test include upper on ranges to include the full day on the upper bound\n SearchResponse searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-05\").lte(\"2010-01-06\")).execute().actionGet();\n assertHitCount(searchResponse, 2l);\n searchResponse = client().prepareSearch(\"test\").setQuery(QueryBuilders.rangeQuery(\"field\").gte(\"2010-01-05\").lt(\"2010-01-06\")).execute().actionGet();", "filename": "src/test/java/org/elasticsearch/search/simple/SimpleSearchTests.java", "status": "modified" } ] }
{ "body": "It looks that the top level boosting based on the indices (_indices_boost_) requires that we specify the exact index name. We cannot substitue an alias pointing to a given index. This is very inconvenient since it defeats the purpose of having aliases to hide internal organization of the indices.\n", "comments": [ { "body": "+1 .. \n", "created_at": "2014-04-07T18:05:43Z" }, { "body": "I have the same problem\n", "created_at": "2014-09-18T09:38:40Z" }, { "body": "+1\n", "created_at": "2015-02-06T17:10:46Z" }, { "body": "+1, has this been addressed yet? i must be able to use the alias.\n", "created_at": "2015-08-18T17:39:11Z" }, { "body": "+1\n", "created_at": "2015-08-28T08:04:23Z" }, { "body": "Has this been addressed? Please don't tell me this has not been fixed. Just bumped into it in production.\n", "created_at": "2015-10-21T20:14:50Z" }, { "body": "+1 as we have just hit this surprising behavior in production as well.\n", "created_at": "2015-10-23T19:20:46Z" }, { "body": "Things to consider:\n- should wildcard expansion be supported as well? (probably yes)\n- what if an index belongs to more than one alias in indices_boost parameter? For example, `\"indices_boost\": {\"alias1\": 3,\"alias2\": 2}` and both aliases include `index1`. (sum boost? max? min?)\n", "created_at": "2015-11-12T07:24:49Z" }, { "body": "It seems the option to add indices_boost with the java client is not available. Is this true? I am using spring-data-elasticsearch and I cant seem to find a way to add indices_boots to my query. Thank you.\n", "created_at": "2015-11-26T03:11:35Z" }, { "body": "+1\n", "created_at": "2016-01-13T15:06:59Z" }, { "body": "I would like support for wildcard expansion as well.\n\n:+1: \n", "created_at": "2016-01-14T12:48:55Z" }, { "body": "+1\n", "created_at": "2016-04-26T11:45:32Z" }, { "body": "Any updates on these? @clintongormley ?\n", "created_at": "2016-05-03T15:01:56Z" }, { "body": "Calling all devs. @clintongormley @kimchy Is this for real?\n", "created_at": "2016-05-04T14:44:37Z" }, { "body": "+1\n", "created_at": "2016-10-19T15:55:47Z" }, { "body": "This change stalled because of the problem of figuring out what to do when an index is boosted more than once via an alias or wildcard. From https://github.com/elastic/elasticsearch/pull/8811#issuecomment-258675219 : \n\n@masaruh just reread this thread and i think the correct answer is here:\n\n> Make indices_boost take list of index name and boost pair. We may need to do this if we want to have full control. But I somewhat hesitate to do this because it's breaking change.\n\nThen the logic would be that we use the boost from the first time we see the index in the list, so eg:\n\n```\n[ \n { \"foo\" : 2 }, # alias foo points to bar & baz\n { \"bar\": 1.5 }, # this boost is ignored because we've already seen bar\n { \"*\": 1.2 } # bar and baz are ignored because already seen, but index xyz gets this boost\n]\n```\n\nThis could be implemented in a bwc way. In fact, the old syntax doesn't need to be removed. We could just add this new syntax as an expert way of controlling boosts.\n\nWhat do you think?\n", "created_at": "2016-11-06T11:37:57Z" }, { "body": "Ooh, true. that should work! Thanks @clintongormley.\nI'll see what I can do.\n", "created_at": "2016-11-07T05:00:47Z" } ], "number": 4756, "title": "indices_boost ignore aliasing" }
{ "body": "Allow specifying alias or wildcard in \"indices_boost\".\n\nCloses #4756\n", "number": 8811, "review_comments": [ { "body": "This should start at 0? We shouldn't be allowing negative boosts..\n", "created_at": "2014-12-22T17:47:13Z" }, { "body": "Don't we allow negative boost? Queries accept negative `boost` value, don't they?\n(That's said, this part will be gone if we choose to make it to prefer the last seen boost as @clintongormley suggests)\n", "created_at": "2014-12-24T05:30:48Z" }, { "body": "Isn't the first argument to `addAlias()` the alias name, and then the rest of the arguments are the indices it points to?\n", "created_at": "2014-12-29T18:33:37Z" }, { "body": "I don't understand why dfs is needed to test index boost resolution? tfidf should not come into play at all in this test; you can test with static scores for documents.\n", "created_at": "2014-12-29T19:42:01Z" }, { "body": "Hmm, looks like it's not consistent.\nIndicesAliasesRequestBuilder's `addAlias()` takes index name then alias while IndicesAliasesRequest's `addAlias()` takes alias name then index name.\n", "created_at": "2015-01-05T07:36:47Z" }, { "body": "Right. I did it because existing test did that (I didn't dare to fix it).\nI think I'll fix it.\n", "created_at": "2015-01-05T07:36:52Z" } ], "title": "Parse index names in \"indices_boost\"" }
{ "commits": [ { "message": "Parse index names in \"indices_boost\"" }, { "message": "Add document for indices boost" }, { "message": "Use highest boost value if multiple boost values are found" }, { "message": "Revert \"Use highest boost value if multiple boost values are found\"\n\nThis reverts commit 416710d1d4f8184acbb122674c54e964b445e054." }, { "message": "Use last seen boost entry if multiple entries match an index" } ], "files": [ { "diff": "@@ -6,12 +6,19 @@ across more than one indices. This is very handy when hits coming from\n one index matter more than hits coming from another index (think social\n graph where each user has an index).\n \n+Index name, alias or wildcard expression can be specified.\n+\n [source,js]\n --------------------------------------------------\n {\n \"indices_boost\" : {\n \"index1\" : 1.4,\n- \"index2\" : 1.3\n+ \"alias1\" : 1.3,\n+ \"other_index*\" : 1.2\n }\n }\n --------------------------------------------------\n+\n+If multiple boosts resolve to the same index, the last boost value is used.\n+(For example, if alias1 includes index1 and other_index1, boost value 1.3 is\n+used for index1 and 1.2 is used for other_index1)", "filename": "docs/reference/search/request/index-boost.asciidoc", "status": "modified" }, { "diff": "@@ -176,7 +176,7 @@ protected ShardValidateQueryResponse shardOperation(ShardValidateQueryRequest re\n \n DefaultSearchContext searchContext = new DefaultSearchContext(0,\n new ShardSearchLocalRequest(request.types(), request.nowInMillis(), request.filteringAliases()),\n- null, indexShard.acquireSearcher(\"validate_query\"), indexService, indexShard,\n+ null, indexShard.acquireSearcher(\"validate_query\"), indexService, indexShard, clusterService,\n scriptService, pageCacheRecycler, bigArrays, threadPool.estimatedTimeInMillisCounter()\n );\n SearchContext.setCurrent(searchContext);", "filename": "src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java", "status": "modified" }, { "diff": "@@ -167,7 +167,7 @@ protected ShardCountResponse shardOperation(ShardCountRequest request) throws El\n SearchShardTarget shardTarget = new SearchShardTarget(clusterService.localNode().id(), request.shardId().getIndex(), request.shardId().id());\n SearchContext context = new DefaultSearchContext(0,\n new ShardSearchLocalRequest(request.types(), request.nowInMillis(), request.filteringAliases()),\n- shardTarget, indexShard.acquireSearcher(\"count\"), indexService, indexShard,\n+ shardTarget, indexShard.acquireSearcher(\"count\"), indexService, indexShard, clusterService,\n scriptService, pageCacheRecycler, bigArrays, threadPool.estimatedTimeInMillisCounter());\n SearchContext.setCurrent(context);\n ", "filename": "src/main/java/org/elasticsearch/action/count/TransportCountAction.java", "status": "modified" }, { "diff": "@@ -106,7 +106,7 @@ protected PrimaryResponse<ShardDeleteByQueryResponse, ShardDeleteByQueryRequest>\n IndexShard indexShard = indexService.shardSafe(shardRequest.shardId.id());\n \n SearchContext.setCurrent(new DefaultSearchContext(0, new ShardSearchLocalRequest(request.types(), request.nowInMillis()), null,\n- indexShard.acquireSearcher(DELETE_BY_QUERY_API), indexService, indexShard, scriptService,\n+ indexShard.acquireSearcher(DELETE_BY_QUERY_API), indexService, indexShard, clusterService, scriptService,\n pageCacheRecycler, bigArrays, threadPool.estimatedTimeInMillisCounter()));\n try {\n Engine.DeleteByQuery deleteByQuery = indexShard.prepareDeleteByQuery(request.source(), 
request.filteringAliases(), Engine.Operation.Origin.PRIMARY, request.types());\n@@ -128,7 +128,7 @@ protected void shardOperationOnReplica(ReplicaOperationRequest shardRequest) {\n IndexShard indexShard = indexService.shardSafe(shardRequest.shardId.id());\n \n SearchContext.setCurrent(new DefaultSearchContext(0, new ShardSearchLocalRequest(request.types(), request.nowInMillis()), null,\n- indexShard.acquireSearcher(DELETE_BY_QUERY_API, true), indexService, indexShard, scriptService,\n+ indexShard.acquireSearcher(DELETE_BY_QUERY_API, true), indexService, indexShard, clusterService, scriptService,\n pageCacheRecycler, bigArrays, threadPool.estimatedTimeInMillisCounter()));\n try {\n Engine.DeleteByQuery deleteByQuery = indexShard.prepareDeleteByQuery(request.source(), request.filteringAliases(), Engine.Operation.Origin.REPLICA, request.types());", "filename": "src/main/java/org/elasticsearch/action/deletebyquery/TransportShardDeleteByQueryAction.java", "status": "modified" }, { "diff": "@@ -168,7 +168,7 @@ protected ShardExistsResponse shardOperation(ShardExistsRequest request) throws\n SearchShardTarget shardTarget = new SearchShardTarget(clusterService.localNode().id(), request.shardId().getIndex(), request.shardId().id());\n SearchContext context = new DefaultSearchContext(0,\n new ShardSearchLocalRequest(request.types(), request.nowInMillis(), request.filteringAliases()),\n- shardTarget, indexShard.acquireSearcher(\"exists\"), indexService, indexShard,\n+ shardTarget, indexShard.acquireSearcher(\"exists\"), indexService, indexShard, clusterService,\n scriptService, pageCacheRecycler, bigArrays, threadPool.estimatedTimeInMillisCounter());\n SearchContext.setCurrent(context);\n ", "filename": "src/main/java/org/elasticsearch/action/exists/TransportExistsAction.java", "status": "modified" }, { "diff": "@@ -114,7 +114,7 @@ protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId\n \n SearchContext context = new DefaultSearchContext(\n 0, new ShardSearchLocalRequest(new String[]{request.type()}, request.nowInMillis, request.filteringAlias()),\n- null, result.searcher(), indexService, indexShard,\n+ null, result.searcher(), indexService, indexShard, clusterService,\n scriptService, pageCacheRecycler,\n bigArrays, threadPool.estimatedTimeInMillisCounter()\n );", "filename": "src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.elasticsearch.action.percolate.PercolateShardRequest;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.text.StringText;\n import org.elasticsearch.common.util.BigArrays;\n@@ -88,6 +89,7 @@ public class PercolateContext extends SearchContext {\n private final PageCacheRecycler pageCacheRecycler;\n private final BigArrays bigArrays;\n private final ScriptService scriptService;\n+ private final ClusterService clusterService;\n private final ConcurrentMap<BytesRef, Query> percolateQueries;\n private final int numberOfShards;\n private String[] types;\n@@ -109,7 +111,7 @@ public class PercolateContext extends SearchContext {\n \n public PercolateContext(PercolateShardRequest request, SearchShardTarget searchShardTarget, IndexShard indexShard,\n IndexService indexService, PageCacheRecycler pageCacheRecycler,\n- BigArrays bigArrays, ScriptService scriptService) {\n+ 
BigArrays bigArrays, ScriptService scriptService, ClusterService clusterService) {\n this.indexShard = indexShard;\n this.indexService = indexService;\n this.fieldDataService = indexService.fieldData();\n@@ -122,6 +124,7 @@ public PercolateContext(PercolateShardRequest request, SearchShardTarget searchS\n this.engineSearcher = indexShard.acquireSearcher(\"percolate\");\n this.searcher = new ContextIndexSearcher(this, engineSearcher);\n this.scriptService = scriptService;\n+ this.clusterService = clusterService;\n this.numberOfShards = request.getNumberOfShards();\n }\n \n@@ -424,6 +427,11 @@ public ScriptService scriptService() {\n return scriptService;\n }\n \n+ @Override\n+ public ClusterService clusterService() {\n+ return clusterService;\n+ }\n+\n @Override\n public PageCacheRecycler pageCacheRecycler() {\n return pageCacheRecycler;", "filename": "src/main/java/org/elasticsearch/percolator/PercolateContext.java", "status": "modified" }, { "diff": "@@ -179,7 +179,7 @@ public PercolateShardResponse percolate(PercolateShardRequest request) {\n \n SearchShardTarget searchShardTarget = new SearchShardTarget(clusterService.localNode().id(), request.shardId().getIndex(), request.shardId().id());\n final PercolateContext context = new PercolateContext(\n- request, searchShardTarget, indexShard, percolateIndexService, pageCacheRecycler, bigArrays, scriptService\n+ request, searchShardTarget, indexShard, percolateIndexService, pageCacheRecycler, bigArrays, scriptService, clusterService\n );\n try {\n ParsedDocument parsedDocument = parseRequest(percolateIndexService, request, context);", "filename": "src/main/java/org/elasticsearch/percolator/PercolatorService.java", "status": "modified" }, { "diff": "@@ -537,7 +537,7 @@ final SearchContext createContext(ShardSearchRequest request, @Nullable Engine.S\n SearchShardTarget shardTarget = new SearchShardTarget(clusterService.localNode().id(), request.index(), request.shardId());\n \n Engine.Searcher engineSearcher = searcher == null ? indexShard.acquireSearcher(\"search\") : searcher;\n- SearchContext context = new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget, engineSearcher, indexService, indexShard, scriptService, pageCacheRecycler, bigArrays, threadPool.estimatedTimeInMillisCounter());\n+ SearchContext context = new DefaultSearchContext(idGenerator.incrementAndGet(), request, shardTarget, engineSearcher, indexService, indexShard, clusterService, scriptService, pageCacheRecycler, bigArrays, threadPool.estimatedTimeInMillisCounter());\n SearchContext.setCurrent(context);\n try {\n context.scroll(request.scroll());", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -19,7 +19,6 @@\n \n package org.elasticsearch.search.builder;\n \n-import com.carrotsearch.hppc.ObjectFloatOpenHashMap;\n import com.google.common.base.Charsets;\n import com.google.common.collect.ImmutableList;\n import com.google.common.collect.Lists;\n@@ -49,10 +48,7 @@\n import org.elasticsearch.search.suggest.SuggestBuilder;\n \n import java.io.IOException;\n-import java.util.ArrayList;\n-import java.util.Iterator;\n-import java.util.List;\n-import java.util.Map;\n+import java.util.*;\n \n /**\n * A search source builder allowing to easily build search source. 
Simple construction\n@@ -119,7 +115,7 @@ public static HighlightBuilder highlight() {\n private List<RescoreBuilder> rescoreBuilders;\n private Integer defaultRescoreWindowSize;\n \n- private ObjectFloatOpenHashMap<String> indexBoost = null;\n+ private Map<String, Float> indexBoost = null;\n \n private String[] stats;\n \n@@ -609,7 +605,7 @@ public SearchSourceBuilder scriptField(String name, String lang, String script,\n */\n public SearchSourceBuilder indexBoost(String index, float indexBoost) {\n if (this.indexBoost == null) {\n- this.indexBoost = new ObjectFloatOpenHashMap<>();\n+ this.indexBoost = new LinkedHashMap<>();\n }\n this.indexBoost.put(index, indexBoost);\n return this;\n@@ -775,13 +771,8 @@ public void innerToXContent(XContentBuilder builder, Params params) throws IOExc\n \n if (indexBoost != null) {\n builder.startObject(\"indices_boost\");\n- final boolean[] states = indexBoost.allocated;\n- final Object[] keys = indexBoost.keys;\n- final float[] values = indexBoost.values;\n- for (int i = 0; i < states.length; i++) {\n- if (states[i]) {\n- builder.field((String) keys[i], values[i]);\n- }\n+ for (Map.Entry<String, Float> entry : indexBoost.entrySet()) {\n+ builder.field(entry.getKey(), entry.getValue());\n }\n builder.endObject();\n }", "filename": "src/main/java/org/elasticsearch/search/builder/SearchSourceBuilder.java", "status": "modified" }, { "diff": "@@ -31,6 +31,7 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lease.Releasables;\n import org.elasticsearch.common.lucene.search.AndFilter;\n@@ -100,6 +101,8 @@ public class DefaultSearchContext extends SearchContext {\n \n private final IndexService indexService;\n \n+ private final ClusterService clusterService;\n+\n private final ContextIndexSearcher searcher;\n \n private final DfsSearchResult dfsResult;\n@@ -178,7 +181,7 @@ public class DefaultSearchContext extends SearchContext {\n private InnerHitsContext innerHitsContext;\n \n public DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarget shardTarget,\n- Engine.Searcher engineSearcher, IndexService indexService, IndexShard indexShard,\n+ Engine.Searcher engineSearcher, IndexService indexService, IndexShard indexShard, ClusterService clusterService,\n ScriptService scriptService, PageCacheRecycler pageCacheRecycler,\n BigArrays bigArrays, Counter timeEstimateCounter) {\n this.id = id;\n@@ -195,6 +198,7 @@ public DefaultSearchContext(long id, ShardSearchRequest request, SearchShardTarg\n this.fetchResult = new FetchSearchResult(id, shardTarget);\n this.indexShard = indexShard;\n this.indexService = indexService;\n+ this.clusterService = clusterService;\n \n this.searcher = new ContextIndexSearcher(this, engineSearcher);\n \n@@ -428,6 +432,10 @@ public ScriptService scriptService() {\n return scriptService;\n }\n \n+ public ClusterService clusterService() {\n+ return clusterService;\n+ }\n+\n public PageCacheRecycler pageCacheRecycler() {\n return pageCacheRecycler;\n }", "filename": "src/main/java/org/elasticsearch/search/internal/DefaultSearchContext.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import 
org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n@@ -278,6 +279,11 @@ public ScriptService scriptService() {\n return in.scriptService();\n }\n \n+ @Override\n+ public ClusterService clusterService() {\n+ return in.clusterService();\n+ }\n+\n @Override\n public PageCacheRecycler pageCacheRecycler() {\n return in.pageCacheRecycler();", "filename": "src/main/java/org/elasticsearch/search/internal/FilteredSearchContext.java", "status": "modified" }, { "diff": "@@ -28,6 +28,7 @@\n import org.apache.lucene.util.Counter;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.lease.Releasable;\n import org.elasticsearch.common.lease.Releasables;\n@@ -207,6 +208,8 @@ public final boolean nowInMillisUsed() {\n \n public abstract ScriptService scriptService();\n \n+ public abstract ClusterService clusterService();\n+\n public abstract PageCacheRecycler pageCacheRecycler();\n \n public abstract BigArrays bigArrays();", "filename": "src/main/java/org/elasticsearch/search/internal/SearchContext.java", "status": "modified" }, { "diff": "@@ -19,6 +19,9 @@\n \n package org.elasticsearch.search.query;\n \n+import org.elasticsearch.action.support.IndicesOptions;\n+import org.elasticsearch.cluster.ClusterService;\n+import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.search.SearchParseElement;\n import org.elasticsearch.search.internal.SearchContext;\n@@ -40,15 +43,28 @@ public class IndicesBoostParseElement implements SearchParseElement {\n @Override\n public void parse(XContentParser parser, SearchContext context) throws Exception {\n XContentParser.Token token;\n+ ClusterService clusterService = context.clusterService();\n+\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n String indexName = parser.currentName();\n- if (indexName.equals(context.shardTarget().index())) {\n+ if (matchesIndex(clusterService, context.shardTarget().index(), indexName)) {\n parser.nextToken(); // move to the value\n // we found our query boost\n context.queryBoost(parser.floatValue());\n }\n }\n }\n }\n+\n+ protected boolean matchesIndex(ClusterService clusterService, String currentIndex, String boostTargetIndex) {\n+ final String[] concreteIndices = clusterService.state().metaData().concreteIndices(IndicesOptions.lenientExpandOpen(), boostTargetIndex);\n+ for (String index : concreteIndices) {\n+ if (Regex.simpleMatch(index, currentIndex)) {\n+ return true;\n+ }\n+ }\n+ return false;\n+ }\n+\n }", "filename": "src/main/java/org/elasticsearch/search/query/IndicesBoostParseElement.java", "status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource;\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n@@ -39,15 +38,6 @@ public class SimpleIndicesBoostSearchTests extends ElasticsearchIntegrationTest\n \n @Test\n public 
void testIndicesBoost() throws Exception {\n- assertHitCount(client().prepareSearch().setQuery(termQuery(\"test\", \"value\")).get(), 0);\n-\n- try {\n- client().prepareSearch(\"test\").setQuery(termQuery(\"test\", \"value\")).execute().actionGet();\n- fail(\"should fail\");\n- } catch (Exception e) {\n- // ignore, no indices\n- }\n-\n createIndex(\"test1\", \"test2\");\n ensureGreen();\n client().index(indexRequest(\"test1\").type(\"type1\").id(\"1\")\n@@ -110,4 +100,207 @@ public void testIndicesBoost() throws Exception {\n assertThat(response.getHits().getAt(0).index(), equalTo(\"test2\"));\n assertThat(response.getHits().getAt(1).index(), equalTo(\"test1\"));\n }\n+\n+ @Test\n+ public void testIndicesBoostWithAlias() throws Exception {\n+ createIndex(\"alias_test1\", \"alias_test2\");\n+ ensureGreen();\n+ client().admin().indices().prepareAliases().addAlias(\"alias_test1\", \"alias1\").get();\n+ client().admin().indices().prepareAliases().addAlias(\"alias_test2\", \"alias2\").get();\n+\n+ client().index(indexRequest(\"alias_test1\").type(\"type1\").id(\"1\")\n+ .source(jsonBuilder().startObject().field(\"test\", \"value check\").endObject())).actionGet();\n+ client().index(indexRequest(\"alias_test2\").type(\"type1\").id(\"1\")\n+ .source(jsonBuilder().startObject().field(\"test\", \"value beck\").endObject())).actionGet();\n+ refresh();\n+\n+ float indexBoost = 1.1f;\n+\n+ logger.info(\"--- QUERY_THEN_FETCH\");\n+\n+ logger.info(\"Query with alias_test1 boosted\");\n+ SearchResponse response = client().search(searchRequest()\n+ .searchType(SearchType.QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"alias1\", indexBoost).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"alias_test1\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"alias_test2\"));\n+\n+ logger.info(\"Query with alias_test2 boosted\");\n+ response = client().search(searchRequest()\n+ .searchType(SearchType.QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"alias2\", indexBoost).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"alias_test2\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"alias_test1\"));\n+\n+ logger.info(\"--- DFS_QUERY_THEN_FETCH\");\n+\n+ logger.info(\"Query with alias_test1 boosted\");\n+ response = client().search(searchRequest()\n+ .searchType(SearchType.DFS_QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"alias1\", indexBoost).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), 
response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"alias_test1\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"alias_test2\"));\n+\n+ logger.info(\"Query with alias_test2 boosted\");\n+ response = client().search(searchRequest()\n+ .searchType(SearchType.DFS_QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"alias2\", indexBoost).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"alias_test2\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"alias_test1\"));\n+ }\n+\n+ @Test\n+ public void testIndicesBoostWithWildcard() throws Exception {\n+ createIndex(\"wildcard_test1\", \"wildcard_test2\");\n+ ensureGreen();\n+\n+ client().index(indexRequest(\"wildcard_test1\").type(\"type1\").id(\"1\")\n+ .source(jsonBuilder().startObject().field(\"test\", \"value check\").endObject())).actionGet();\n+ client().index(indexRequest(\"wildcard_test2\").type(\"type1\").id(\"1\")\n+ .source(jsonBuilder().startObject().field(\"test\", \"value beck\").endObject())).actionGet();\n+ refresh();\n+\n+ float indexBoost = 1.1f;\n+\n+ logger.info(\"--- QUERY_THEN_FETCH\");\n+\n+ logger.info(\"Query with wildcard_test1 boosted\");\n+ SearchResponse response = client().search(searchRequest()\n+ .searchType(SearchType.QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"*test1\", indexBoost).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"wildcard_test1\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"wildcard_test2\"));\n+\n+ logger.info(\"Query with wildcard_test2 boosted\");\n+ response = client().search(searchRequest()\n+ .searchType(SearchType.QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"*test2\", indexBoost).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"wildcard_test2\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"wildcard_test1\"));\n+\n+ logger.info(\"--- DFS_QUERY_THEN_FETCH\");\n+\n+ logger.info(\"Query with test1 boosted\");\n+ response = client().search(searchRequest()\n+ .searchType(SearchType.DFS_QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"*test1\", indexBoost).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), 
response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"wildcard_test1\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"wildcard_test2\"));\n+\n+ logger.info(\"Query with wildcard_test2 boosted\");\n+ response = client().search(searchRequest()\n+ .searchType(SearchType.DFS_QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"*test2\", indexBoost).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"wildcard_test2\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"wildcard_test1\"));\n+ }\n+\n+ @Test\n+ public void testMultipleIndicesBoost() throws Exception {\n+ createIndex(\"multi_test1\", \"multi_test2\");\n+ ensureGreen();\n+ client().admin().indices().prepareAliases().addAlias(\"multi_test1\", \"alias1\").get();\n+\n+ client().index(indexRequest(\"multi_test1\").type(\"type1\").id(\"1\")\n+ .source(jsonBuilder().startObject().field(\"test\", \"value check\").endObject())).actionGet();\n+ client().index(indexRequest(\"multi_test2\").type(\"type1\").id(\"1\")\n+ .source(jsonBuilder().startObject().field(\"test\", \"value beck\").endObject())).actionGet();\n+ refresh();\n+\n+ // Higher boost value is used.\n+ float indexBoost1 = 1.1f;\n+ float indexBoost2 = 0.9f;\n+\n+ logger.info(\"--- QUERY_THEN_FETCH\");\n+\n+ logger.info(\"Query with multi_test1 de-boosted\");\n+ SearchResponse response = client().search(searchRequest()\n+ .searchType(SearchType.QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"alias1\", indexBoost1).indexBoost(\"multi_test1\", indexBoost2).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"multi_test2\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"multi_test1\"));\n+\n+ logger.info(\"Query with multi_test1 boosted\");\n+ response = client().search(searchRequest()\n+ .searchType(SearchType.QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"alias1\", indexBoost2).indexBoost(\"multi_test1\", indexBoost1).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"multi_test1\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"multi_test2\"));\n+\n+ logger.info(\"--- DFS_QUERY_THEN_FETCH\");\n+\n+ logger.info(\"Query with multi_test1 de-boosted\");\n+ response = 
client().search(searchRequest()\n+ .searchType(SearchType.DFS_QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"alias1\", indexBoost1).indexBoost(\"multi_test1\", indexBoost2).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"multi_test2\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"multi_test1\"));\n+\n+ logger.info(\"Query with multi_test1 boosted\");\n+ response = client().search(searchRequest()\n+ .searchType(SearchType.DFS_QUERY_THEN_FETCH)\n+ .source(searchSource().explain(true).indexBoost(\"alias1\", indexBoost2).indexBoost(\"multi_test1\", indexBoost1).query(termQuery(\"test\", \"value\")))\n+ ).actionGet();\n+\n+ assertThat(response.getHits().totalHits(), equalTo(2l));\n+ logger.info(\"Hit[0] {} Explanation {}\", response.getHits().getAt(0).index(), response.getHits().getAt(0).explanation());\n+ logger.info(\"Hit[1] {} Explanation {}\", response.getHits().getAt(1).index(), response.getHits().getAt(1).explanation());\n+ assertThat(response.getHits().getAt(0).index(), equalTo(\"multi_test1\"));\n+ assertThat(response.getHits().getAt(1).index(), equalTo(\"multi_test2\"));\n+ }\n }", "filename": "src/test/java/org/elasticsearch/search/indicesboost/SimpleIndicesBoostSearchTests.java", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.search.SearchType;\n import org.elasticsearch.cache.recycler.PageCacheRecycler;\n+import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.index.analysis.AnalysisService;\n import org.elasticsearch.index.cache.bitset.BitsetFilterCache;\n@@ -300,6 +301,11 @@ public ScriptService scriptService() {\n return null;\n }\n \n+ @Override\n+ public ClusterService clusterService() {\n+ return null;\n+ }\n+\n @Override\n public PageCacheRecycler pageCacheRecycler() {\n return pageCacheRecycler;", "filename": "src/test/java/org/elasticsearch/test/TestSearchContext.java", "status": "modified" } ] }
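Tying the tests above back to the review question about an index matching more than one boost entry: the last matching entry wins. A sketch mirroring the testMultipleIndicesBoost case, where alias1 points at multi_test1.

```
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;

import static org.elasticsearch.client.Requests.searchRequest;
import static org.elasticsearch.index.query.QueryBuilders.termQuery;
import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource;

public class LastBoostWinsSketch {
    static SearchResponse run(Client client) {
        // multi_test1 matches both entries; the later, concrete-index entry (1.1f)
        // overrides the alias entry (0.9f), so multi_test1 is expected to rank first.
        return client.search(searchRequest()
                .source(searchSource()
                        .indexBoost("alias1", 0.9f)
                        .indexBoost("multi_test1", 1.1f)
                        .query(termQuery("test", "value"))))
                .actionGet();
    }
}
```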
{ "body": "The problem is due to the use of `XContentHelper#writeRawField` in [`GetFieldMappingsResponse`](https://github.com/elasticsearch/elasticsearch/blob/master/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java#L118) which is great when dealing with the `_source` field as we never want to prettify it, but in this case the output should be prettified if requested.\n\nSteps to reproduce:\n\n```\ncurl -XPUT localhost:9200/index/type/1 -d '{\n \"foo\":\"bar\"\n }\n '\n\ncurl localhost:9200/_mapping/field/foo?pretty\n\n{\n \"index\" : {\n \"mappings\" : {\n \"type\" : {\n \"foo\" : {\n \"full_name\" : \"foo\",\n \"mapping\":{\"foo\":{\"type\":\"string\"}}\n }\n }\n }\n }\n}\n```\n", "comments": [], "number": 6552, "title": "Get field mapping api doesn't honor pretty flag" }
{ "body": "Use builder.field method instead of XContentHelper#writeRawField\n\nCloses #6552\n", "number": 8806, "review_comments": [ { "body": "This seems fragile, since any slight change in the format/structure would cause the test to break. Could we just check the output contains some newlines (which non pretty results would not)?\n", "created_at": "2014-12-08T00:32:01Z" }, { "body": "can we only do this if pretty is on? the writeRawField is a good optimization for when the requested information is already in the right format (we can write the bytes directly without re-parsing and serializing them). \n", "created_at": "2014-12-08T08:02:56Z" }, { "body": "This comment doesn't add any value\n", "created_at": "2014-12-08T15:40:25Z" }, { "body": "I would really just check for newlines.\n", "created_at": "2014-12-08T15:40:43Z" }, { "body": "this can be simplified to `response.toXContent(builder,request);`\n", "created_at": "2014-12-08T16:15:38Z" }, { "body": "I think checking for newline is better than relying on pretty printing having space between key/object...\n", "created_at": "2014-12-09T07:43:17Z" }, { "body": "@rjernst I see...\nI have no idea for good assertion.\nDo you have any other idea?\n\nOutput without pretty\n\n```\n{\n \"index\" : {\n \"mappings\" : {\n \"type\" : {\n \"obj.subfield\" : {\n \"full_name\" : \"obj.subfield\",\n \"mapping\":{\"subfield\":{\"type\":\"string\",\"index\":\"not_analyzed\"}}\n },\n \"field1\" : {\n \"full_name\" : \"field1\",\n \"mapping\":{\"field1\":{\"type\":\"string\"}}\n }\n }\n }\n }\n}\n```\n\nOutput with pretty\n\n```\n{\n \"index\" : {\n \"mappings\" : {\n \"type\" : {\n \"obj.subfield\" : {\n \"full_name\" : \"obj.subfield\",\n \"mapping\" : {\n \"subfield\" : {\n \"type\" : \"string\",\n \"index\" : \"not_analyzed\"\n }\n }\n },\n \"field1\" : {\n \"full_name\" : \"field1\",\n \"mapping\" : {\n \"field1\" : {\n \"type\" : \"string\"\n }\n }\n }\n }\n }\n }\n}\n```\n", "created_at": "2014-12-09T08:54:35Z" }, { "body": "maybe just take the output you get from the API and re-feed it to a builder you know is pretty printing and make sure the output is identical to the input?\n", "created_at": "2014-12-09T09:03:37Z" }, { "body": "Thx @bleskes ! I changed the assertion and pushed. Does it make sense?\n", "created_at": "2014-12-09T10:14:39Z" }, { "body": "great. Thx.\n", "created_at": "2014-12-10T09:14:20Z" }, { "body": "Shouldn't this be equal to the `jsonBuilder().string()` above, without adding `.prettyPrint()`? And a nitpick: please add a space after the comma..\n", "created_at": "2014-12-10T16:04:33Z" } ], "title": "Get field mapping api should honour pretty flag" }
{ "commits": [ { "message": "Mappings: Fix Get field mapping api with pretty flag\n\nCloses #6552" }, { "message": "Mappings: Fix Get field mapping api with pretty flag\n\nChange assertion more simple\nUse writeRawField if pretty off\n\nCloses #6552" }, { "message": "Mappings: Fix Get field mapping api with pretty flag\n\nFix two review comments\n\nCloses #6552" }, { "message": "Mappings: Fix Get field mapping api with pretty flag\n\nFix assertion only check \":{\"\n\nCloses #6552" }, { "message": "Mappings: Fix Get field mapping api with pretty flag\n\nChange an assertion using re-parse pretty printing\n\nCloses #6552" } ], "files": [ { "diff": "@@ -115,7 +115,11 @@ public boolean isNull() {\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n builder.field(\"full_name\", fullName);\n- XContentHelper.writeRawField(\"mapping\", source, builder, params);\n+ if (params.paramAsBoolean(\"pretty\", false)) {\n+ builder.field(\"mapping\", sourceAsMap());\n+ } else {\n+ builder.rawField(\"mapping\", source);\n+ }\n return builder;\n }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java", "status": "modified" }, { "diff": "@@ -28,7 +28,6 @@\n import org.elasticsearch.common.Strings;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n-import org.elasticsearch.common.xcontent.ToXContent;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.rest.*;\n import org.elasticsearch.rest.action.support.RestBuilderListener;\n@@ -81,7 +80,7 @@ public RestResponse buildResponse(GetFieldMappingsResponse response, XContentBui\n status = NOT_FOUND;\n }\n builder.startObject();\n- response.toXContent(builder, ToXContent.EMPTY_PARAMS);\n+ response.toXContent(builder, request);\n builder.endObject();\n return new BytesRestResponse(status, builder);\n }", "filename": "src/main/java/org/elasticsearch/rest/action/admin/indices/mapping/get/RestGetFieldMappingAction.java", "status": "modified" }, { "diff": "@@ -19,8 +19,9 @@\n \n package org.elasticsearch.indices.mapping;\n \n+import com.google.common.collect.Maps;\n import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse;\n-import org.elasticsearch.common.xcontent.XContentBuilder;\n+import org.elasticsearch.common.xcontent.*;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n@@ -146,4 +147,38 @@ public void simpleGetFieldMappingsWithDefaults() throws Exception {\n \n \n }\n+\n+ //fix #6552\n+ @Test\n+ public void simpleGetFieldMappingsWithPretty() throws Exception {\n+ assertAcked(prepareCreate(\"index\").addMapping(\"type\", getMappingForType(\"type\")));\n+ Map<String, String> params = Maps.newHashMap();\n+ params.put(\"pretty\", \"true\");\n+ ensureYellow();\n+ GetFieldMappingsResponse response = client().admin().indices().prepareGetFieldMappings(\"index\").setTypes(\"type\").setFields(\"field1\", \"obj.subfield\").get();\n+ XContentBuilder responseBuilder = XContentFactory.jsonBuilder().prettyPrint();\n+ responseBuilder.startObject();\n+ response.toXContent(responseBuilder, new ToXContent.MapParams(params));\n+ responseBuilder.endObject();\n+ String responseStrings = responseBuilder.string();\n+\n+\n+ XContentBuilder prettyJsonBuilder = XContentFactory.jsonBuilder().prettyPrint();\n+ 
prettyJsonBuilder.copyCurrentStructure(XContentFactory.xContent(responseStrings).createParser(responseStrings));\n+ assertThat(responseStrings, equalTo(prettyJsonBuilder.string()));\n+\n+ params.put(\"pretty\", \"false\");\n+\n+ response = client().admin().indices().prepareGetFieldMappings(\"index\").setTypes(\"type\").setFields(\"field1\",\"obj.subfield\").get();\n+ responseBuilder = XContentFactory.jsonBuilder().prettyPrint().lfAtEnd();\n+ responseBuilder.startObject();\n+ response.toXContent(responseBuilder, new ToXContent.MapParams(params));\n+ responseBuilder.endObject();\n+ responseStrings = responseBuilder.string();\n+\n+ prettyJsonBuilder = XContentFactory.jsonBuilder().prettyPrint();\n+ prettyJsonBuilder.copyCurrentStructure(XContentFactory.xContent(responseStrings).createParser(responseStrings));\n+ assertThat(responseStrings,not(equalTo(prettyJsonBuilder.string())));\n+\n+ }\n }", "filename": "src/test/java/org/elasticsearch/indices/mapping/SimpleGetFieldMappingsTests.java", "status": "modified" } ] }
{ "body": "Hello,\n\nElasticsearch 0.90.9 and higher do not work with openvz (tested with debian wheezy i386) with the following error showing on startup:\n\nStarting ElasticSearch Server:sysctl: permission denied on key 'vm.max_map_count' \n", "comments": [ { "body": "Hey,\n\ncan you please check, if the process started anyway? It should have been. The error is just a permission problem due to your VM, but should not prevent elasticsearch from starting usually.\n", "created_at": "2014-02-03T09:47:17Z" }, { "body": "I've got the same issue, and on my VM it's not starting.\n", "created_at": "2014-02-24T13:57:18Z" }, { "body": "can you add `set -x` to the elasticsearch init script and paste the output from `/etc/init.d/elasticsearch start` here?\n", "created_at": "2014-02-24T14:01:22Z" }, { "body": "do you mean `/etc/init.d/elasticsearch start set -x`?\n", "created_at": "2014-02-24T14:13:35Z" }, { "body": "sorry for not being clear. You can do the following: Open `/etc/init.d/elasticsearch` in your favourite editor and change the current setup\n\n```\n#!/bin/sh\n#\n```\n\nto\n\n```\n#!/bin/sh\nset -x\n#\n```\n\nThen run `/etc/init.d/elasticsearch start` and copy paste all that output into this ticket. You might want to comment out or remove that line again after that, so you do not get his verbose output all the time.\n\nThanks a lot for helping!\n", "created_at": "2014-02-24T14:55:26Z" }, { "body": "- PATH=/bin:/usr/bin:/sbin:/usr/sbin\n- NAME=elasticsearch\n- DESC='ElasticSearch Server'\n- DEFAULT=/etc/default/elasticsearch\n ++ id -u\n- '[' 0 -ne 0 ']'\n- . /lib/lsb/init-functions\n ++ FANCYTTY=\n ++ '[' -e /etc/lsb-base-logging.sh ']'\n ++ . /etc/lsb-base-logging.sh\n +++ LOG_DAEMON_MSG=\n- '[' -r /etc/default/rcS ']'\n- . /etc/default/rcS\n ++ TMPTIME=0\n ++ SULOGIN=no\n ++ DELAYLOGIN=no\n ++ UTC=yes\n ++ VERBOSE=no\n ++ FSCKFIX=no\n- ES_USER=elasticsearch\n- ES_GROUP=elasticsearch\n- JDK_DIRS='/usr/lib/jvm/java-7-oracle /usr/lib/jvm/java-7-openjdk /usr/lib/jvm/java-7-openjdk-amd64/ /usr/lib/jvm/java-7-openjdk-armhf /usr/lib/jvm/java-7-openjdk-i386/ /usr/lib/jvm/java-6-sun /usr/lib/jvm/java-6-openjdk /usr/lib/jvm/java-6-openjdk-amd64 /usr/lib/jvm/java-6-openjdk-armhf /usr/lib/jvm/java-6-openjdk-i386 /usr/lib/jvm/default-java'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-7-oracle/bin/java -a -z '' ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-7-openjdk/bin/java -a -z '' ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-7-openjdk-amd64//bin/java -a -z '' ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-7-openjdk-armhf/bin/java -a -z '' ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-7-openjdk-i386//bin/java -a -z '' ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-6-sun/bin/java -a -z '' ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-6-openjdk/bin/java -a -z '' ']'\n- JAVA_HOME=/usr/lib/jvm/java-6-openjdk\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-6-openjdk-amd64/bin/java -a -z /usr/lib/jvm/java-6-openjdk ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-6-openjdk-armhf/bin/java -a -z /usr/lib/jvm/java-6-openjdk ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/java-6-openjdk-i386/bin/java -a -z /usr/lib/jvm/java-6-openjdk ']'\n- for jdir in '$JDK_DIRS'\n- '[' -r /usr/lib/jvm/default-java/bin/java -a -z /usr/lib/jvm/java-6-openjdk ']'\n- export JAVA_HOME\n- ES_HOME=/usr/share/elasticsearch\n- MAX_OPEN_FILES=65535\n- LOG_DIR=/var/log/elasticsearch\n- DATA_DIR=/var/lib/elasticsearch\n- 
WORK_DIR=/tmp/elasticsearch\n- CONF_DIR=/etc/elasticsearch\n- CONF_FILE=/etc/elasticsearch/elasticsearch.yml\n- MAX_MAP_COUNT=65535\n- '[' -f /etc/default/elasticsearch ']'\n- . /etc/default/elasticsearch\n- PID_FILE=/var/run/elasticsearch.pid\n- DAEMON=/usr/share/elasticsearch/bin/elasticsearch\n- DAEMON_OPTS='-p /var/run/elasticsearch.pid -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch'\n- export ES_HEAP_SIZE\n- export ES_HEAP_NEWSIZE\n- export ES_DIRECT_SIZE\n- export ES_JAVA_OPTS\n- test -x /usr/share/elasticsearch/bin/elasticsearch\n- case \"$1\" in\n- checkJava\n- '[' -x /usr/lib/jvm/java-6-openjdk/bin/java ']'\n- JAVA=/usr/lib/jvm/java-6-openjdk/bin/java\n- '[' '!' -x /usr/lib/jvm/java-6-openjdk/bin/java ']'\n- '[' -n '' -a -z '' ']'\n- log_daemon_msg 'Starting ElasticSearch Server'\n- '[' -z 'Starting ElasticSearch Server' ']'\n- log_use_fancy_output\n- TPUT=/usr/bin/tput\n- EXPR=/usr/bin/expr\n- '[' -t 1 ']'\n- '[' xxterm '!=' x ']'\n- '[' xxterm '!=' xdumb ']'\n- '[' -x /usr/bin/tput ']'\n- '[' -x /usr/bin/expr ']'\n- /usr/bin/tput hpa 60\n- /usr/bin/tput setaf 1\n- '[' -z ']'\n- FANCYTTY=1\n- case \"$FANCYTTY\" in\n- true\n- /usr/bin/tput xenl\n ++ /usr/bin/tput cols\n- COLS=80\n- '[' 80 ']'\n- '[' 80 -gt 6 ']'\n ++ /usr/bin/expr 80 - 7\n- COL=73\n- log_use_plymouth\n- '[' n = y ']'\n- plymouth --ping\n- printf ' \\* Starting ElasticSearch Server '\n - Starting ElasticSearch Server ++ /usr/bin/expr 80 - 1\n- /usr/bin/tput hpa 79\n + printf ' '\n ++ pidofproc -p /var/run/elasticsearch.pid elasticsearch\n ++ local pidfile line i pids= status specified pid\n ++ pidfile=\n ++ specified=\n ++ OPTIND=1\n ++ getopts p: opt\n ++ case \"$opt\" in\n ++ pidfile=/var/run/elasticsearch.pid\n ++ specified=1\n ++ getopts p: opt\n ++ shift 2\n ++ base=elasticsearch\n ++ '[' '!' 
1 ']'\n ++ '[' -n /var/run/elasticsearch.pid -a -r /var/run/elasticsearch.pid ']'\n ++ read pid\n ++ '[' -n 16987 ']'\n +++ kill -0 16987\n ++ ps 16987\n ++ return 1\n- pid=\n- '[' -n '' ']'\n- mkdir -p /var/log/elasticsearch /var/lib/elasticsearch /tmp/elasticsearch\n- chown elasticsearch:elasticsearch /var/log/elasticsearch /var/lib/elasticsearch /tmp/elasticsearch\n- touch /var/run/elasticsearch.pid\n- chown elasticsearch:elasticsearch /var/run/elasticsearch.pid\n- '[' -n 65535 ']'\n- ulimit -n 65535\n- '[' -n '' ']'\n- '[' -n 65535 ']'\n- sysctl -q -w vm.max_map_count=65535\n error: permission denied on key 'vm.max_map_count'\n- start-stop-daemon --start -b --user elasticsearch -c elasticsearch --pidfile /var/run/elasticsearch.pid --exec /usr/share/elasticsearch/bin/elasticsearch -- -p /var/run/elasticsearch.pid -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch\n- log_end_msg 0\n- '[' -z 0 ']'\n- '[' 73 ']'\n- '[' -x /usr/bin/tput ']'\n- log_use_plymouth\n- '[' n = y ']'\n- plymouth --ping\n- printf '\\r'\n- /usr/bin/tput hpa 73\n + '[' 0 -eq 0 ']'\n- echo '[ OK ]'\n [ OK ]\n- return 0\n- exit 0\n", "created_at": "2014-02-24T16:49:29Z" }, { "body": "can you run `ps p $(cat /var/run/elasticsearch.pid)` - this actually looks as if elasticsearch was started...\n", "created_at": "2014-02-24T17:21:49Z" }, { "body": "Returns just header, i've tried `service elasticsearch status` and it says `* elasticsearch is not running`.\n", "created_at": "2014-02-25T13:22:36Z" }, { "body": "Can you make sure elasticsearch does not run, and try this on the commandline (as root!):\n\n```\ntouch /var/run/elasticsearch.pid\nchown elasticsearch:elasticsearch /var/run/elasticsearch.pid\nstart-stop-daemon -v --start --user elasticsearch -c elasticsearch --pidfile /var/run/elasticsearch.pid --exec /usr/share/elasticsearch/bin/elasticsearch -- -p /var/run/elasticsearch.pid -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch\n```\n\nand paste the output here?\n", "created_at": "2014-02-25T14:29:27Z" }, { "body": "{0.90.8}: Initialization Failed ...\n- ElasticSearchIllegalStateException[Failed to obtain node lock, is the following location writable?: [/var/data/elasticsearch/hostname]]\n IOException[failed to obtain lock on /var/data/elasticsearch/hostname/nodes/49]\n IOException[Cannot create directory: /var/data/elasticsearch/hostname/nodes/49]\n Turns out I just forgot to create `/var/data` directory. Now everything works. Thank you, @spinscale.\n", "created_at": "2014-02-25T17:24:38Z" }, { "body": "@empirik One last question: Did this message also occur in the log file at `/var/log/elasticsearch` or was it just printed to stdout? Would be awesome if you could check it!\n", "created_at": "2014-02-26T07:32:07Z" }, { "body": "@spinscale I don't see this message in logs.\n", "created_at": "2014-02-26T18:16:14Z" }, { "body": "Just for the record. Getting same error on startup in openvz, but elasticsearch runs without problems for now. A bit irritating as normaly a service start error means the service won't run. 
As opposed to a warning.\nI expect without correct vm.max_map_count I should expect \"only\" performace problems?\n", "created_at": "2014-03-16T23:53:59Z" }, { "body": "I dont know openvz enough, but not setting this setting can also result in lucene exceptions and indexing problems, it is not only about performance here - I dont how openvz is handling this. Can you get us any insight here @derEremit?\n", "created_at": "2014-04-25T19:51:32Z" }, { "body": "Got the same on openvz\n", "created_at": "2014-05-12T02:07:15Z" }, { "body": "I see the same message on big index but after allocating more memmory elasticsearch starts at least :) Using OpenVZ too and elasticsearch 1.1.1.\n", "created_at": "2014-05-14T08:53:35Z" }, { "body": "Got the same on OpenVZ and elasticsearch 1.2.0.\n", "created_at": "2014-06-03T13:00:04Z" }, { "body": "Got the same on OpenVZ and elasticsearch 1.2.0\nTemplate : [debian-7.0-x86_64.tar.gz](http://download.openvz.org/template/precreated/debian-7.0-x86_64.tar.gz)\nHost Server : Proxmox 3.1-21\n", "created_at": "2014-06-10T21:59:03Z" }, { "body": "Basically openvz does not allow modifications of kernel parameters as these would affect the host and every other guest machine.\nI think that setting should be removed from init scripts and put in a README.\nThe init script could check for the correct setting and output a warning but not set these kernel parameters itself!\n", "created_at": "2014-06-11T14:58:31Z" }, { "body": ":+1: \n", "created_at": "2014-06-26T13:23:59Z" }, { "body": "Hi all,\nkinda newbie here.\nI'm using Proxmox, Ubuntu OpenVZ container, Elasticsearch 1.2.1.\nI had the same service starting issues and I fixed that by removing the \"-d\" option in DAEMON_OPTS (a directory path shoud have been provided here)\nAlso I changed the line \nif [ -n \"$MAX_MAP_COUNT\" ]\nwith\nif [ -n \"$MAX_MAP_COUNT\" ] && [ ! -f /proc/sys/vm/max_map_count ]\nto address the unwanted sysctl issue.\nHope it helps.\n", "created_at": "2014-07-02T17:51:44Z" }, { "body": "Hi all,\n\nThanks to scamianbas for his tips, I tested this on proxmox 3.2-4, debian7-x64 template, works like a charm !\n\nThanks to the elasticsearch guys too for this great piece of software ;)\n", "created_at": "2014-08-10T13:58:01Z" }, { "body": "FYI: fresh install of Unbuntu 12.4 with elastic search 1.3.1 over the top, following this Gist (easy way) for install https://gist.github.com/wingdspur/2026107\n\nI get the error\n- Starting Elasticsearch Server \n error: permission denied on key 'vm.max_map_count'\n\nwhen I do a ps check, I get\n\nps p $(cat /var/run/elasticsearch.pid)\n PID TTY STAT TIME COMMAND\n 8246 ? Sl 0:08 /usr/lib/jvm/java-7-openjdk-i386//bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX\n\nand finally when doing \"service elasticsearch status\"\n\nit says it is running. \n\nSo the error has a slightly different prefix (error: and not sysctl:) but as with other information above it is not stopping the server. If needed I have the screen output with set -x in the elasticsearch config file. \n", "created_at": "2014-08-12T10:28:36Z" }, { "body": "I'm having the same issue on CentOS 6.5 with elasticsearch 1.3.2 in OpenVZ.\nps p $(cat /var/run/elasticsearch/elasticsearch.pid)\n PID TTY STAT TIME COMMAND\n 1132 ? 
Sl 0:05 /usr/bin/java -Xms256m -Xmx1g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Delastics\nJust like andykillen, the service is running after all.\n", "created_at": "2014-09-02T14:29:43Z" }, { "body": "Same issue with LXC and Ubuntu 14.04\n\n```\nsysctl: permission denied on key 'vm.max_map_count'\n```\n\nthough elasticsearch is running, is it a huge problem for production environment?\n", "created_at": "2014-09-30T12:38:16Z" }, { "body": "I also get this error on VPS running Ubuntu 14.04 when I (re)start Elasticsearch but is running at least.\n\n```\nsysctl: permission denied on key 'vm.max_map_count'\n```\n", "created_at": "2014-10-29T16:09:29Z" }, { "body": "Can someone please merge this? This works! :+1: :shipit: \n\n`if [ -n \"$MAX_MAP_COUNT\" ]`\nwith\n`if [ -n \"$MAX_MAP_COUNT\" ] && [ ! -f /proc/sys/vm/max_map_count ]`\n", "created_at": "2014-11-13T15:05:03Z" }, { "body": "@sts, shouldnt the check be vice versa and only done if the file exists and thus without the negation? I am confused or maybe misunderstanding the intent\n", "created_at": "2014-11-13T15:10:39Z" }, { "body": "@sts to avoid the warning you can also edit `/etc/default/elasticsearch` to set `MAX_MAP_COUNT` with an empty value, no need to fix the startup script.\n", "created_at": "2014-11-17T15:57:46Z" }, { "body": "@alexgarel I think it'd be very nice of ES to construct `/etc/default/elasticsearch` that has `MAX_MAP_COUNT` set to a null value when an offending environment like OpenVZ is detected as the host of ES, or otherwise, whatever it is meant to be set to.\n", "created_at": "2014-11-18T10:01:58Z" } ], "number": 4978, "title": "sysctl: permission denied on key 'vm.max_map_count' - OpenVZ Elasticsearch 0.90.9 compatibility issue" }
{ "body": "The packaged init scripts could return an error, if the file\n/proc/sys/vm/max_map_count was not existing and we still called\nsysctl.\n\nThis is primarly to prevent confusing error messages when elasticsearch\nis started under virtualized environments without a proc file system.\n\nCloses #4978\n", "number": 8793, "review_comments": [], "title": "Check if proc file exists before calling sysctl" }
{ "commits": [ { "message": "Packaging: Check if proc file exists before calling sysctl\n\nThe packaged init scripts could return an error, if the file\n/proc/sys/vm/max_map_count was not existing and we still called\nsysctl.\n\nThis is primarly to prevent confusing error messages when elasticsearch\nis started under virtualized environments without a proc file system.\n\nCloses #4978" } ], "files": [ { "diff": "@@ -157,7 +157,7 @@ case \"$1\" in\n \t\tulimit -l $MAX_LOCKED_MEMORY\n \tfi\n \n-\tif [ -n \"$MAX_MAP_COUNT\" ]; then\n+\tif [ -n \"$MAX_MAP_COUNT\" -a -f /proc/sys/vm/max_map_count ]; then\n \t\tsysctl -q -w vm.max_map_count=$MAX_MAP_COUNT\n \tfi\n ", "filename": "src/deb/init.d/elasticsearch", "status": "modified" }, { "diff": "@@ -77,7 +77,7 @@ start() {\n if [ -n \"$MAX_LOCKED_MEMORY\" ]; then\n ulimit -l $MAX_LOCKED_MEMORY\n fi\n- if [ -n \"$MAX_MAP_COUNT\" ]; then\n+ if [ -n \"$MAX_MAP_COUNT\" -a -f /proc/sys/vm/max_map_count ]; then\n sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT\n fi\n if [ -n \"$WORK_DIR\" ]; then", "filename": "src/rpm/init.d/elasticsearch", "status": "modified" } ] }
{ "body": "Given the following mapping and data:\n\n``` json\nPUT /my_index\n{\n \"mappings\": {\n \"landmark\": {\n \"properties\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"location\": {\n \"type\": \"geo_shape\"\n }\n }\n }\n }\n}\n\nPOST /my_index/landmark/1\n{\"name\":\"Dam Square, Amsterdam\",\"location\":{\"type\":\"polygon\",\"coordinates\":[[[4.89218,52.37356],[4.89205,52.37276],[4.89301,52.37274],[4.89392,52.3725],[4.89431,52.37287],[4.89331,52.37346],[4.89305,52.37326],[4.89218,52.37356]]]}}\n```\n\nThe following query, either a \"geo_shape\" filter or query, fails with a SearchParseException enclosing a NullPointerException:\n\n``` json\nGET /my_index/landmark/_search\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"point\",\n \"coordinates\": [[\"4.901238\",\"52.36936\"]]\n }\n }\n }\n }\n}\n```\n\nAnd the exception in logs:\n\n```\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.geo.builders.PointBuilder.build(PointBuilder.java:59)\n at org.elasticsearch.common.geo.builders.PointBuilder.build(PointBuilder.java:29)\n at org.elasticsearch.index.query.GeoShapeQueryParser.getArgs(GeoShapeQueryParser.java:173)\n at org.elasticsearch.index.query.GeoShapeFilterParser.parse(GeoShapeFilterParser.java:178)\n at org.elasticsearch.index.query.QueryParseContext.executeFilterParser(QueryParseContext.java:315)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerFilter(QueryParseContext.java:296)\n at org.elasticsearch.index.query.FilteredQueryParser.parse(FilteredQueryParser.java:74)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:252)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:382)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:281)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:276)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:665)\n```\n", "comments": [ { "body": "Another NPE happens for a query type \"polygon\":\n\n``` json\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"polygon\",\n \"coordinates\": [\"4.901238\",\"52.36936\"]\n }\n }\n }\n }\n}\n```\n\nand the stacktrace:\n\n```\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.common.geo.builders.ShapeBuilder$GeoShapeType.parsePolygon(ShapeBuilder.java:644)\n at org.elasticsearch.common.geo.builders.ShapeBuilder$GeoShapeType.parse(ShapeBuilder.java:597)\n at org.elasticsearch.common.geo.builders.ShapeBuilder.parse(ShapeBuilder.java:235)\n at org.elasticsearch.index.query.GeoShapeQueryParser.parse(GeoShapeQueryParser.java:86)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:252)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:382)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:281)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:276)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:665)\n```\n", "created_at": "2014-11-11T09:15:03Z" }, { "body": "@nknize can you take a look at this?\n", "created_at": 
"2014-12-01T21:50:03Z" }, { "body": "The following is probably obvious (and likely the intent of the test):\n\nThe GeoJSON in the query request is invalid. e.g., \n\n``` java\n\"shape\": {\n \"type\": \"point\",\n \"coordinates\": [[\"4.901238\",\"52.36936\"]]\n }\n```\n\nShould either be:\n\n``` java\n\"type\": \"multipoint\",\n...\n```\n\nor:\n\n``` java\n...\n\"coordinates\": [\"4.901238\",\"52.36936\"]}\n```\n\nSame with the polygon - it should be an array of LinearRings (closed LineStrings). \n\nI'll add a more useful parse error message instead of the \"give up\" NPE. \n\nFYI, most of the shape parsing logic can use better error handling so I'm certain this isn't the last NPE parse error. I'll be adding better error handling as I go so add the issues as you find them.\n", "created_at": "2014-12-04T19:33:07Z" }, { "body": "if ripping it out and redo is a potential better solution I am all for it @nknize especially if we can add unittests to it as well :)\n", "created_at": "2014-12-04T19:35:21Z" } ], "number": 8432, "title": "NPE enclosed in a SearchParseException for a \"point\" type \"geo_shape\" filter or query" }
{ "body": "...ter or query\n\nThis fix adds better error handling for parsing multipoint, linestring, and polygon GeoJSONs. Current logic throws a NPE when parsing a multipoint, linestring, or polygon that does not comply with the GeoJSON specification. That is, if a user provides a single coordinate instead of an array of coordinates, or array of linestrings, the ShapeParser throws a NPE wrapped in a SearchParseException instead of a more useful error message.\n\nCloses #8432\n", "number": 8785, "review_comments": [ { "body": "can we use `== false` instead of `!` it's so much easier to read and burned my fingers too often\n", "created_at": "2014-12-04T22:33:11Z" }, { "body": "maybe use `coordinates.children.isEmpty()` instead\n", "created_at": "2014-12-04T22:33:42Z" }, { "body": "maybe use `coordinates.children.isEmpty()`instead\n", "created_at": "2014-12-04T22:34:02Z" }, { "body": "maybe use `coordinates.children.isEmpty()` instead\n", "created_at": "2014-12-04T22:40:47Z" }, { "body": "This message suggests that it is valid to have a multipoint with no coordinates, is that correct?\n", "created_at": "2014-12-05T08:26:29Z" }, { "body": "Unlike LinearRing, LineString, and Polygon, the spec doesn't explicitly state a minimum number of points. Since you raised a very valid question - and as I think about more use cases - I think you're correct to point this out as not being user friendly. In fact, accepting 0 coordinates could lead to a lot of silent failures and extremely unhappy users.\n\nI made the change to throw a Parse Exception if an empty array is provided. Good catch!\n", "created_at": "2014-12-05T16:06:26Z" } ], "title": "Fix for NPE enclosed in SearchParseException for a \"geo_shape\" filter or query" }
{ "commits": [ { "message": "[GEO] Fix for NPE enclosed in SearchParseException for a \"geo_shape\" filter or query\n\nThis fix adds better error handling for parsing multipoint, linestring, and polygon GeoJSONs. Current logic throws a NPE when parsing a multipoint, linestring, or polygon that does not comply with the GeoJSON specification. That is, if a user provides a single coordinate instead of an array of coordinates, or array of linestrings, the ShapeParser throws a NPE wrapped in a SearchParseException instead of a more useful error message.\n\nCloses #8432" } ], "files": [ { "diff": "@@ -346,6 +346,10 @@ protected CoordinateNode(List<CoordinateNode> children) {\n this.coordinate = null;\n }\n \n+ protected boolean isEmpty() {\n+ return (coordinate == null && (children == null || children.isEmpty()));\n+ }\n+\n @Override\n public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {\n if (children == null) {\n@@ -607,7 +611,19 @@ public static ShapeBuilder parse(XContentParser parser) throws IOException {\n }\n }\n \n+ protected static void validatePointNode(CoordinateNode node) {\n+ if (node.isEmpty()) {\n+ throw new ElasticsearchParseException(\"Invalid number of points (0) provided when expecting a single coordinate \"\n+ + \"([lat, lng])\");\n+ } else if (node.coordinate == null) {\n+ if (node.children.isEmpty() == false) {\n+ throw new ElasticsearchParseException(\"multipoint data provided when single point data expected.\");\n+ }\n+ }\n+ }\n+\n protected static PointBuilder parsePoint(CoordinateNode node) {\n+ validatePointNode(node);\n return newPoint(node.coordinate);\n }\n \n@@ -619,7 +635,24 @@ protected static EnvelopeBuilder parseEnvelope(CoordinateNode coordinates) {\n return newEnvelope().topLeft(coordinates.children.get(0).coordinate).bottomRight(coordinates.children.get(1).coordinate);\n }\n \n+ protected static void validateMultiPointNode(CoordinateNode coordinates) {\n+ if (coordinates.children == null || coordinates.children.isEmpty()) {\n+ if (coordinates.coordinate != null) {\n+ throw new ElasticsearchParseException(\"single coordinate found when expecting an array of \" +\n+ \"coordinates. change type to point or change data to an array of >0 coordinates\");\n+ }\n+ throw new ElasticsearchParseException(\"No data provided for multipoint object when expecting \" +\n+ \">0 points (e.g., [[lat, lng]] or [[lat, lng], ...])\");\n+ } else {\n+ for (CoordinateNode point : coordinates.children) {\n+ validatePointNode(point);\n+ }\n+ }\n+ }\n+\n protected static MultiPointBuilder parseMultiPoint(CoordinateNode coordinates) {\n+ validateMultiPointNode(coordinates);\n+\n MultiPointBuilder points = new MultiPointBuilder();\n for (CoordinateNode node : coordinates.children) {\n points.point(node.coordinate);\n@@ -671,6 +704,11 @@ protected static LineStringBuilder parseLinearRing(CoordinateNode coordinates) {\n }\n \n protected static PolygonBuilder parsePolygon(CoordinateNode coordinates) {\n+ if (coordinates.children == null || coordinates.children.isEmpty()) {\n+ throw new ElasticsearchParseException(\"Invalid LinearRing provided for type polygon. 
Linear ring must be an array of \" +\n+ \"coordinates\");\n+ }\n+\n LineStringBuilder shell = parseLinearRing(coordinates.children.get(0));\n PolygonBuilder polygon = new PolygonBuilder(shell.points);\n for (int i = 1; i < coordinates.children.size(); i++) {", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -157,6 +157,58 @@ public void testParse_polygonNoHoles() throws IOException {\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n \n+ @Test\n+ public void testParse_invalidPoint() throws IOException {\n+ // test case 1: create an invalid point object with multipoint data format\n+ String invalidPoint1 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"point\")\n+ .startArray(\"coordinates\")\n+ .startArray().value(-74.011).value(40.753).endArray()\n+ .endArray()\n+ .endObject().string();\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(invalidPoint1);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+\n+ // test case 2: create an invalid point object with an empty number of coordinates\n+ String invalidPoint2 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"point\")\n+ .startArray(\"coordinates\")\n+ .endArray()\n+ .endObject().string();\n+ parser = JsonXContent.jsonXContent.createParser(invalidPoint2);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+ }\n+\n+ @Test\n+ public void testParse_invalidMultipoint() throws IOException {\n+ // test case 1: create an invalid multipoint object with single coordinate\n+ String invalidMultipoint1 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"multipoint\")\n+ .startArray(\"coordinates\").value(-74.011).value(40.753).endArray()\n+ .endObject().string();\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(invalidMultipoint1);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+\n+ // test case 2: create an invalid multipoint object with null coordinate\n+ String invalidMultipoint2 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"multipoint\")\n+ .startArray(\"coordinates\")\n+ .endArray()\n+ .endObject().string();\n+ parser = JsonXContent.jsonXContent.createParser(invalidMultipoint2);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+\n+ // test case 3: create a valid formatted multipoint object with invalid number (0) of coordinates\n+ String invalidMultipoint3 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"multipoint\")\n+ .startArray(\"coordinates\")\n+ .startArray().endArray()\n+ .endArray()\n+ .endObject().string();\n+ parser = JsonXContent.jsonXContent.createParser(invalidMultipoint3);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+ }\n+\n @Test\n public void testParse_invalidPolygon() throws IOException {\n /**\n@@ -225,6 +277,15 @@ public void testParse_invalidPolygon() throws IOException {\n parser = JsonXContent.jsonXContent.createParser(invalidPoly5);\n parser.nextToken();\n ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchIllegalArgumentException.class);\n+\n+ // test case 6: create an invalid polygon with 0 LinearRings\n+ String invalidPoly6 = 
XContentFactory.jsonBuilder().startObject().field(\"type\", \"polygon\")\n+ .startArray(\"coordinates\").endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(invalidPoly6);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n }\n \n @Test", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" } ] }
{ "body": "The GeoJSON specification (http://geojson.org/geojson-spec.html) does not mandate a specific order for polygon vertices thus leading to ambiguous polys around the dateline. To alleviate ambiguity the OGC requires vertex ordering for exterior rings according to the right-hand rule (ccw) with interior rings in the reverse order (cw) (http://www.opengeospatial.org/standards/sfa). While JTS expects all vertices in cw order (http://tsusiatsoftware.net/jts/jts-faq/jts-faq.html). Spatial4j circumvents the issue by not allowing polys to exceed 180 degs in width, thus choosing the smaller of the two polys. Since GEO core includes logic that determines orientation at runtime, the following OGC compliant poly will fail: \n\n``` java\n {\n \"geo_shape\" : {\n \"loc\" : {\n \"shape\" : {\n \"type\" : \"polygon\",\n \"coordinates\" : [ [ \n [ 176, 15 ], \n [ -177, 10 ], \n [ -177, -10 ], \n [ 176, -15 ], \n [ 172, 0 ],\n [ 176, 15] ], \n [ [ -179, 5],\n [-179, -5],\n [176, -5],\n [176, 5],\n [-179, -5] ]\n ]\n }\n }\n }\n}\n```\n\nOne workaround is to manually transform coordinates in the supplied GeoJSON from -180:180 to a 0:360 coordinate system (e.g, -179 = 181). This, of course, fails to comply with OGC specs and requires clients roll their own transform.\n\nThe other (preferred) solution - and the purpose of this issue - is to correct the orientation logic such that GEO core supports OGC compliant polygons without the 180 degree restriction/workaround.\n", "comments": [ { "body": "Follow feature branch feature/WKT_poly_vertex_order for WIP\n", "created_at": "2014-11-26T22:50:58Z" }, { "body": "Fixed in #8762\n", "created_at": "2014-12-16T18:53:56Z" } ], "number": 8672, "title": "[GEO] OGC compliant polygons fail with ambiguity" }
{ "body": "This feature/fix implements OGC compliance for Polygon/Multi-polygon to correctly cross the dateline without requiring the user specify invalid lat/lon pairs (e.g., 183/-70). That is, vertex order for the exterior ring follows the right-hand rule (ccw) and all holes follow the left-hand rule (cw). While GeoJSON imposes no restrictions, a user that wants to specify a complex poly across the dateline can now do so in compliance with the OGC spec, otherwise a polygon that spans the globe will be assumed. Note that this could break a users current vertex order (i.e., the user specified a non-dateline crossing global poly in cw order - see ShapeBuilderTests.testShapeWithAlternateOrientation). In this case the user would have to comply with OGC standards and reorder the vertices ccw. This does not, however, break existing ordering for polys < 180 degrees (90% of most geo datasets) or invalid lat/lon pairs. So a user or existing deployment can still specify a dateline crossing poly using invalid lat/lon pairs. This simply implements OGC compliance to handle the ambiguous polygon problem. \n\nCloses #8672\n", "number": 8762, "review_comments": [], "title": "Feature/Fix for OGC compliant polygons failing with ambiguity" }
{ "commits": [ { "message": "[GEO] OGC compliant polygons fail with ambiguity\n\nThis feature branch implements OGC compliance for Polygon/Multi-polygon. That is, vertex order for the exterior ring follows the right-hand rule (ccw) and all holes follow the left-hand rule (cw). While GeoJSON imposes no restrictions, a user that wants to specify a complex poly across the dateline must do so in compliance with the OGC spec, otherwise a polygon that spans the globe will be assumed.\n\nReference issue #8672\n\nFix orientation of outer and inner ring for polygon with holes. Updated unit tests. Bug exists in boundary condition on negative side of dateline." }, { "message": "Updating connect method to prevent duplicate edges" }, { "message": "Computational geometry logic changes to support OGC standards\n\nThis commit adds the logic necessary for supporting polygon vertex ordering per OGC standards. Exterior rings will be treated in ccw (right-handed rule) and interior rings will be treated in cw (left-handed rule). This feature change supports polygons that cross the dateline, and those that span the globe/map. The unit tests have been updated and corrected to test various situations. Greater test coverage will be provided in future commits.\n\nAddresses #8672" }, { "message": "Updating translation gate check to disregard order of hole vertices for non dateline crossing polys.\n\nUpdating comments and code readability\n\nCorrecting code formatting" }, { "message": "Adding dateline test with valid lat/lon pairs\n\nCleanup: Removing unnecessary logic checks" }, { "message": "Adding unit test for self intersecting polygons. Relevant to #7751 even/odd discussion\n\nUpdating documentation to describe polygon ambiguity and vertex ordering." }, { "message": "Adding unit tests for clockwise non-OGC ordering\n\nAdding unit tests to validate cw defined polys not-crossing and crossing the dateline, respectively" } ], "files": [ { "diff": "@@ -157,7 +157,7 @@ units, which default to `METERS`.\n For all types, both the inner `type` and `coordinates` fields are\n required.\n \n-Note: In GeoJSON, and therefore Elasticsearch, the correct *coordinate\n+In GeoJSON, and therefore Elasticsearch, the correct *coordinate\n order is longitude, latitude (X, Y)* within coordinate arrays. This\n differs from many Geospatial APIs (e.g., Google Maps) that generally\n use the colloquial latitude, longitude (Y, X).\n@@ -235,6 +235,36 @@ arrays represent the interior shapes (\"holes\"):\n }\n --------------------------------------------------\n \n+*IMPORTANT NOTE:* GeoJSON does not mandate a specific order for vertices thus ambiguous\n+polygons around the dateline and poles are possible. To alleviate ambiguity\n+the Open Geospatial Consortium (OGC)\n+http://www.opengeospatial.org/standards/sfa[Simple Feature Access] specification\n+defines the following vertex ordering:\n+\n+* Outer Ring - Counterclockwise\n+* Inner Ring(s) / Holes - Clockwise\n+\n+For polygons that do not cross the dateline, vertex order will not matter in\n+Elasticsearch. For polygons that do cross the dateline, Elasticsearch requires\n+vertex orderinging comply with the OGC specification. Otherwise, an unintended polygon\n+may be created and unexpected query/filter results will be returned.\n+\n+The following provides an example of an ambiguous polygon. 
Elasticsearch will apply\n+OGC standards to eliminate ambiguity resulting in a polygon that crosses the dateline.\n+\n+[source,js]\n+--------------------------------------------------\n+{\n+ \"location\" : {\n+ \"type\" : \"polygon\",\n+ \"coordinates\" : [\n+ [ [-177.0, 10.0], [176.0, 15.0], [172.0, 0.0], [176.0, -15.0], [-177.0, -10.0], [-177.0, 10.0] ],\n+ [ [178.2, 8.2], [-178.8, 8.2], [-180.8, -8.8], [178.2, 8.8] ]\n+ ]\n+ }\n+}\n+--------------------------------------------------\n+\n [float]\n ===== http://www.geojson.org/geojson-spec.html#id5[MultiPoint]\n ", "filename": "docs/reference/mapping/types/geo-shape-type.asciidoc", "status": "modified" }, { "diff": "@@ -125,10 +125,9 @@ public Coordinate[][][] coordinates() {\n \n Edge[] edges = new Edge[numEdges];\n Edge[] holeComponents = new Edge[holes.size()];\n-\n- int offset = createEdges(0, true, shell, edges, 0);\n+ int offset = createEdges(0, false, shell, null, edges, 0);\n for (int i = 0; i < holes.size(); i++) {\n- int length = createEdges(i+1, false, this.holes.get(i), edges, offset);\n+ int length = createEdges(i+1, true, shell, this.holes.get(i), edges, offset);\n holeComponents[i] = edges[offset];\n offset += length;\n }\n@@ -396,15 +395,21 @@ private static int merge(Edge[] intersections, int offset, int length, Edge[] ho\n holes[numHoles] = null;\n }\n // only connect edges if intersections are pairwise \n- // per the comment above, the edge array is sorted by y-value of the intersection\n+ // 1. per the comment above, the edge array is sorted by y-value of the intersection\n // with the dateline. Two edges have the same y intercept when they cross the \n // dateline thus they appear sequentially (pairwise) in the edge array. Two edges\n // do not have the same y intercept when we're forming a multi-poly from a poly\n // that wraps the dateline (but there are 2 ordered intercepts). \n // The connect method creates a new edge for these paired edges in the linked list. \n // For boundary conditions (e.g., intersect but not crossing) there is no sibling edge \n- // to connect. Thus the following enforces the pairwise rule \n- if (e1.intersect != Edge.MAX_COORDINATE && e2.intersect != Edge.MAX_COORDINATE) {\n+ // to connect. Thus the first logic check enforces the pairwise rule\n+ // 2. 
the second logic check ensures the two candidate edges aren't already connected by an\n+ // existing edge along the dateline - this is necessary due to a logic change in\n+ // ShapeBuilder.intersection that computes dateline edges as valid intersect points \n+ // in support of OGC standards\n+ if (e1.intersect != Edge.MAX_COORDINATE && e2.intersect != Edge.MAX_COORDINATE \n+ && !(e1.next.next.coordinate.equals3D(e2.coordinate) && Math.abs(e1.next.coordinate.x) == DATELINE\n+ && Math.abs(e2.coordinate.x) == DATELINE) ) {\n connect(e1, e2);\n }\n }\n@@ -431,7 +436,7 @@ private static void connect(Edge in, Edge out) {\n in.next = new Edge(in.intersect, out.next, in.intersect);\n }\n out.next = new Edge(out.intersect, e1, out.intersect);\n- } else if (in.next != out){\n+ } else if (in.next != out && in.coordinate != out.intersect) {\n // first edge intersects with dateline\n Edge e2 = new Edge(out.intersect, in.next, out.intersect);\n \n@@ -448,9 +453,11 @@ private static void connect(Edge in, Edge out) {\n }\n }\n \n- private static int createEdges(int component, boolean direction, BaseLineStringBuilder<?> line, Edge[] edges, int offset) {\n- Coordinate[] points = line.coordinates(false); // last point is repeated \n- Edge.ring(component, direction, points, 0, edges, offset, points.length-1);\n+ private static int createEdges(int component, boolean direction, BaseLineStringBuilder<?> shell, BaseLineStringBuilder<?> hole,\n+ Edge[] edges, int offset) {\n+ // set the points array accordingly (shell or hole)\n+ Coordinate[] points = (hole != null) ? hole.coordinates(false) : shell.coordinates(false);\n+ Edge.ring(component, direction, shell, points, 0, edges, offset, points.length-1);\n return points.length-1;\n }\n ", "filename": "src/main/java/org/elasticsearch/common/geo/builders/BasePolygonBuilder.java", "status": "modified" }, { "diff": "@@ -34,6 +34,7 @@\n public abstract class PointCollection<E extends PointCollection<E>> extends ShapeBuilder {\n \n protected final ArrayList<Coordinate> points;\n+ protected boolean translated = false;\n \n protected PointCollection() {\n this(new ArrayList<Coordinate>());", "filename": "src/main/java/org/elasticsearch/common/geo/builders/PointCollection.java", "status": "modified" }, { "diff": "@@ -25,6 +25,7 @@\n import com.vividsolutions.jts.geom.Coordinate;\n import com.vividsolutions.jts.geom.Geometry;\n import com.vividsolutions.jts.geom.GeometryFactory;\n+import org.apache.commons.lang3.tuple.Pair;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.logging.ESLogger;\n@@ -271,8 +272,10 @@ protected static Coordinate shift(Coordinate coordinate, double dateline) {\n * returns {@link Double#NaN}\n */\n protected static final double intersection(Coordinate p1, Coordinate p2, double dateline) {\n- if (p1.x == p2.x) {\n+ if (p1.x == p2.x && p1.x != dateline) {\n return Double.NaN;\n+ } else if (p1.x == p2.x && p1.x == dateline) {\n+ return 1.0;\n } else {\n final double t = (dateline - p1.x) / (p2.x - p1.x);\n if (t > 1 || t <= 0) {\n@@ -403,6 +406,29 @@ private static final int top(Coordinate[] points, int offset, int length) {\n return top;\n }\n \n+ private static final Pair range(Coordinate[] points, int offset, int length) {\n+ double minX = points[0].x;\n+ double maxX = points[0].x;\n+ double minY = points[0].y;\n+ double maxY = points[0].y;\n+ // compute the bounding coordinates (@todo: cleanup brute force)\n+ for (int i = 1; i < length; ++i) 
{\n+ if (points[offset + i].x < minX) {\n+ minX = points[offset + i].x;\n+ }\n+ if (points[offset + i].x > maxX) {\n+ maxX = points[offset + i].x;\n+ }\n+ if (points[offset + i].y < minY) {\n+ minY = points[offset + i].y;\n+ }\n+ if (points[offset + i].y > maxY) {\n+ maxY = points[offset + i].y;\n+ }\n+ }\n+ return Pair.of(Pair.of(minX, maxX), Pair.of(minY, maxY));\n+ }\n+\n /**\n * Concatenate a set of points to a polygon\n * \n@@ -459,19 +485,56 @@ private static Edge[] concat(int component, boolean direction, Coordinate[] poin\n * number of points\n * @return Array of edges\n */\n- protected static Edge[] ring(int component, boolean direction, Coordinate[] points, int offset, Edge[] edges, int toffset,\n- int length) {\n+ protected static Edge[] ring(int component, boolean direction, BaseLineStringBuilder<?> shell, Coordinate[] points, int offset, \n+ Edge[] edges, int toffset, int length) {\n // calculate the direction of the points:\n // find the point a the top of the set and check its\n // neighbors orientation. So direction is equivalent\n // to clockwise/counterclockwise\n final int top = top(points, offset, length);\n final int prev = (offset + ((top + length - 1) % length));\n final int next = (offset + ((top + 1) % length));\n- final boolean orientation = points[offset + prev].x > points[offset + next].x;\n+ boolean orientation = points[offset + prev].x > points[offset + next].x;\n+\n+ // OGC requires shell as ccw (Right-Handedness) and holes as cw (Left-Handedness) \n+ // since GeoJSON doesn't specify (and doesn't need to) GEO core will assume OGC standards\n+ // thus if orientation is computed as cw, the logic will translate points across dateline\n+ // and convert to a right handed system\n+\n+ // compute the bounding box and calculate range\n+ Pair<Pair, Pair> range = range(points, offset, length);\n+ final double rng = (Double)range.getLeft().getRight() - (Double)range.getLeft().getLeft();\n+ // translate the points if the following is true\n+ // 1. shell orientation is cw and range is greater than a hemisphere (180 degrees) but not spanning 2 hemispheres \n+ // (translation would result in a collapsed poly)\n+ // 2. 
the shell of the candidate hole has been translated (to preserve the coordinate system)\n+ if (((component == 0 && orientation) && (rng > DATELINE && rng != 2*DATELINE))\n+ || (shell.translated && component != 0)) {\n+ translate(points);\n+ // flip the translation bit if the shell is being translated\n+ if (component == 0) {\n+ shell.translated = true;\n+ }\n+ // correct the orientation post translation (ccw for shell, cw for holes)\n+ if (component == 0 || (component != 0 && !orientation)) {\n+ orientation = !orientation;\n+ }\n+ }\n return concat(component, direction ^ orientation, points, offset, edges, toffset, length);\n }\n \n+ /**\n+ * Transforms coordinates in the eastern hemisphere (-180:0) to a (180:360) range \n+ * @param points\n+ */\n+ protected static void translate(Coordinate[] points) {\n+ for (Coordinate c : points) {\n+ if (c.x < 0) {\n+ c.x += 2*DATELINE;\n+ }\n+ }\n+ }\n+\n /**\n * Set the intersection of this line segment to the given position\n * ", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.common.geo;\n \n+import com.spatial4j.core.exception.InvalidShapeException;\n import com.spatial4j.core.shape.Circle;\n import com.spatial4j.core.shape.Rectangle;\n import com.spatial4j.core.shape.Shape;\n@@ -209,6 +210,196 @@ public void testParse_invalidMultipoint() throws IOException {\n ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n }\n \n+ @Test\n+ public void testParse_OGCPolygonWithoutHoles() throws IOException {\n+ // test 1: ccw poly not crossing dateline\n+ String polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .startArray().value(-177.0).value(10.0).endArray()\n+ .startArray().value(-177.0).value(-10.0).endArray()\n+ .startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(172.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ Shape shape = ShapeBuilder.parse(parser).build();\n+ \n+ ElasticsearchGeoAssertions.assertPolygon(shape);\n+\n+ // test 2: ccw poly crossing dateline\n+ polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(-177.0).value(10.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .startArray().value(172.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(-177.0).value(-10.0).endArray()\n+ .startArray().value(-177.0).value(10.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ shape = ShapeBuilder.parse(parser).build();\n+ \n+ ElasticsearchGeoAssertions.assertMultiPolygon(shape);\n+\n+ // test 3: cw poly not crossing dateline\n+ polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .startArray().value(180.0).value(10.0).endArray()\n+ .startArray().value(180.0).value(-10.0).endArray()\n+ 
.startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(172.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ shape = ShapeBuilder.parse(parser).build();\n+\n+ ElasticsearchGeoAssertions.assertPolygon(shape);\n+\n+ // test 4: cw poly crossing dateline\n+ polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .startArray().value(184.0).value(15.0).endArray()\n+ .startArray().value(184.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(174.0).value(-10.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ shape = ShapeBuilder.parse(parser).build();\n+\n+ ElasticsearchGeoAssertions.assertMultiPolygon(shape);\n+ }\n+\n+ @Test\n+ public void testParse_OGCPolygonWithHoles() throws IOException {\n+ // test 1: ccw poly not crossing dateline\n+ String polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .startArray().value(-177.0).value(10.0).endArray()\n+ .startArray().value(-177.0).value(-10.0).endArray()\n+ .startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(172.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .endArray()\n+ .startArray()\n+ .startArray().value(-172.0).value(8.0).endArray()\n+ .startArray().value(174.0).value(10.0).endArray()\n+ .startArray().value(-172.0).value(-8.0).endArray()\n+ .startArray().value(-172.0).value(8.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ Shape shape = ShapeBuilder.parse(parser).build();\n+\n+ ElasticsearchGeoAssertions.assertPolygon(shape);\n+\n+ // test 2: ccw poly crossing dateline\n+ polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(-177.0).value(10.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .startArray().value(172.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(-177.0).value(-10.0).endArray()\n+ .startArray().value(-177.0).value(10.0).endArray()\n+ .endArray()\n+ .startArray()\n+ .startArray().value(178.0).value(8.0).endArray()\n+ .startArray().value(-178.0).value(8.0).endArray()\n+ .startArray().value(-180.0).value(-8.0).endArray()\n+ .startArray().value(178.0).value(8.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ shape = ShapeBuilder.parse(parser).build();\n+\n+ ElasticsearchGeoAssertions.assertMultiPolygon(shape);\n+\n+ // test 3: cw poly not crossing dateline\n+ polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ 
.startArray().value(180.0).value(10.0).endArray()\n+ .startArray().value(179.0).value(-10.0).endArray()\n+ .startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(172.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .endArray()\n+ .startArray()\n+ .startArray().value(177.0).value(8.0).endArray()\n+ .startArray().value(179.0).value(10.0).endArray()\n+ .startArray().value(179.0).value(-8.0).endArray()\n+ .startArray().value(177.0).value(8.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ shape = ShapeBuilder.parse(parser).build();\n+\n+ ElasticsearchGeoAssertions.assertPolygon(shape);\n+\n+ // test 4: cw poly crossing dateline\n+ polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(183.0).value(10.0).endArray()\n+ .startArray().value(183.0).value(-10.0).endArray()\n+ .startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(172.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .startArray().value(183.0).value(10.0).endArray()\n+ .endArray()\n+ .startArray()\n+ .startArray().value(178.0).value(8.0).endArray()\n+ .startArray().value(182.0).value(8.0).endArray()\n+ .startArray().value(180.0).value(-8.0).endArray()\n+ .startArray().value(178.0).value(8.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ shape = ShapeBuilder.parse(parser).build();\n+\n+ ElasticsearchGeoAssertions.assertMultiPolygon(shape);\n+ }\n+ \n @Test\n public void testParse_invalidPolygon() throws IOException {\n /**\n@@ -332,6 +523,28 @@ public void testParse_polygonWithHole() throws IOException {\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n \n+ @Test\n+ public void testParse_selfCrossingPolygon() throws IOException {\n+ // test self crossing ccw poly not crossing dateline\n+ String polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .startArray().value(-177.0).value(10.0).endArray()\n+ .startArray().value(-177.0).value(-10.0).endArray()\n+ .startArray().value(176.0).value(-15.0).endArray()\n+ .startArray().value(-177.0).value(15.0).endArray()\n+ .startArray().value(172.0).value(0.0).endArray()\n+ .startArray().value(176.0).value(15.0).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(polygonGeoJson);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, InvalidShapeException.class);\n+ }\n+\n @Test\n public void testParse_multiPoint() throws IOException {\n String multiPointGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"MultiPoint\")\n@@ -417,7 +630,7 @@ public void testParse_multiPolygon() throws IOException {\n @Test\n public void testParse_geometryCollection() throws IOException {\n String geometryCollectionGeoJson = XContentFactory.jsonBuilder().startObject()\n- .field(\"type\",\"GeometryCollection\")\n+ .field(\"type\", \"GeometryCollection\")\n .startArray(\"geometries\")\n .startObject()\n .field(\"type\", \"LineString\")", "filename": 
"src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" }, { "diff": "@@ -240,47 +240,91 @@ public void testLineStringWrapping() {\n }\n \n @Test\n- public void testDateline() {\n- // view shape at https://gist.github.com/anonymous/7f1bb6d7e9cd72f5977c\n- // expect 3 polygons, 1 with a hole\n+ public void testDatelineOGC() {\n+ // tests that the following shape (defined in counterclockwise OGC order)\n+ // https://gist.github.com/anonymous/7f1bb6d7e9cd72f5977c crosses the dateline\n+ // expected results: 3 polygons, 1 with a hole\n \n // a giant c shape\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n- .point(-186,0)\n+ .point(174,0)\n .point(-176,0)\n .point(-176,3)\n- .point(-183,3)\n- .point(-183,5)\n+ .point(177,3)\n+ .point(177,5)\n .point(-176,5)\n .point(-176,8)\n- .point(-186,8)\n- .point(-186,0);\n+ .point(174,8)\n+ .point(174,0);\n \n // 3/4 of an embedded 'c', crossing dateline once\n builder.hole()\n- .point(-185,1)\n- .point(-181,1)\n- .point(-181,2)\n- .point(-184,2)\n- .point(-184,6)\n- .point(-178,6)\n- .point(-178,7)\n- .point(-185,7)\n- .point(-185,1);\n+ .point(175, 1)\n+ .point(175, 7)\n+ .point(-178, 7)\n+ .point(-178, 6)\n+ .point(176, 6)\n+ .point(176, 2)\n+ .point(179, 2)\n+ .point(179,1)\n+ .point(175, 1);\n \n // embedded hole right of the dateline\n builder.hole()\n- .point(-179,1)\n+ .point(-179, 1)\n+ .point(-179, 2)\n+ .point(-177, 2)\n .point(-177,1)\n- .point(-177,2)\n- .point(-179,2)\n .point(-179,1);\n \n Shape shape = builder.close().build();\n \n- assertMultiPolygon(shape);\n- }\n+ assertMultiPolygon(shape);\n+ }\n+\n+ @Test\n+ public void testDateline() {\n+ // tests that the following shape (defined in clockwise non-OGC order)\n+ // https://gist.github.com/anonymous/7f1bb6d7e9cd72f5977c crosses the dateline\n+ // expected results: 3 polygons, 1 with a hole\n+\n+ // a giant c shape\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(-186,0)\n+ .point(-176,0)\n+ .point(-176,3)\n+ .point(-183,3)\n+ .point(-183,5)\n+ .point(-176,5)\n+ .point(-176,8)\n+ .point(-186,8)\n+ .point(-186,0);\n+\n+ // 3/4 of an embedded 'c', crossing dateline once\n+ builder.hole()\n+ .point(-185,1)\n+ .point(-181,1)\n+ .point(-181,2)\n+ .point(-184,2)\n+ .point(-184,6)\n+ .point(-178,6)\n+ .point(-178,7)\n+ .point(-185,7)\n+ .point(-185,1);\n+\n+ // embedded hole right of the dateline\n+ builder.hole()\n+ .point(-179,1)\n+ .point(-177,1)\n+ .point(-177,2)\n+ .point(-179,2)\n+ .point(-179,1);\n+\n+ Shape shape = builder.close().build();\n \n+ assertMultiPolygon(shape);\n+ }\n+ \n @Test\n public void testComplexShapeWithHole() {\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n@@ -406,15 +450,51 @@ public void testShapeWithEdgeAlongDateline() {\n \n // test case 2: test the negative side of the dateline\n builder = ShapeBuilder.newPolygon()\n- .point(-180, 0)\n .point(-176, 4)\n+ .point(-180, 0)\n .point(-180, -4)\n- .point(-180, 0);\n+ .point(-176, 4);\n \n shape = builder.close().build();\n assertPolygon(shape);\n }\n \n+ @Test\n+ public void testShapeWithBoundaryHoles() {\n+ // test case 1: test the positive side of the dateline\n+ PolygonBuilder builder = ShapeBuilder.newPolygon()\n+ .point(-177, 10)\n+ .point(176, 15)\n+ .point(172, 0)\n+ .point(176, -15)\n+ .point(-177, -10)\n+ .point(-177, 10);\n+ builder.hole()\n+ .point(176, 10)\n+ .point(180, 5)\n+ .point(180, -5)\n+ .point(176, -10)\n+ .point(176, 10);\n+ Shape shape = builder.close().build();\n+ assertMultiPolygon(shape);\n+\n+ // test case 2: test the 
negative side of the dateline\n+ builder = ShapeBuilder.newPolygon()\n+ .point(-176, 15)\n+ .point(179, 10)\n+ .point(179, -10)\n+ .point(-176, -15)\n+ .point(-172,0);\n+ builder.hole()\n+ .point(-176, 10)\n+ .point(-176, -10)\n+ .point(-180, -5)\n+ .point(-180, 5)\n+ .point(-176, 10);\n+ shape = builder.close().build();\n+ assertMultiPolygon(shape);\n+ }\n+\n /**\n * Test an enveloping polygon around the max mercator bounds\n */\n@@ -432,15 +512,26 @@ public void testBoundaryShape() {\n }\n \n @Test\n- public void testShapeWithEdgeAcrossDateline() {\n+ public void testShapeWithAlternateOrientation() {\n+ // ccw: should produce a single polygon spanning hemispheres\n PolygonBuilder builder = ShapeBuilder.newPolygon()\n .point(180, 0)\n .point(176, 4)\n .point(-176, 4)\n .point(180, 0);\n \n Shape shape = builder.close().build();\n+ assertPolygon(shape);\n \n- assertPolygon(shape);\n+ // cw: geo core will convert to ccw across the dateline\n+ builder = ShapeBuilder.newPolygon()\n+ .point(180, 0)\n+ .point(-176, 4)\n+ .point(176, 4)\n+ .point(180, 0);\n+\n+ shape = builder.close().build();\n+\n+ assertMultiPolygon(shape);\n }\n }", "filename": "src/test/java/org/elasticsearch/common/geo/ShapeBuilderTests.java", "status": "modified" }, { "diff": "@@ -251,7 +251,7 @@ private static double distance(double lat1, double lon1, double lat2, double lon\n \n public static void assertValidException(XContentParser parser, Class expectedException) {\n try {\n- ShapeBuilder.parse(parser);\n+ ShapeBuilder.parse(parser).build();\n Assert.fail(\"process completed successfully when \" + expectedException.getName() + \" expected\");\n } catch (Exception e) {\n assert(e.getClass().equals(expectedException)):", "filename": "src/test/java/org/elasticsearch/test/hamcrest/ElasticsearchGeoAssertions.java", "status": "modified" } ] }
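The ring(...) change above decides orientation by comparing the neighbours of the topmost vertex, and translates negative longitudes when a clockwise-ordered shell wider than a hemisphere is detected (or when the shell of a hole has already been translated). A more familiar way to express the orientation test is the signed (shoelace) area. The sketch below only illustrates that idea and is not the PR's code; class and method names are invented, and the sample ring is the counterclockwise dateline-crossing shell used in the new tests:

``` java
public class RingOrientation {

    /**
     * Shoelace test: returns true when the closed ring (first point repeated
     * as the last, points given as {lon, lat}) is wound counterclockwise,
     * i.e. its signed area is positive. OGC expects ccw exterior rings and
     * cw holes.
     */
    static boolean isCounterClockwise(double[][] ring) {
        double signedArea = 0;
        for (int i = 0; i < ring.length - 1; i++) {
            double[] p = ring[i];
            double[] q = ring[i + 1];
            signedArea += p[0] * q[1] - q[0] * p[1];
        }
        return signedArea > 0;
    }

    /** Shift negative longitudes into 180..360, in the spirit of the translate step above. */
    static void shiftToPositiveLongitudes(double[][] ring) {
        for (double[] point : ring) {
            if (point[0] < 0) {
                point[0] += 360;
            }
        }
    }

    public static void main(String[] args) {
        // A plain ccw square shell and its cw reversal (an OGC-style hole).
        double[][] shell = { {0, 0}, {10, 0}, {10, 10}, {0, 10}, {0, 0} };
        double[][] hole  = { {0, 0}, {0, 10}, {10, 10}, {10, 0}, {0, 0} };
        System.out.println(isCounterClockwise(shell)); // true
        System.out.println(isCounterClockwise(hole));  // false

        // The ccw shell crossing the dateline from the tests; shifting the
        // negative longitudes first removes the ambiguity before the test.
        double[][] dateline = { {-177, 10}, {176, 15}, {172, 0}, {176, -15}, {-177, -10}, {-177, 10} };
        shiftToPositiveLongitudes(dateline);
        System.out.println(isCounterClockwise(dateline)); // true
    }
}
```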
{ "body": "I encountered an issue with the systemd process ordering. The network target starts after ES then it crashes at boot time because the interface wasn't up.\n\nIn the rpm package, the systemd service doesn't ensure that the network is up before starting ES.\n\nI suggest to add an After param : \n\n[Unit]\nDescription=Starts and stops a single elasticsearch instance on this system\n<b>After=network.target</b>\nDocumentation=http://www.elasticsearch.org\n[...]\n", "comments": [ { "body": "According to http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/, this should be\n\n``` ini\n[Unit]\nDescription=Starts and stops a single elasticsearch instance on this system\nWants=network-online.target\nAfter=network-online.target\nDocumentation=http://www.elasticsearch.org\n...\n```\n\nbut things seem to be a bit more complicated. I'll check how `network-online.target` works on my Fedora 21 box.\n", "created_at": "2014-12-03T11:46:00Z" }, { "body": "Looking at `elasticsearch.service`, there's room for more improvement. In the service configuration `bin/elasticsearch` is told to fork and to provide a PID file. Both is unnecessary when systemd is available (see http://0pointer.de/blog/projects/systemd-for-admins-3.html). Part of the intention to write systemd in the first place was to move daemonizing / PID handling complexity from start scripts into the init service. So `src/rpm/systemd/elasticsearch.service` can be simplified quite a bit.\n", "created_at": "2014-12-03T11:58:43Z" }, { "body": "Changes are implemented, tested w/ Fedora 21, works as expected.\n", "created_at": "2014-12-03T13:07:40Z" }, { "body": "Thanks for reporting and your contribution.\n\nI agree with @t-lo it should be `network-online.target` and it how it is configured now (see https://github.com/elastic/elasticsearch/blob/master/core/src/packaging/common/systemd/elasticsearch.service). It has been tested on various distributions including Fedora21, OpenSUSE13, Ubuntu15.04, Debian8, CentOS7.\n\nClosing this one, feel free to reopen if needed.\n", "created_at": "2015-06-17T07:29:38Z" } ], "number": 8636, "title": "Ensure the service is started after network is up" }
{ "body": "This change updates the systemd elasticsearch service configuration to depend\non boot-time networking being available before ES is started (see\nhttp://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ for a\ndiscussion of the specific network-online target used in the service file).\n\nAlso, the ES script does not fork / daemonize when started by systemd since\nthis is the recommended behaviour for systemd services (see\nhttp://0pointer.de/blog/projects/systemd-for-admins-3.html).\n\nfixes #8636\n\nSigned-off-by: Thilo Fromm github@thilo-fromm.de\n", "number": 8761, "review_comments": [], "title": "systemd service: wait for networking, don't daemonize" }
{ "commits": [ { "message": "systemd service: wait for networking, don't daemonize\n\nThis change updates the systemd elasticsearch service configuration to depend\non boot-time networking being available before ES is started (see\nhttp://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ for a\ndiscussion of the specific network-online target used in the service file).\n\nAlso, the ES script does not fork / daemonize when started by systemd since\nthis is the recommended behaviour for systemd services (see\nhttp://0pointer.de/blog/projects/systemd-for-admins-3.html).\n\nfixes #8636\n\nSigned-off-by: Thilo Fromm <github@thilo-fromm.de>" } ], "files": [ { "diff": "@@ -1,14 +1,20 @@\n [Unit]\n Description=Starts and stops a single elasticsearch instance on this system\n Documentation=http://www.elasticsearch.org\n+Wants=network-online.target\n+After=network-online.target\n \n [Service]\n-Type=forking\n EnvironmentFile=/etc/sysconfig/elasticsearch\n User=elasticsearch\n Group=elasticsearch\n-PIDFile=/var/run/elasticsearch/elasticsearch.pid\n-ExecStart=/usr/share/elasticsearch/bin/elasticsearch -d -p /var/run/elasticsearch/elasticsearch.pid -Des.default.config=$CONF_FILE -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.path.work=$WORK_DIR -Des.default.path.conf=$CONF_DIR\n+ExecStart=/usr/share/elasticsearch/bin/elasticsearch \\\n+ -Des.default.config=$CONF_FILE \\\n+ -Des.default.path.home=$ES_HOME \\\n+ -Des.default.path.logs=$LOG_DIR \\\n+ -Des.default.path.data=$DATA_DIR \\\n+ -Des.default.path.work=$WORK_DIR \\\n+ -Des.default.path.conf=$CONF_DIR\n # See MAX_OPEN_FILES in sysconfig\n LimitNOFILE=65535\n # See MAX_LOCKED_MEMORY in sysconfig, use \"infinity\" when MAX_LOCKED_MEMORY=unlimited and using bootstrap.mlockall: true", "filename": "src/rpm/systemd/elasticsearch.service", "status": "modified" } ] }
{ "body": "trappy `FilterOutputStream` writes every byte in `public void write(byte[] b, int off, int len)` which leads to bad perf... This was introduced in 1.4.0\n", "comments": [], "number": 8748, "title": "Snapshot/Restore: FilterOutputStream in FsBlobContainer writes every individual byte " }
{ "body": "Closes #8748\n", "number": 8749, "review_comments": [], "title": "Override write(byte[] b, int off, int len) in FilterOutputStream for better performance" }
{ "commits": [ { "message": "Override write(byte[] b, int off, int len) in FilterOutputStream for better performance\n\nCloses #8748" } ], "files": [ { "diff": "@@ -89,6 +89,10 @@ public InputStream openInput(String name) throws IOException {\n public OutputStream createOutput(String blobName) throws IOException {\n final File file = new File(path, blobName);\n return new BufferedOutputStream(new FilterOutputStream(new FileOutputStream(file)) {\n+\n+ @Override // FilterOutputStream#write(byte[] b, int off, int len) is trappy writes every single byte\n+ public void write(byte[] b, int off, int len) throws IOException { out.write(b, off, len);}\n+\n @Override\n public void close() throws IOException {\n super.close();", "filename": "src/main/java/org/elasticsearch/common/blobstore/fs/FsBlobContainer.java", "status": "modified" } ] }
{ "body": "The effect is very similar to https://github.com/elasticsearch/elasticsearch/pull/5623\nWhen a document is indexed that does have a dynamic field then the indexing fails as expected. \nHowever, the type is created locally in the mapper service of the node but never updated on master, see https://github.com/brwe/elasticsearch/commit/340f5c5de207a802085f23aeb984dbd98349301a#diff-defbaaff93b959a2f9a93e7167f6f345R165\n\nThis can cause several problems:\n1. `_default_` mappings are applied locally and can potentially later not be updated anymore, see https://github.com/brwe/elasticsearch/commit/340f5c5de207a802085f23aeb984dbd98349301a#diff-ed65252ffbbf8656bf257a8cd6251420R68 (thanks @pkoenig10 for the test, https://github.com/elasticsearch/elasticsearch/issues/8423#issuecomment-64395503)\n2. Mappings that were created via `_default_` mappings when indexing a document can be lost, see https://github.com/brwe/elasticsearch/commit/340f5c5de207a802085f23aeb984dbd98349301a#diff-defbaaff93b959a2f9a93e7167f6f345R187\n", "comments": [ { "body": "Will this also be fixed on the 1.4 branch?\n", "created_at": "2014-11-27T09:28:49Z" }, { "body": "Yes. \n\nTo fix this, there is two options:\n1. make sure the type is not created if indexing fails\n2. update the mapping on master even if indexing of doc failed\n\nOption 1 is rather tricky to implement and I do not see why the type should not be created in the mapping so I'll make a pr for option 2 shortly.\n", "created_at": "2014-11-27T17:30:39Z" }, { "body": "I just checked 0992e7f, but https://gist.github.com/miccon/a4869fe04f9010015861 still fails.\n\nI suppose that when the mapping is created after the failed indexing request, the _all mapping is set to true (by default) and then the _all cannot by set to false anymore.\n\nIMHO not creating the type at all would be the cleaner solution, because even when the default mapping is set to strict creating the type can prevent you from updating the mapping later on (as the _all mapping is created automatically).\n", "created_at": "2014-11-28T08:15:37Z" }, { "body": "I pushed the fix for the lost mappings but as @rjernst pointed out, not updating the mapping can only be done once https://github.com/elasticsearch/elasticsearch/issues/9365 is done so I'll leave this issue open.\n", "created_at": "2015-02-24T16:57:49Z" }, { "body": "fixed by https://github.com/elastic/elasticsearch/pull/10634\n", "created_at": "2015-05-22T12:27:16Z" } ], "number": 8650, "title": "Mapping potentially lost with `\"dynamic\" : \"strict\"`, `_default_` mapping and failed document index" }
{ "body": "When indexing of a document with a type that is not in the mappings fails,\nfor example because \"dynamic\": \"strict\" but doc contains a new field,\nthen the type is still created on the node that executed the indexing request.\nHowever, the change was never added to the cluster state.\nThis commit makes sure mapping updates are always added to the cluster state\neven if indexing of a document fails.\n\ncloses #8650\n", "number": 8692, "review_comments": [ { "body": "You can just use `setSource(\"test\", \"test\")`?\n", "created_at": "2014-12-02T22:15:54Z" }, { "body": "I like to have a comment like `//expected` in the catch here? And nitpick: move the opening brace up to match the style here...\n", "created_at": "2014-12-02T22:17:10Z" }, { "body": "Why is this nullable when there is an assert forbidding null in the ctor?\n", "created_at": "2015-02-23T22:44:44Z" }, { "body": "Do we have any other exception classes that don't end in \"Exception\"? We should probably stick with that naming scheme..\n", "created_at": "2015-02-23T22:45:18Z" }, { "body": "ignore, im blind. :)\n", "created_at": "2015-02-23T22:45:56Z" }, { "body": "And I see that this was just extracted out from an inner class, but I think making it public means we should be better with the name.\n", "created_at": "2015-02-23T22:46:54Z" }, { "body": "I'm not seeing where this is used after it is set?\n", "created_at": "2015-02-23T22:49:49Z" } ], "title": "Update cluster state with type mapping also for failed indexing request" }
{ "commits": [ { "message": "mappings: update cluster state with type mapping also for failed indexing request\n\nWhen indexing of a document with a type that is not in the mappings fails,\nfor example because \"dynamic\": \"strict\" but doc contains a new field,\nthen the type is still created on the node that executed the indexing request.\nHowever, the change was never added to the cluster state.\nThis commit makes sure mapping updates are always added to the cluster state\neven if indexing of a document fails.\n\ncloses #8650" }, { "message": "another test and fix for partial parsed docs" }, { "message": "remve unused var" }, { "message": "rename WriteFailure -> WriteFailureException" }, { "message": "Revert \"another test and fix for partial parsed docs\"\n\nThis reverts commit 86a58d5eb50657508c618c2807e1dbbef9c96cdc." } ], "files": [ { "diff": "@@ -0,0 +1,40 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.action;\n+\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.ElasticsearchWrapperException;\n+import org.elasticsearch.common.Nullable;\n+\n+\n+public class WriteFailureException extends ElasticsearchException implements ElasticsearchWrapperException {\n+ @Nullable\n+ private final String mappingTypeToUpdate;\n+\n+ public WriteFailureException(Throwable cause, String mappingTypeToUpdate) {\n+ super(null, cause);\n+ assert cause != null;\n+ this.mappingTypeToUpdate = mappingTypeToUpdate;\n+ }\n+\n+ public String getMappingTypeToUpdate() {\n+ return mappingTypeToUpdate;\n+ }\n+}", "filename": "src/main/java/org/elasticsearch/action/WriteFailureException.java", "status": "added" }, { "diff": "@@ -22,11 +22,11 @@\n import com.google.common.collect.Sets;\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchIllegalStateException;\n-import org.elasticsearch.ElasticsearchWrapperException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.ActionWriteResponse;\n import org.elasticsearch.action.RoutingMissingException;\n+import org.elasticsearch.action.WriteFailureException;\n import org.elasticsearch.action.delete.DeleteRequest;\n import org.elasticsearch.action.delete.DeleteResponse;\n import org.elasticsearch.action.index.IndexRequest;\n@@ -42,7 +42,6 @@\n import org.elasticsearch.cluster.action.shard.ShardStateAction;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.cluster.routing.ShardIterator;\n-import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.collect.Tuple;\n import org.elasticsearch.common.inject.Inject;\n@@ -161,9 +160,9 @@ 
protected Tuple<BulkShardResponse, BulkShardRequest> shardOperationOnPrimary(Clu\n }\n ops[requestIndex] = result.op;\n }\n- } catch (WriteFailure e) {\n- if (e.mappingTypeToUpdate != null) {\n- mappingTypesToUpdate.add(e.mappingTypeToUpdate);\n+ } catch (WriteFailureException e) {\n+ if (e.getMappingTypeToUpdate() != null) {\n+ mappingTypesToUpdate.add(e.getMappingTypeToUpdate());\n }\n throw e.getCause();\n }\n@@ -397,17 +396,6 @@ <T extends ActionWriteResponse> T response() {\n \n }\n \n- static class WriteFailure extends ElasticsearchException implements ElasticsearchWrapperException {\n- @Nullable\n- final String mappingTypeToUpdate;\n-\n- WriteFailure(Throwable cause, String mappingTypeToUpdate) {\n- super(null, cause);\n- assert cause != null;\n- this.mappingTypeToUpdate = mappingTypeToUpdate;\n- }\n- }\n-\n private WriteResult shardIndexOperation(BulkShardRequest request, IndexRequest indexRequest, ClusterState clusterState,\n IndexShard indexShard, boolean processed) {\n \n@@ -457,7 +445,7 @@ private WriteResult shardIndexOperation(BulkShardRequest request, IndexRequest i\n indexRequest.versionType(indexRequest.versionType().versionTypeForReplicationAndRecovery());\n indexRequest.version(version);\n } catch (Throwable t) {\n- throw new WriteFailure(t, mappingTypeToUpdate);\n+ throw new WriteFailureException(t, mappingTypeToUpdate);\n }\n \n assert indexRequest.versionType().validateVersionForWrites(indexRequest.version());", "filename": "src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.RoutingMissingException;\n+import org.elasticsearch.action.WriteFailureException;\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;\n import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction;\n@@ -40,6 +41,7 @@\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.SourceToParse;\n import org.elasticsearch.index.IndexService;\n import org.elasticsearch.index.shard.IndexShard;\n@@ -167,7 +169,7 @@ protected ShardIterator shards(ClusterState clusterState, InternalRequest reques\n }\n \n @Override\n- protected Tuple<IndexResponse, IndexRequest> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) {\n+ protected Tuple<IndexResponse, IndexRequest> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) throws Throwable {\n final IndexRequest request = shardRequest.request;\n \n // validate, if routing is required, that we got routing\n@@ -185,39 +187,49 @@ protected Tuple<IndexResponse, IndexRequest> shardOperationOnPrimary(ClusterStat\n .routing(request.routing()).parent(request.parent()).timestamp(request.timestamp()).ttl(request.ttl());\n long version;\n boolean created;\n- if (request.opType() == IndexRequest.OpType.INDEX) {\n- Engine.Index index = indexShard.prepareIndex(sourceToParse, request.version(), request.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates());\n- if (index.parsedDoc().mappingsModified()) {\n- mappingUpdatedAction.updateMappingOnMaster(shardRequest.shardId.getIndex(), 
index.docMapper(), indexService.indexUUID());\n- }\n- indexShard.index(index);\n- version = index.version();\n- created = index.created();\n- } else {\n- Engine.Create create = indexShard.prepareCreate(sourceToParse,\n- request.version(), request.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates(), request.autoGeneratedId());\n- if (create.parsedDoc().mappingsModified()) {\n- mappingUpdatedAction.updateMappingOnMaster(shardRequest.shardId.getIndex(), create.docMapper(), indexService.indexUUID());\n+\n+ try {\n+ if (request.opType() == IndexRequest.OpType.INDEX) {\n+ Engine.Index index = indexShard.prepareIndex(sourceToParse, request.version(), request.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates());\n+ if (index.parsedDoc().mappingsModified()) {\n+ mappingUpdatedAction.updateMappingOnMaster(shardRequest.shardId.getIndex(), index.docMapper(), indexService.indexUUID());\n+ }\n+ indexShard.index(index);\n+ version = index.version();\n+ created = index.created();\n+ } else {\n+ Engine.Create create = indexShard.prepareCreate(sourceToParse,\n+ request.version(), request.versionType(), Engine.Operation.Origin.PRIMARY, request.canHaveDuplicates(), request.autoGeneratedId());\n+ if (create.parsedDoc().mappingsModified()) {\n+ mappingUpdatedAction.updateMappingOnMaster(shardRequest.shardId.getIndex(), create.docMapper(), indexService.indexUUID());\n+ }\n+ indexShard.create(create);\n+ version = create.version();\n+ created = true;\n }\n- indexShard.create(create);\n- version = create.version();\n- created = true;\n- }\n- if (request.refresh()) {\n- try {\n- indexShard.refresh(\"refresh_flag_index\");\n- } catch (Throwable e) {\n- // ignore\n+ if (request.refresh()) {\n+ try {\n+ indexShard.refresh(\"refresh_flag_index\");\n+ } catch (Throwable e) {\n+ // ignore\n+ }\n }\n- }\n-\n- // update the version on the request, so it will be used for the replicas\n- request.version(version);\n- request.versionType(request.versionType().versionTypeForReplicationAndRecovery());\n \n- assert request.versionType().validateVersionForWrites(request.version());\n+ // update the version on the request, so it will be used for the replicas\n+ request.version(version);\n+ request.versionType(request.versionType().versionTypeForReplicationAndRecovery());\n \n- return new Tuple<>(new IndexResponse(shardRequest.shardId.getIndex(), request.type(), request.id(), version, created), shardRequest.request);\n+ assert request.versionType().validateVersionForWrites(request.version());\n+ return new Tuple<>(new IndexResponse(shardRequest.shardId.getIndex(), request.type(), request.id(), version, created), shardRequest.request);\n+ } catch (WriteFailureException e) {\n+ if (e.getMappingTypeToUpdate() != null){\n+ DocumentMapper docMapper = indexService.mapperService().documentMapper(e.getMappingTypeToUpdate());\n+ if (docMapper != null) {\n+ mappingUpdatedAction.updateMappingOnMaster(indexService.index().name(), docMapper, indexService.indexUUID());\n+ }\n+ }\n+ throw e.getCause();\n+ }\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/action/index/TransportIndexAction.java", "status": "modified" }, { "diff": "@@ -117,7 +117,7 @@ protected void doExecute(Request request, ActionListener<Response> listener) {\n * @return A tuple containing not null values, as first value the result of the primary operation and as second value\n * the request to be executed on the replica shards.\n */\n- protected abstract Tuple<Response, ReplicaRequest> 
shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest);\n+ protected abstract Tuple<Response, ReplicaRequest> shardOperationOnPrimary(ClusterState clusterState, PrimaryOperationRequest shardRequest) throws Throwable;\n \n protected abstract void shardOperationOnReplica(ReplicaOperationRequest shardRequest);\n ", "filename": "src/main/java/org/elasticsearch/action/support/replication/TransportShardReplicationOperationAction.java", "status": "modified" }, { "diff": "@@ -34,7 +34,7 @@\n import org.elasticsearch.ElasticsearchIllegalStateException;\n import org.elasticsearch.action.admin.indices.flush.FlushRequest;\n import org.elasticsearch.action.admin.indices.optimize.OptimizeRequest;\n-import org.elasticsearch.cluster.metadata.IndexMetaData;\n+import org.elasticsearch.action.WriteFailureException;\n import org.elasticsearch.cluster.routing.ShardRouting;\n import org.elasticsearch.cluster.routing.ShardRoutingState;\n import org.elasticsearch.common.Booleans;\n@@ -64,6 +64,12 @@\n import org.elasticsearch.index.deletionpolicy.SnapshotDeletionPolicy;\n import org.elasticsearch.index.deletionpolicy.SnapshotIndexCommit;\n import org.elasticsearch.index.engine.*;\n+import org.elasticsearch.index.engine.Engine;\n+import org.elasticsearch.index.engine.EngineClosedException;\n+import org.elasticsearch.index.engine.EngineException;\n+import org.elasticsearch.index.engine.IgnoreOnRecoveryEngineException;\n+import org.elasticsearch.index.engine.RefreshFailedEngineException;\n+import org.elasticsearch.index.engine.SegmentsStats;\n import org.elasticsearch.index.fielddata.FieldDataStats;\n import org.elasticsearch.index.fielddata.IndexFieldDataService;\n import org.elasticsearch.index.fielddata.ShardFieldData;\n@@ -409,8 +415,16 @@ private IndexShardState changeState(IndexShardState newState, String reason) {\n public Engine.Create prepareCreate(SourceToParse source, long version, VersionType versionType, Engine.Operation.Origin origin, boolean canHaveDuplicates, boolean autoGeneratedId) throws ElasticsearchException {\n long startTime = System.nanoTime();\n Tuple<DocumentMapper, Boolean> docMapper = mapperService.documentMapperWithAutoCreate(source.type());\n- ParsedDocument doc = docMapper.v1().parse(source).setMappingsModified(docMapper);\n- return new Engine.Create(docMapper.v1(), docMapper.v1().uidMapper().term(doc.uid().stringValue()), doc, version, versionType, origin, startTime, state != IndexShardState.STARTED || canHaveDuplicates, autoGeneratedId);\n+ try {\n+ ParsedDocument doc = docMapper.v1().parse(source).setMappingsModified(docMapper);\n+ return new Engine.Create(docMapper.v1(), docMapper.v1().uidMapper().term(doc.uid().stringValue()), doc, version, versionType, origin, startTime, state != IndexShardState.STARTED || canHaveDuplicates, autoGeneratedId);\n+ } catch (Throwable t) {\n+ if (docMapper.v2()) {\n+ throw new WriteFailureException(t, docMapper.v1().type());\n+ } else {\n+ throw t;\n+ }\n+ }\n }\n \n public ParsedDocument create(Engine.Create create) throws ElasticsearchException {\n@@ -434,8 +448,16 @@ public ParsedDocument create(Engine.Create create) throws ElasticsearchException\n public Engine.Index prepareIndex(SourceToParse source, long version, VersionType versionType, Engine.Operation.Origin origin, boolean canHaveDuplicates) throws ElasticsearchException {\n long startTime = System.nanoTime();\n Tuple<DocumentMapper, Boolean> docMapper = mapperService.documentMapperWithAutoCreate(source.type());\n- ParsedDocument doc = 
docMapper.v1().parse(source).setMappingsModified(docMapper);\n- return new Engine.Index(docMapper.v1(), docMapper.v1().uidMapper().term(doc.uid().stringValue()), doc, version, versionType, origin, startTime, state != IndexShardState.STARTED || canHaveDuplicates);\n+ try {\n+ ParsedDocument doc = docMapper.v1().parse(source).setMappingsModified(docMapper);\n+ return new Engine.Index(docMapper.v1(), docMapper.v1().uidMapper().term(doc.uid().stringValue()), doc, version, versionType, origin, startTime, state != IndexShardState.STARTED || canHaveDuplicates);\n+ } catch (Throwable t) {\n+ if (docMapper.v2()) {\n+ throw new WriteFailureException(t, docMapper.v1().type());\n+ } else {\n+ throw t;\n+ }\n+ }\n }\n \n public ParsedDocument index(Engine.Index index) throws ElasticsearchException {", "filename": "src/main/java/org/elasticsearch/index/shard/IndexShard.java", "status": "modified" }, { "diff": "@@ -0,0 +1,76 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.index.mapper.dynamic;\n+\n+import com.google.common.base.Predicate;\n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.client.Client;\n+import org.elasticsearch.index.mapper.StrictDynamicMappingException;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.junit.Test;\n+\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+\n+\n+public class DynamicMappingIntegrationTests extends ElasticsearchIntegrationTest {\n+\n+ // https://github.com/elasticsearch/elasticsearch/issues/8423#issuecomment-64229717\n+ @Test\n+ public void testStrictAllMapping() throws Exception {\n+ String defaultMapping = jsonBuilder().startObject().startObject(\"_default_\")\n+ .field(\"dynamic\", \"strict\")\n+ .endObject().endObject().string();\n+ client().admin().indices().prepareCreate(\"index\").addMapping(\"_default_\", defaultMapping).get();\n+\n+ try {\n+ client().prepareIndex(\"index\", \"type\", \"id\").setSource(\"test\", \"test\").get();\n+ fail();\n+ } catch (StrictDynamicMappingException ex) {\n+ // this should not be created dynamically so we expect this exception\n+ }\n+ awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(java.lang.Object input) {\n+ GetMappingsResponse currentMapping = client().admin().indices().prepareGetMappings(\"index\").get();\n+ return currentMapping.getMappings().get(\"index\").get(\"type\") != null;\n+ }\n+ });\n+\n+ String docMapping = jsonBuilder().startObject().startObject(\"type\")\n+ .startObject(\"_all\")\n+ .field(\"enabled\", false)\n+ .endObject()\n+ .endObject().endObject().string();\n+ try {\n+ client().admin().indices()\n+ .preparePutMapping(\"index\")\n+ .setType(\"type\")\n+ 
.setSource(docMapping).get();\n+ fail();\n+ } catch (Exception e) {\n+ // the mapping was created anyway with _all enabled: true, although the index request fails so we expect the update to fail\n+ }\n+\n+ // make sure type was created\n+ for (Client client : cluster()) {\n+ GetMappingsResponse mapping = client.admin().indices().prepareGetMappings(\"index\").setLocal(true).get();\n+ assertNotNull(mapping.getMappings().get(\"index\").get(\"type\"));\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/mapper/dynamic/DynamicMappingIntegrationTests.java", "status": "added" }, { "diff": "@@ -18,6 +18,10 @@\n */\n package org.elasticsearch.index.mapper.dynamic;\n \n+import com.google.common.base.Predicate;\n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.index.mapper.DocumentMapper;\n import org.elasticsearch.index.mapper.FieldMappers;\n@@ -29,14 +33,15 @@\n \n import java.io.IOException;\n \n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.nullValue;\n \n public class DynamicMappingTests extends ElasticsearchSingleNodeTest {\n \n @Test\n public void testDynamicTrue() throws IOException {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ String mapping = jsonBuilder().startObject().startObject(\"type\")\n .field(\"dynamic\", \"true\")\n .startObject(\"properties\")\n .startObject(\"field1\").field(\"type\", \"string\").endObject()\n@@ -45,7 +50,7 @@ public void testDynamicTrue() throws IOException {\n \n DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n- ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", jsonBuilder()\n .startObject()\n .field(\"field1\", \"value1\")\n .field(\"field2\", \"value2\")\n@@ -57,7 +62,7 @@ public void testDynamicTrue() throws IOException {\n \n @Test\n public void testDynamicFalse() throws IOException {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ String mapping = jsonBuilder().startObject().startObject(\"type\")\n .field(\"dynamic\", \"false\")\n .startObject(\"properties\")\n .startObject(\"field1\").field(\"type\", \"string\").endObject()\n@@ -66,7 +71,7 @@ public void testDynamicFalse() throws IOException {\n \n DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n- ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", jsonBuilder()\n .startObject()\n .field(\"field1\", \"value1\")\n .field(\"field2\", \"value2\")\n@@ -79,7 +84,7 @@ public void testDynamicFalse() throws IOException {\n \n @Test\n public void testDynamicStrict() throws IOException {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ String mapping = jsonBuilder().startObject().startObject(\"type\")\n .field(\"dynamic\", \"strict\")\n .startObject(\"properties\")\n .startObject(\"field1\").field(\"type\", \"string\").endObject()\n@@ -89,7 +94,7 @@ public void testDynamicStrict() throws IOException {\n 
DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n try {\n- defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ defaultMapper.parse(\"type\", \"1\", jsonBuilder()\n .startObject()\n .field(\"field1\", \"value1\")\n .field(\"field2\", \"value2\")\n@@ -113,7 +118,7 @@ public void testDynamicStrict() throws IOException {\n \n @Test\n public void testDynamicFalseWithInnerObjectButDynamicSetOnRoot() throws IOException {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ String mapping = jsonBuilder().startObject().startObject(\"type\")\n .field(\"dynamic\", \"false\")\n .startObject(\"properties\")\n .startObject(\"obj1\").startObject(\"properties\")\n@@ -124,7 +129,7 @@ public void testDynamicFalseWithInnerObjectButDynamicSetOnRoot() throws IOExcept\n \n DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n- ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ ParsedDocument doc = defaultMapper.parse(\"type\", \"1\", jsonBuilder()\n .startObject().startObject(\"obj1\")\n .field(\"field1\", \"value1\")\n .field(\"field2\", \"value2\")\n@@ -137,7 +142,7 @@ public void testDynamicFalseWithInnerObjectButDynamicSetOnRoot() throws IOExcept\n \n @Test\n public void testDynamicStrictWithInnerObjectButDynamicSetOnRoot() throws IOException {\n- String mapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\")\n+ String mapping = jsonBuilder().startObject().startObject(\"type\")\n .field(\"dynamic\", \"strict\")\n .startObject(\"properties\")\n .startObject(\"obj1\").startObject(\"properties\")\n@@ -149,7 +154,7 @@ public void testDynamicStrictWithInnerObjectButDynamicSetOnRoot() throws IOExcep\n DocumentMapper defaultMapper = createIndex(\"test\").mapperService().documentMapperParser().parse(mapping);\n \n try {\n- defaultMapper.parse(\"type\", \"1\", XContentFactory.jsonBuilder()\n+ defaultMapper.parse(\"type\", \"1\", jsonBuilder()\n .startObject().startObject(\"obj1\")\n .field(\"field1\", \"value1\")\n .field(\"field2\", \"value2\")\n@@ -167,4 +172,74 @@ public void testDynamicMappingOnEmptyString() throws Exception {\n FieldMappers mappers = service.mapperService().indexName(\"empty_field\");\n assertTrue(mappers != null && mappers.isEmpty() == false);\n }\n+\n+ @Test\n+ public void testIndexingFailureDoesNotCreateType() throws IOException, InterruptedException {\n+ XContentBuilder mapping = jsonBuilder().startObject().startObject(\"_default_\")\n+ .field(\"dynamic\", \"strict\")\n+ .endObject().endObject();\n+\n+ IndexService indexService = createIndex(\"test\", ImmutableSettings.EMPTY, \"_default_\", mapping);\n+\n+ try {\n+ client().prepareIndex().setIndex(\"test\").setType(\"type\").setSource(jsonBuilder().startObject().field(\"test\", \"test\").endObject()).get();\n+ fail();\n+ } catch (StrictDynamicMappingException e) {\n+\n+ }\n+ awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(java.lang.Object input) {\n+ GetMappingsResponse currentMapping = client().admin().indices().prepareGetMappings(\"test\").get();\n+ return currentMapping.getMappings().get(\"test\").get(\"type\") != null;\n+ }\n+ });\n+\n+ GetMappingsResponse getMappingsResponse = client().admin().indices().prepareGetMappings(\"test\").get();\n+ assertNotNull(getMappingsResponse.getMappings().get(\"test\").get(\"type\"));\n+ DocumentMapper mapper = 
indexService.mapperService().documentMapper(\"type\");\n+ assertNotNull(mapper);\n+\n+ }\n+\n+ @Test\n+ public void testTypeCreatedProperly() throws IOException, InterruptedException {\n+ XContentBuilder mapping = jsonBuilder().startObject().startObject(\"_default_\")\n+ .field(\"dynamic\", \"strict\")\n+ .startObject(\"properties\")\n+ .startObject(\"test_string\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject()\n+ .endObject().endObject();\n+\n+ IndexService indexService = createIndex(\"test\", ImmutableSettings.EMPTY, \"_default_\", mapping);\n+\n+ try {\n+ client().prepareIndex().setIndex(\"test\").setType(\"type\").setSource(jsonBuilder().startObject().field(\"test\", \"test\").endObject()).get();\n+ fail();\n+ } catch (StrictDynamicMappingException e) {\n+\n+ }\n+ awaitBusy(new Predicate<Object>() {\n+ @Override\n+ public boolean apply(java.lang.Object input) {\n+ GetMappingsResponse currentMapping = client().admin().indices().prepareGetMappings(\"test\").get();\n+ return currentMapping.getMappings().get(\"test\").get(\"type\") != null;\n+ }\n+ });\n+ //type should be in mapping\n+ GetMappingsResponse getMappingsResponse = client().admin().indices().prepareGetMappings(\"test\").get();\n+ assertNotNull(getMappingsResponse.getMappings().get(\"test\").get(\"type\"));\n+\n+ client().prepareIndex().setIndex(\"test\").setType(\"type\").setSource(jsonBuilder().startObject().field(\"test_string\", \"test\").endObject()).get();\n+ client().admin().indices().prepareRefresh(\"test\").get();\n+ assertThat(client().prepareSearch(\"test\").get().getHits().getTotalHits(), equalTo(1l));\n+\n+ DocumentMapper mapper = indexService.mapperService().documentMapper(\"type\");\n+ assertNotNull(mapper);\n+\n+ getMappingsResponse = client().admin().indices().prepareGetMappings(\"test\").get();\n+ assertNotNull(getMappingsResponse.getMappings().get(\"test\").get(\"type\"));\n+ }\n }\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/index/mapper/dynamic/DynamicMappingTests.java", "status": "modified" } ] }
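The essence of the fix above, stripped of the transport-layer details: a failure while preparing the write is wrapped in the new `WriteFailureException` together with the type that was auto-created, so the caller can still publish that type to the master before rethrowing the original cause. In the sketch below, `prepareAndIndex` and `updateMappingOnMaster` are simplified placeholders standing in for the real `IndexShard` parse/prepare calls and `MappingUpdatedAction#updateMappingOnMaster`; their signatures are reduced for illustration.

```java
import org.elasticsearch.action.WriteFailureException;
import org.elasticsearch.action.index.IndexRequest;

public abstract class PrimaryWriteSketch {

    /** Placeholder for the real IndexShard prepareIndex/index path. */
    protected abstract void prepareAndIndex(IndexRequest request) throws WriteFailureException;

    /** Placeholder for MappingUpdatedAction#updateMappingOnMaster. */
    protected abstract void updateMappingOnMaster(String type);

    public void indexOnPrimary(IndexRequest request) throws Throwable {
        try {
            prepareAndIndex(request);
        } catch (WriteFailureException e) {
            // Indexing failed, but the type may have been auto-created locally while parsing.
            // Publish it to the master anyway so local and cluster-state mappings agree
            // (the problem reported in #8650), then rethrow the original failure.
            if (e.getMappingTypeToUpdate() != null) {
                updateMappingOnMaster(e.getMappingTypeToUpdate());
            }
            throw e.getCause();
        }
    }
}
```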
{ "body": "", "comments": [ { "body": "The change looks good, maybe we should remove the query explanation from the parameters of `explainScore`? (only the score seems to be needed?)\n", "created_at": "2014-08-13T20:12:46Z" }, { "body": "Ok, I removed the query explanation from the parameters of explainScore. However, it was used by ExplainableSearchScript. But this class seems to be used nowhere, so I removed it.\n", "created_at": "2014-08-14T13:53:55Z" }, { "body": "LGTM\n", "created_at": "2014-08-18T07:29:45Z" } ], "number": 7245, "title": "Function Score: Remove explanation of query score from functions" }
{ "body": "explainable scripts were removed in #7245 but they were used.\nThis commit adds them again.\n\ncloses #8561 \n\n@ursvasan could you take a look and see if this is sufficient for your use case?\n", "number": 8665, "review_comments": [], "title": "Add explainable script again" }
{ "commits": [ { "message": "scripts: add explainable script again\n\nexplainable scripts were removed in #7245 but they were used.\nThis commit adds them again.\n\ncloses #8561" } ], "files": [ { "diff": "@@ -22,6 +22,8 @@\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.Explanation;\n \n+import java.io.IOException;\n+\n /**\n *\n */\n@@ -33,7 +35,7 @@ public abstract class ScoreFunction {\n \n public abstract double score(int docId, float subQueryScore);\n \n- public abstract Explanation explainScore(int docId, float subQueryScore);\n+ public abstract Explanation explainScore(int docId, float subQueryScore) throws IOException;\n \n public CombineFunction getDefaultScoreCombiner() {\n return scoreCombiner;", "filename": "src/main/java/org/elasticsearch/common/lucene/search/function/ScoreFunction.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import org.apache.lucene.index.LeafReaderContext;\n import org.apache.lucene.search.Explanation;\n import org.apache.lucene.search.Scorer;\n+import org.elasticsearch.script.ExplainableSearchScript;\n import org.elasticsearch.script.SearchScript;\n \n import java.io.IOException;\n@@ -100,14 +101,21 @@ public double score(int docId, float subQueryScore) {\n }\n \n @Override\n- public Explanation explainScore(int docId, float subQueryScore) {\n+ public Explanation explainScore(int docId, float subQueryScore) throws IOException {\n Explanation exp;\n- double score = score(docId, subQueryScore);\n- String explanation = \"script score function, computed with script:\\\"\" + sScript;\n- if (params != null) {\n- explanation += \"\\\" and parameters: \\n\" + params.toString();\n+ if (script instanceof ExplainableSearchScript) {\n+ script.setNextDocId(docId);\n+ scorer.docid = docId;\n+ scorer.score = subQueryScore;\n+ exp = ((ExplainableSearchScript) script).explain(subQueryScore);\n+ } else {\n+ double score = score(docId, subQueryScore);\n+ String explanation = \"script score function, computed with script:\\\"\" + sScript;\n+ if (params != null) {\n+ explanation += \"\\\" and parameters: \\n\" + params.toString();\n+ }\n+ exp = new Explanation(CombineFunction.toFloat(score), explanation);\n }\n- exp = new Explanation(CombineFunction.toFloat(score), explanation);\n return exp;\n }\n ", "filename": "src/main/java/org/elasticsearch/common/lucene/search/function/ScriptScoreFunction.java", "status": "modified" }, { "diff": "@@ -24,6 +24,8 @@\n import org.apache.lucene.search.Explanation;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n \n+import java.io.IOException;\n+\n /**\n *\n */\n@@ -63,7 +65,7 @@ public double score(int docId, float subQueryScore) {\n }\n \n @Override\n- public Explanation explainScore(int docId, float score) {\n+ public Explanation explainScore(int docId, float score) throws IOException {\n Explanation functionScoreExplanation;\n Explanation functionExplanation = scoreFunction.explainScore(docId, score);\n functionScoreExplanation = new ComplexExplanation(true, functionExplanation.getValue() * (float) getWeight(), \"product of:\");", "filename": "src/main/java/org/elasticsearch/common/lucene/search/function/WeightFactorFunction.java", "status": "modified" }, { "diff": "@@ -0,0 +1,59 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.script;\n+\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+import org.apache.lucene.search.Explanation;\n+\n+import java.io.IOException;\n+\n+/**\n+ * To be implemented by {@link SearchScript} which can provided an {@link Explanation} of the score\n+ * This is currently not used inside elasticsearch but it is used, see for example here:\n+ * https://github.com/elasticsearch/elasticsearch/issues/8561\n+ */\n+public interface ExplainableSearchScript extends SearchScript {\n+\n+ /**\n+ * Build the explanation of the current document being scored\n+ *\n+ * @param score the score\n+ */\n+ Explanation explain(float score) throws IOException;\n+\n+}\n\\ No newline at end of file", "filename": "src/main/java/org/elasticsearch/script/ExplainableSearchScript.java", "status": "added" }, { "diff": "@@ -0,0 +1,41 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.functionscore;\n+\n+import org.elasticsearch.plugins.AbstractPlugin;\n+import org.elasticsearch.script.ScriptModule;\n+\n+public class ExplainableScriptPlugin extends AbstractPlugin {\n+\n+ public ExplainableScriptPlugin() {}\n+ @Override\n+ public String name() {\n+ return \"native-explainable-script\";\n+ }\n+\n+ @Override\n+ public String description() {\n+ return \"Native explainable script\";\n+ }\n+\n+ public void onModule(ScriptModule module) {\n+ module.registerScript(\"native_explainable_script\", ExplainableScriptTests.MyNativeScriptFactory.class);\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/search/functionscore/ExplainableScriptPlugin.java", "status": "added" }, { "diff": "@@ -0,0 +1,112 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.search.functionscore;\n+\n+import org.apache.lucene.search.Explanation;\n+import org.elasticsearch.action.index.IndexRequestBuilder;\n+import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.action.search.SearchType;\n+import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.index.fielddata.ScriptDocValues;\n+import org.elasticsearch.script.AbstractDoubleSearchScript;\n+import org.elasticsearch.script.ExecutableScript;\n+import org.elasticsearch.script.ExplainableSearchScript;\n+import org.elasticsearch.script.NativeScriptFactory;\n+import org.elasticsearch.search.SearchHit;\n+import org.elasticsearch.search.SearchHits;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.elasticsearch.test.hamcrest.ElasticsearchAssertions;\n+import org.junit.Test;\n+\n+import java.io.IOException;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.Map;\n+import java.util.concurrent.ExecutionException;\n+\n+import static org.elasticsearch.client.Requests.searchRequest;\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n+import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery;\n+import static org.elasticsearch.index.query.QueryBuilders.termQuery;\n+import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.scriptFunction;\n+import static org.elasticsearch.search.builder.SearchSourceBuilder.searchSource;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n+import static org.hamcrest.Matchers.containsString;\n+import 
static org.hamcrest.Matchers.equalTo;\n+\n+@ClusterScope(scope = Scope.SUITE, numDataNodes = 1)\n+public class ExplainableScriptTests extends ElasticsearchIntegrationTest {\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return settingsBuilder()\n+ .put(super.nodeSettings(nodeOrdinal))\n+ .put(\"plugin.types\", ExplainableScriptPlugin.class.getName())\n+ .build();\n+ }\n+\n+ @Test\n+ public void testNativeExplainScript() throws InterruptedException, IOException, ExecutionException {\n+\n+ List<IndexRequestBuilder> indexRequests = new ArrayList<>();\n+ for (int i = 0; i < 20; i++) {\n+ indexRequests.add(client().prepareIndex(\"test\", \"type\").setId(Integer.toString(i)).setSource(\n+ jsonBuilder().startObject().field(\"number_field\", i).field(\"text\", \"text\").endObject()));\n+ }\n+ indexRandom(true, true, indexRequests);\n+ client().admin().indices().prepareRefresh().execute().actionGet();\n+ ensureYellow();\n+ SearchResponse response = client().search(searchRequest().searchType(SearchType.QUERY_THEN_FETCH).source(\n+ searchSource().explain(true).query(functionScoreQuery(termQuery(\"text\", \"text\")).add(scriptFunction(\"native_explainable_script\", \"native\")).boostMode(\"sum\")))).actionGet();\n+\n+ ElasticsearchAssertions.assertNoFailures(response);\n+ SearchHits hits = response.getHits();\n+ assertThat(hits.getTotalHits(), equalTo(20l));\n+ int idCounter = 19;\n+ for (SearchHit hit : hits.getHits()) {\n+ assertThat(hit.getId(), equalTo(Integer.toString(idCounter)));\n+ assertThat(hit.explanation().toString(), containsString(Double.toString(idCounter) + \" = This script returned \" + Double.toString(idCounter)));\n+ idCounter--;\n+ }\n+ }\n+\n+ static class MyNativeScriptFactory implements NativeScriptFactory {\n+ @Override\n+ public ExecutableScript newScript(@Nullable Map<String, Object> params) {\n+ return new MyScript();\n+ }\n+ }\n+\n+ static class MyScript extends AbstractDoubleSearchScript implements ExplainableSearchScript {\n+\n+ @Override\n+ public double runAsDouble() {\n+ return ((Number) ((ScriptDocValues) doc().get(\"number_field\")).getValues().get(0)).doubleValue();\n+ }\n+\n+ @Override\n+ public Explanation explain(float score) throws IOException {\n+ return new Explanation((float) (runAsDouble()), \"This script returned \" + runAsDouble());\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/search/functionscore/ExplainableScriptTests.java", "status": "added" } ] }
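Condensing the test classes above into the minimal pieces a plugin needs: a `NativeScriptFactory` (registered through `ScriptModule#registerScript` in the plugin's `onModule` hook, as `ExplainableScriptPlugin` does) whose script extends `AbstractDoubleSearchScript` and also implements the restored `ExplainableSearchScript`. Names such as `MyFactory`, `my_explainable_script` and the `number_field` field are illustrative; everything else mirrors the test.

```java
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.search.Explanation;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.index.fielddata.ScriptDocValues;
import org.elasticsearch.script.AbstractDoubleSearchScript;
import org.elasticsearch.script.ExecutableScript;
import org.elasticsearch.script.ExplainableSearchScript;
import org.elasticsearch.script.NativeScriptFactory;

// Register in a plugin via: module.registerScript("my_explainable_script", MyFactory.class);
public class MyFactory implements NativeScriptFactory {
    @Override
    public ExecutableScript newScript(@Nullable Map<String, Object> params) {
        return new MyExplainableScript();
    }

    static class MyExplainableScript extends AbstractDoubleSearchScript implements ExplainableSearchScript {
        @Override
        public double runAsDouble() {
            // score by the value of a numeric doc value; "number_field" is the field the test uses
            return ((Number) ((ScriptDocValues) doc().get("number_field")).getValues().get(0)).doubleValue();
        }

        @Override
        public Explanation explain(float subQueryScore) throws IOException {
            // whatever is returned here ends up verbatim in the hit's _explanation
            return new Explanation((float) runAsDouble(), "scored by number_field = " + runAsDouble());
        }
    }
}
```

As the test shows, the custom explanation only surfaces when the search request sets `explain(true)` and references the script via `scriptFunction("...", "native")` inside a `function_score` query.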
{ "body": "I have a relatively large cluster running ES 1.3.5 and we recently started to get low on disk space. The cluster has roughly 110 TB usable space of which about 86TB is used. We have 93 indices of which 90 are rotating daily indices and the other 3 are permanent. With this setup we end up with shards that range in size from a few MB to over 160GB and we're finding the shard count based allocation strategy results in some nodes starting to run out of space earlier than expected which trips the cluster.routing.allocation.disk.watermark.high setting.\n\nIt appears when this happens that \"all\" of the shards from that node are relocated. This then puts more pressure on the other nodes and at some point another node will trip cluster.routing.allocation.disk.watermark.high and relocate all of its shards too. This then goes on and on and on and never stops until we intercede and manually cancel the allocations. At times we've seen more than 200 relocations in progress at the same time.\n\nIs the relocation of \"all\" shards really the intended behavior? In looking at the source code it appears to do a reroute in this scenario but this seems like the last thing you would want to do. You're almost guaranteed that if one node is tripping that setting then others are going to be close as well. In our cluster we have room on many of the other nodes and if it selectively moved a shard or two everything would be fine but moving all of them is really problematic.\n\nIt also seems that when all these relocations occur that cluster.routing.allocation.disk.watermark.low doesn't prevent the overallocation of other nodes. I believe this is occurring because of the number of relocations that are started at once so the node gets asked multiple times to accept shards within a short time period and all requests succeed because no pending relocations are considered in respect to disk utilization. This pretty much guarantees that node will eventually trip cluster.routing.allocation.disk.watermark.high and continue the cycle even though there are other nodes in the cluster with hundreds of GB free. \n\nWe're in the process of adding additional capacity to our cluster so we're not at risk of bumping into these limits but otherwise this behavior seems quite problematic.\n", "comments": [ { "body": "Okay, this looks like this is because the `DiskThresholdDecider`'s `canRemain` method needs to take currently relocating shards into account, subtracting them from the limit when deciding whether to relocate more shards.\n\nWe currently take relocations _in_ to the node into account, now we need to take relocations _out_ of the node into account, and only in the `canRemain` method.\n", "created_at": "2014-11-19T17:01:37Z" }, { "body": "@dakrone I think we only need to change the default for `cluster.routing.allocation.disk.include_relocations` here which is true. we can also think about adding an enum here where you can specify where you wanna take it into account like \n`cluster.routing.allocation.disk.include_relocations = never | allocation | moves` the naming is hard here I guess.. :)\n", "created_at": "2014-11-23T12:54:14Z" }, { "body": "@s1monw I don't follow how that will help? 
I think it will still see all shards as failing the `canRemain()` decision and relocating them away from the node that goes over the high watermark\n", "created_at": "2014-11-24T11:04:19Z" }, { "body": "oh well I misunderstood the code - it only takes the incoming into account :/\n", "created_at": "2014-11-24T15:47:39Z" } ], "number": 8538, "title": "disk.watermark.high relocates all shards creating a relocation storm" }
{ "body": "Let the disk threshold decider take into account shards moving away from a node in order to determine if a shard can remain.\n\nBy taking this into account we can prevent that we move too many shards away than is necessary.\n\nPR for #8538\n", "number": 8659, "review_comments": [ { "body": "You put `[test][0][p]` and `[test][1][p]`, but then create metadata with 1 primary shard and one replica, can you change it here or in the shardSizes map?\n", "created_at": "2014-11-25T20:16:10Z" }, { "body": "Can you add a `strategy.reroute` call in this test at the end to ensure that the cluster rerouting does the right thing with regard to one of the shards? (marking it as RELOCATING and **not** relocating the other shard because it's taking the relocation into account)\n", "created_at": "2014-11-25T20:19:21Z" }, { "body": "can we name this `subtractShardsMovingAway` ? it's clear what it means\n", "created_at": "2014-11-25T21:58:22Z" }, { "body": "subtractShardsMovingAwayRen -> subtractShardsMovingAway\n", "created_at": "2014-11-26T08:36:15Z" } ], "title": "DiskThresholdDecider#remain(...) should take shards relocating away into account" }
{ "commits": [ { "message": "Core: Let the disk threshold decider take into account shards moving away from a node in order to determine if a shard can remain.\n\nBy taking this into account we can prevent that we move too many shards away than is necessary.\n\nCloses #8538\nCloses #8659" } ], "files": [ { "diff": "@@ -223,20 +223,28 @@ public DiskThresholdDecider(Settings settings, NodeSettingsService nodeSettingsS\n /**\n * Returns the size of all shards that are currently being relocated to\n * the node, but may not be finished transfering yet.\n+ *\n+ * If subtractShardsMovingAway is set then the size of shards moving away is subtracted from the total size\n+ * of all shards\n */\n- public long sizeOfRelocatingShards(RoutingNode node, RoutingAllocation allocation, Map<String, Long> shardSizes) {\n+ public long sizeOfRelocatingShards(RoutingNode node, RoutingAllocation allocation, Map<String, Long> shardSizes, boolean subtractShardsMovingAway) {\n List<ShardRouting> relocatingShards = allocation.routingTable().shardsWithState(ShardRoutingState.RELOCATING);\n long totalSize = 0;\n for (ShardRouting routing : relocatingShards) {\n if (routing.relocatingNodeId().equals(node.nodeId())) {\n- Long shardSize = shardSizes.get(shardIdentifierFromRouting(routing));\n- shardSize = shardSize == null ? 0 : shardSize;\n- totalSize += shardSize;\n+ totalSize += getShardSize(routing, shardSizes);\n+ } else if (subtractShardsMovingAway && routing.currentNodeId().equals(node.nodeId())) {\n+ totalSize -= getShardSize(routing, shardSizes);\n }\n }\n return totalSize;\n }\n \n+ private long getShardSize(ShardRouting routing, Map<String, Long> shardSizes) {\n+ Long shardSize = shardSizes.get(shardIdentifierFromRouting(routing));\n+ return shardSize == null ? 0 : shardSize;\n+ }\n+\n public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, RoutingAllocation allocation) {\n \n // Always allow allocation if the decider is disabled\n@@ -283,7 +291,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing\n }\n \n if (includeRelocations) {\n- long relocatingShardsSize = sizeOfRelocatingShards(node, allocation, shardSizes);\n+ long relocatingShardsSize = sizeOfRelocatingShards(node, allocation, shardSizes, false);\n DiskUsage usageIncludingRelocations = new DiskUsage(node.nodeId(), node.node().name(),\n usage.getTotalBytes(), usage.getFreeBytes() - relocatingShardsSize);\n if (logger.isTraceEnabled()) {\n@@ -429,7 +437,7 @@ public Decision canRemain(ShardRouting shardRouting, RoutingNode node, RoutingAl\n \n if (includeRelocations) {\n Map<String, Long> shardSizes = clusterInfo.getShardSizes();\n- long relocatingShardsSize = sizeOfRelocatingShards(node, allocation, shardSizes);\n+ long relocatingShardsSize = sizeOfRelocatingShards(node, allocation, shardSizes, true);\n DiskUsage usageIncludingRelocations = new DiskUsage(node.nodeId(), node.node().name(),\n usage.getTotalBytes(), usage.getFreeBytes() - relocatingShardsSize);\n if (logger.isTraceEnabled()) {", "filename": "src/main/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDecider.java", "status": "modified" }, { "diff": "@@ -22,21 +22,25 @@\n import com.google.common.base.Predicate;\n import com.google.common.collect.ImmutableMap;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.Version;\n import org.elasticsearch.cluster.ClusterInfo;\n import org.elasticsearch.cluster.ClusterInfoService;\n import org.elasticsearch.cluster.ClusterState;\n import 
org.elasticsearch.cluster.DiskUsage;\n import org.elasticsearch.cluster.metadata.IndexMetaData;\n import org.elasticsearch.cluster.metadata.MetaData;\n+import org.elasticsearch.cluster.node.DiscoveryNode;\n import org.elasticsearch.cluster.node.DiscoveryNodes;\n import org.elasticsearch.cluster.routing.*;\n import org.elasticsearch.cluster.routing.allocation.AllocationService;\n+import org.elasticsearch.cluster.routing.allocation.RoutingAllocation;\n import org.elasticsearch.cluster.routing.allocation.allocator.ShardsAllocators;\n import org.elasticsearch.cluster.routing.allocation.command.AllocationCommand;\n import org.elasticsearch.cluster.routing.allocation.command.AllocationCommands;\n import org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.transport.LocalTransportAddress;\n import org.elasticsearch.index.shard.ShardId;\n import org.elasticsearch.test.ElasticsearchAllocationTestCase;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n@@ -50,6 +54,8 @@\n import static org.elasticsearch.cluster.routing.ShardRoutingState.*;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.nullValue;\n \n public class DiskThresholdDeciderTests extends ElasticsearchAllocationTestCase {\n \n@@ -784,6 +790,116 @@ public void addListener(Listener listener) {\n \n }\n \n+ @Test\n+ public void testCanRemainWithShardRelocatingAway() {\n+ Settings diskSettings = settingsBuilder()\n+ .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_DISK_THRESHOLD_ENABLED, true)\n+ .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_INCLUDE_RELOCATIONS, true)\n+ .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_LOW_DISK_WATERMARK, \"60%\")\n+ .put(DiskThresholdDecider.CLUSTER_ROUTING_ALLOCATION_HIGH_DISK_WATERMARK, \"70%\").build();\n+\n+ // We have an index with 2 primary shards each taking 40 bytes. 
Each node has 100 bytes available\n+ Map<String, DiskUsage> usages = new HashMap<>();\n+ usages.put(\"node1\", new DiskUsage(\"node1\", \"n1\", 100, 20)); // 80% used\n+ usages.put(\"node2\", new DiskUsage(\"node2\", \"n2\", 100, 100)); // 0% used\n+\n+ Map<String, Long> shardSizes = new HashMap<>();\n+ shardSizes.put(\"[test][0][p]\", 40L);\n+ shardSizes.put(\"[test][1][p]\", 40L);\n+ final ClusterInfo clusterInfo = new ClusterInfo(ImmutableMap.copyOf(usages), ImmutableMap.copyOf(shardSizes));\n+\n+ DiskThresholdDecider diskThresholdDecider = new DiskThresholdDecider(diskSettings);\n+ MetaData metaData = MetaData.builder()\n+ .put(IndexMetaData.builder(\"test\").numberOfShards(2).numberOfReplicas(0))\n+ .build();\n+\n+ RoutingTable routingTable = RoutingTable.builder()\n+ .addAsNew(metaData.index(\"test\"))\n+ .build();\n+\n+ DiscoveryNode discoveryNode1 = new DiscoveryNode(\"node1\", new LocalTransportAddress(\"1\"), Version.CURRENT);\n+ DiscoveryNode discoveryNode2 = new DiscoveryNode(\"node2\", new LocalTransportAddress(\"2\"), Version.CURRENT);\n+ DiscoveryNodes discoveryNodes = DiscoveryNodes.builder().put(discoveryNode1).put(discoveryNode2).build();\n+\n+ ClusterState baseClusterState = ClusterState.builder(org.elasticsearch.cluster.ClusterName.DEFAULT)\n+ .metaData(metaData)\n+ .routingTable(routingTable)\n+ .nodes(discoveryNodes)\n+ .build();\n+\n+ // Two shards consuming each 80% of disk space while 70% is allowed, so shard 0 isn't allowed here\n+ MutableShardRouting firstRouting = new MutableShardRouting(\"test\", 0, \"node1\", true, ShardRoutingState.STARTED, 1);\n+ MutableShardRouting secondRouting = new MutableShardRouting(\"test\", 1, \"node1\", true, ShardRoutingState.STARTED, 1);\n+ RoutingNode firstRoutingNode = new RoutingNode(\"node1\", discoveryNode1, Arrays.asList(firstRouting, secondRouting));\n+ RoutingTable.Builder builder = RoutingTable.builder().add(\n+ IndexRoutingTable.builder(\"test\")\n+ .addIndexShard(new IndexShardRoutingTable.Builder(new ShardId(\"test\", 0), false)\n+ .addShard(firstRouting)\n+ .build()\n+ )\n+ .addIndexShard(new IndexShardRoutingTable.Builder(new ShardId(\"test\", 1), false)\n+ .addShard(secondRouting)\n+ .build()\n+ )\n+ );\n+ ClusterState clusterState = ClusterState.builder(baseClusterState).routingTable(builder).build();\n+ RoutingAllocation routingAllocation = new RoutingAllocation(null, new RoutingNodes(clusterState), discoveryNodes, clusterInfo);\n+ Decision decision = diskThresholdDecider.canRemain(firstRouting, firstRoutingNode, routingAllocation);\n+ assertThat(decision.type(), equalTo(Decision.Type.NO));\n+\n+ // Two shards consuming each 80% of disk space while 70% is allowed, but one is relocating, so shard 0 can stay\n+ firstRouting = new MutableShardRouting(\"test\", 0, \"node1\", true, ShardRoutingState.STARTED, 1);\n+ secondRouting = new MutableShardRouting(\"test\", 1, \"node1\", \"node2\", true, ShardRoutingState.RELOCATING, 1);\n+ firstRoutingNode = new RoutingNode(\"node1\", discoveryNode1, Arrays.asList(firstRouting, secondRouting));\n+ builder = RoutingTable.builder().add(\n+ IndexRoutingTable.builder(\"test\")\n+ .addIndexShard(new IndexShardRoutingTable.Builder(new ShardId(\"test\", 0), false)\n+ .addShard(firstRouting)\n+ .build()\n+ )\n+ .addIndexShard(new IndexShardRoutingTable.Builder(new ShardId(\"test\", 1), false)\n+ .addShard(secondRouting)\n+ .build()\n+ )\n+ );\n+ clusterState = ClusterState.builder(baseClusterState).routingTable(builder).build();\n+ routingAllocation = new RoutingAllocation(null, new 
RoutingNodes(clusterState), discoveryNodes, clusterInfo);\n+ decision = diskThresholdDecider.canRemain(firstRouting, firstRoutingNode, routingAllocation);\n+ assertThat(decision.type(), equalTo(Decision.Type.YES));\n+\n+ // Creating AllocationService instance and the services it depends on...\n+ ClusterInfoService cis = new ClusterInfoService() {\n+ @Override\n+ public ClusterInfo getClusterInfo() {\n+ logger.info(\"--> calling fake getClusterInfo\");\n+ return clusterInfo;\n+ }\n+\n+ @Override\n+ public void addListener(Listener listener) {\n+ // noop\n+ }\n+ };\n+ AllocationDeciders deciders = new AllocationDeciders(ImmutableSettings.EMPTY, new HashSet<>(Arrays.asList(\n+ new SameShardAllocationDecider(ImmutableSettings.EMPTY), diskThresholdDecider\n+ )));\n+ AllocationService strategy = new AllocationService(settingsBuilder()\n+ .put(\"cluster.routing.allocation.concurrent_recoveries\", 10)\n+ .put(ClusterRebalanceAllocationDecider.CLUSTER_ROUTING_ALLOCATION_ALLOW_REBALANCE, \"always\")\n+ .put(\"cluster.routing.allocation.cluster_concurrent_rebalance\", -1)\n+ .build(), deciders, new ShardsAllocators(), cis);\n+ // Ensure that the reroute call doesn't alter the routing table, since the first primary is relocating away\n+ // and therefor we will have sufficient disk space on node1.\n+ RoutingAllocation.Result result = strategy.reroute(clusterState);\n+ assertThat(result.changed(), is(false));\n+ assertThat(result.routingTable().index(\"test\").getShards().get(0).primaryShard().state(), equalTo(STARTED));\n+ assertThat(result.routingTable().index(\"test\").getShards().get(0).primaryShard().currentNodeId(), equalTo(\"node1\"));\n+ assertThat(result.routingTable().index(\"test\").getShards().get(0).primaryShard().relocatingNodeId(), nullValue());\n+ assertThat(result.routingTable().index(\"test\").getShards().get(1).primaryShard().state(), equalTo(RELOCATING));\n+ assertThat(result.routingTable().index(\"test\").getShards().get(1).primaryShard().currentNodeId(), equalTo(\"node1\"));\n+ assertThat(result.routingTable().index(\"test\").getShards().get(1).primaryShard().relocatingNodeId(), equalTo(\"node2\"));\n+ }\n+\n public void logShardStates(ClusterState state) {\n RoutingNodes rn = state.routingNodes();\n logger.info(\"--> counts: total: {}, unassigned: {}, initializing: {}, relocating: {}, started: {}\",", "filename": "src/test/java/org/elasticsearch/cluster/routing/allocation/decider/DiskThresholdDeciderTests.java", "status": "modified" } ] }
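The relocation accounting that this change introduces is easiest to see in a toy version. The sketch below is a self-contained approximation, not the real `DiskThresholdDecider`/`ShardRouting` API: a hypothetical `Relocation` record stands in for a relocating shard routing, and the boolean mirrors `subtractShardsMovingAway`: shards relocating to a node count against its free space, while (for the "can this shard remain?" check) shards relocating away are subtracted.

```java
import java.util.List;
import java.util.Map;

// Simplified, standalone sketch of the accounting described above.
// Relocation and its field names are illustrative, not the real API.
public class RelocationSizing {

    record Relocation(String shardId, String fromNodeId, String toNodeId) {}

    static long sizeOfRelocatingShards(String nodeId, List<Relocation> relocations,
                                       Map<String, Long> shardSizes,
                                       boolean subtractShardsMovingAway) {
        long total = 0;
        for (Relocation r : relocations) {
            long size = shardSizes.getOrDefault(r.shardId(), 0L);
            if (nodeId.equals(r.toNodeId())) {
                total += size;              // incoming shard will consume disk here
            } else if (subtractShardsMovingAway && nodeId.equals(r.fromNodeId())) {
                total -= size;              // outgoing shard will release disk here
            }
        }
        return total;
    }

    public static void main(String[] args) {
        List<Relocation> relocations = List.of(new Relocation("[test][1][p]", "node1", "node2"));
        Map<String, Long> sizes = Map.of("[test][0][p]", 40L, "[test][1][p]", 40L);
        // canAllocate-style check: outgoing shards are not subtracted
        System.out.println(sizeOfRelocatingShards("node1", relocations, sizes, false)); // 0
        // canRemain-style check: the shard moving away frees 40 bytes
        System.out.println(sizeOfRelocatingShards("node1", relocations, sizes, true));  // -40
    }
}
```

A negative total for the canRemain-style call means the node is about to gain free space, which is why, in the test above, shard 0 is allowed to stay once shard 1 starts relocating to node2.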
{ "body": "Hi,\n\nSince it is the time of year where we adjust our clocks from daylight savings time to normal time again a bug in the date histogram struck us.\n\nWhen we run a date_histogram in a timezone other than UTC you will see some buckets being wrongfully combined. In the first example you see a `date_histogram` aggregate in UTC followed by the same aggregate in CET;\n\nUTC query:\n\n```\n{\n \"size\": 0,\n \"query\": {\n \"range\": {\n \"published\": {\n \"gte\": 1414274400000,\n \"lt\": 1414292400000\n }\n }\n },\n \"aggs\": {\n \"vot\": {\n \"date_histogram\": {\n \"field\": \"published\",\n \"interval\": \"hour\",\n \"min_doc_count\": 0,\n \"time_zone\": \"UTC\"\n }\n }\n }\n}\n```\n\nUTC result:\n\n```\n{\n \"took\": 12,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 1,\n \"successful\": 1,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 1130,\n \"max_score\": 0,\n \"hits\": []\n },\n \"aggregations\": {\n \"vot\": {\n \"buckets\": [\n {\n \"key_as_string\": \"2014-10-25T22:00:00.000Z\",\n \"key\": 1414274400000,\n \"doc_count\": 260\n },\n {\n \"key_as_string\": \"2014-10-25T23:00:00.000Z\",\n \"key\": 1414278000000,\n \"doc_count\": 216\n },\n {\n \"key_as_string\": \"2014-10-26T00:00:00.000Z\",\n \"key\": 1414281600000,\n \"doc_count\": 222\n },\n {\n \"key_as_string\": \"2014-10-26T01:00:00.000Z\",\n \"key\": 1414285200000,\n \"doc_count\": 200\n },\n {\n \"key_as_string\": \"2014-10-26T02:00:00.000Z\",\n \"key\": 1414288800000,\n \"doc_count\": 232\n }\n ]\n }\n }\n}\n```\n\nCET query:\n\n```\n{\n \"size\": 0,\n \"query\": {\n \"range\": {\n \"published\": {\n \"gte\": 1414274400000,\n \"lt\": 1414292400000\n }\n }\n },\n \"aggs\": {\n \"vot\": {\n \"date_histogram\": {\n \"field\": \"published\",\n \"interval\": \"hour\",\n \"min_doc_count\": 0,\n \"time_zone\": \"CET\"\n }\n }\n }\n}\n```\n\nUTC result:\n\n```\n{\n \"took\": 9,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 1,\n \"successful\": 1,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 1130,\n \"max_score\": 0,\n \"hits\": []\n },\n \"aggregations\": {\n \"vot\": {\n \"buckets\": [\n {\n \"key_as_string\": \"2014-10-25T22:00:00.000Z\",\n \"key\": 1414274400000,\n \"doc_count\": 260\n },\n {\n \"key_as_string\": \"2014-10-25T23:00:00.000Z\",\n \"key\": 1414278000000,\n \"doc_count\": 0\n },\n {\n \"key_as_string\": \"2014-10-26T00:00:00.000Z\",\n \"key\": 1414281600000,\n \"doc_count\": 216\n },\n {\n \"key_as_string\": \"2014-10-26T01:00:00.000Z\",\n \"key\": 1414285200000,\n \"doc_count\": 422\n },\n {\n \"key_as_string\": \"2014-10-26T02:00:00.000Z\",\n \"key\": 1414288800000,\n \"doc_count\": 232\n }\n ]\n }\n }\n}\n```\n\nPoints of interest in these queries are the result buckets for key: `1414278000000`. When ran in UTC it give a `doc_count` of `216`. When ran in CET it has a `doc_count` of `0`.\n\nFurther more, you will find a `doc_count` of `216` in the CET bucket of `1414281600000` (an hour to late). Last to point out is the next CET bucket, it contains the value of `422` which is the sum of `222` + `200`. As you can see these values can be found in the corresponding bucket and the previous bucket in UTC.\n\nAttached you will find a patch file adding some basic tests to the `org.elasticsearch.common.rounding` package. 
Here we test key rounding for the `CET` and `America/Chicago` timezones on the days of the DST switch.\n\nI tried diggin into the issue and found that it has to do with the recalculation of `preTz.getOffset` on the following lines [TimeZoneRounding.java:156](https://github.com/elasticsearch/elasticsearch/blob/5797682bd0b87e8efeffb046258287d480435395/src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java#L156) and [TimeZoneRounding.java:163](https://github.com/elasticsearch/elasticsearch/blob/5797682bd0b87e8efeffb046258287d480435395/src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java#L163).\n\nRunning this step by step on my first CET test case results the first time (line 156) in 7200000 (2 hours) and the second time in 3600000 (1 hour). This difference is causeing `2014-10-26T01:01:01 GMT+0200` to resolve to `2014-10-26T02:00:00 GMT+0200`. Explaining why the `1414278000000` (2014-10-26T01:01:01 GMT+0200) bucket to be empty, the next bucket containing the contents of this bucket, and the bucket after that a sum of two buckets.\n\n0001-Add-tests-for-timezone-problems.patch:\n\n```\nFrom 4e022035b25a141027be6aaae78c8ad2454e673f Mon Sep 17 00:00:00 2001\nFrom: Nils Dijk <me@thanod.nl>\nDate: Tue, 4 Nov 2014 07:56:05 -0600\nSubject: [PATCH 1/1] Add tests for timezone problems.\n\n---\n .../common/rounding/TimeZoneRoundingTests.java | 39 ++++++++++++++++++++++\n 1 file changed, 39 insertions(+)\n\ndiff --git a/src/test/java/org/elasticsearch/common/rounding/TimeZoneRoundingTests.java b/src/test/java/org/elasticsearch/common/rounding/TimeZoneRoundingTests.java\nindex a3d70c7..e79ad1e 100644\n--- a/src/test/java/org/elasticsearch/common/rounding/TimeZoneRoundingTests.java\n+++ b/src/test/java/org/elasticsearch/common/rounding/TimeZoneRoundingTests.java\n@@ -83,6 +83,45 @@ public class TimeZoneRoundingTests extends ElasticsearchTestCase {\n assertThat(tzRounding.round(utc(\"2009-02-03T01:01:01\")), equalTo(time(\"2009-02-03T01:00:00\", DateTimeZone.forOffsetHours(+2))));\n assertThat(tzRounding.nextRoundingValue(time(\"2009-02-03T01:00:00\", DateTimeZone.forOffsetHours(+2))), equalTo(time(\"2009-02-03T02:00:00\", DateTimeZone.forOffsetHours(+2))));\n }\n+ \n+ @Test\n+ public void testTimeTimeZoneRoundingDST() {\n+ Rounding tzRounding;\n+ // testing savings to non savings switch \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2014-10-26T01:01:01\", DateTimeZone.forID(\"CET\"))), equalTo(time(\"2014-10-26T01:00:00\", DateTimeZone.forID(\"CET\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"CET\")).build();\n+ assertThat(tzRounding.round(time(\"2014-10-26T01:01:01\", DateTimeZone.forID(\"CET\"))), equalTo(time(\"2014-10-26T01:00:00\", DateTimeZone.forID(\"CET\"))));\n+ \n+ // testing non savings to savings switch\n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2014-03-30T01:01:01\", DateTimeZone.forID(\"CET\"))), equalTo(time(\"2014-03-30T01:00:00\", DateTimeZone.forID(\"CET\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"CET\")).build();\n+ assertThat(tzRounding.round(time(\"2014-03-30T01:01:01\", DateTimeZone.forID(\"CET\"))), equalTo(time(\"2014-03-30T01:00:00\", DateTimeZone.forID(\"CET\"))));\n+ \n+ // testing non savings to savings switch 
(America/Chicago)\n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2014-03-09T03:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2014-03-09T03:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"America/Chicago\")).build();\n+ assertThat(tzRounding.round(time(\"2014-03-09T03:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2014-03-09T03:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ // testing savings to non savings switch 2013 (America/Chicago)\n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2013-11-03T06:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2013-11-03T06:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"America/Chicago\")).build();\n+ assertThat(tzRounding.round(time(\"2013-11-03T06:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2013-11-03T06:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ // testing savings to non savings switch 2014 (America/Chicago)\n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2014-11-02T06:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2014-11-02T06:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"America/Chicago\")).build();\n+ assertThat(tzRounding.round(time(\"2014-11-02T06:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2014-11-02T06:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ }\n\n private long utc(String time) {\n return time(time, DateTimeZone.UTC);\n-- \n1.9.3 (Apple Git-50)\n```\n", "comments": [ { "body": "I have been fiddling around with this issue and came up with this fix: https://github.com/thanodnl/elasticsearch/commit/1cedf3ed4ff9fb4b8c7f1a1e00efefe8d8330097\n\nThe existing tests still pass (I only ran the timezone rounding tests though) and the tests I submitted in the patch above also pass.\n\nIf you are happy with the result feel free to take this solution. If you need me to open a pull request let me know!\n", "created_at": "2014-11-21T16:20:41Z" }, { "body": "@thanodnl Your fix and tests look good to me.\n\n> If you need me to open a pull request let me know!\n\nPlease do! (and ping me when you open it so that I don't miss it)\n", "created_at": "2014-11-24T19:12:45Z" }, { "body": "@jpountz I just opened the pull request, also mentioned your handle there but I do not know if you get alerted for that. So: PING! :smile: \n", "created_at": "2014-11-25T15:49:17Z" } ], "number": 8339, "title": "Aggregations: date_histogram aggregation DST bug" }
{ "body": "The problem was in the difference between the two responses of the `preTz.getOffset` call. To solve this I only call it once.\n\nThe behaviour of `roundKey` changed that it is now returning the actual rounded timestamp rounded timestamp offsetted to UTC. The exact opposite is true for `valueForKey` which used to accept the UTC offsetted timestamp but now needs to work on the normal timestamp.\n\nThe tests for rounding are all passing, including the tests I added where the issue was shown.\n\nCloses #8339\n\n/cc @jpountz \n", "number": 8655, "review_comments": [], "title": "Fix date_histogram issues during a timezone DST switch" }
{ "commits": [ { "message": "Add tests for timezone problems and Fix for #8339." } ], "files": [ { "diff": "@@ -153,14 +153,13 @@ public byte id() {\n \n @Override\n public long roundKey(long utcMillis) {\n- long time = utcMillis + preTz.getOffset(utcMillis);\n- return field.roundFloor(time);\n+ long offset = preTz.getOffset(utcMillis);\n+ long time = utcMillis + offset;\n+ return field.roundFloor(time) - offset;\n }\n \n @Override\n public long valueForKey(long time) {\n- // now, time is still in local, move it to UTC (or the adjustLargeInterval flag is set)\n- time = time - preTz.getOffset(time);\n // now apply post Tz\n time = time + postTz.getOffset(time);\n return time;", "filename": "src/main/java/org/elasticsearch/common/rounding/TimeZoneRounding.java", "status": "modified" }, { "diff": "@@ -83,6 +83,45 @@ public void testTimeTimeZoneRounding() {\n assertThat(tzRounding.round(utc(\"2009-02-03T01:01:01\")), equalTo(time(\"2009-02-03T01:00:00\", DateTimeZone.forOffsetHours(+2))));\n assertThat(tzRounding.nextRoundingValue(time(\"2009-02-03T01:00:00\", DateTimeZone.forOffsetHours(+2))), equalTo(time(\"2009-02-03T02:00:00\", DateTimeZone.forOffsetHours(+2))));\n }\n+ \n+ @Test\n+ public void testTimeTimeZoneRoundingDST() {\n+ Rounding tzRounding;\n+ // testing savings to non savings switch \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2014-10-26T01:01:01\", DateTimeZone.forID(\"CET\"))), equalTo(time(\"2014-10-26T01:00:00\", DateTimeZone.forID(\"CET\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"CET\")).build();\n+ assertThat(tzRounding.round(time(\"2014-10-26T01:01:01\", DateTimeZone.forID(\"CET\"))), equalTo(time(\"2014-10-26T01:00:00\", DateTimeZone.forID(\"CET\"))));\n+ \n+ // testing non savings to savings switch\n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2014-03-30T01:01:01\", DateTimeZone.forID(\"CET\"))), equalTo(time(\"2014-03-30T01:00:00\", DateTimeZone.forID(\"CET\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"CET\")).build();\n+ assertThat(tzRounding.round(time(\"2014-03-30T01:01:01\", DateTimeZone.forID(\"CET\"))), equalTo(time(\"2014-03-30T01:00:00\", DateTimeZone.forID(\"CET\"))));\n+ \n+ // testing non savings to savings switch (America/Chicago)\n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2014-03-09T03:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2014-03-09T03:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"America/Chicago\")).build();\n+ assertThat(tzRounding.round(time(\"2014-03-09T03:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2014-03-09T03:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ // testing savings to non savings switch 2013 (America/Chicago)\n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2013-11-03T06:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2013-11-03T06:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ tzRounding = 
TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"America/Chicago\")).build();\n+ assertThat(tzRounding.round(time(\"2013-11-03T06:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2013-11-03T06:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ // testing savings to non savings switch 2014 (America/Chicago)\n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"UTC\")).build();\n+ assertThat(tzRounding.round(time(\"2014-11-02T06:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2014-11-02T06:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ \n+ tzRounding = TimeZoneRounding.builder(DateTimeUnit.HOUR_OF_DAY).preZone(DateTimeZone.forID(\"America/Chicago\")).build();\n+ assertThat(tzRounding.round(time(\"2014-11-02T06:01:01\", DateTimeZone.forID(\"America/Chicago\"))), equalTo(time(\"2014-11-02T06:00:00\", DateTimeZone.forID(\"America/Chicago\"))));\n+ }\n \n private long utc(String time) {\n return time(time, DateTimeZone.UTC);", "filename": "src/test/java/org/elasticsearch/common/rounding/TimeZoneRoundingTests.java", "status": "modified" } ] }
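As a worked illustration of the off-by-one-hour behaviour analysed in the issue and removed by this diff, here is a small standalone program. It deliberately uses `java.time` instead of the Joda-Time classes in the real code, so it only mirrors the logic: the buggy variant re-reads the zone offset at the rounded local instant, which can sit on the other side of the DST switch, while the fixed variant reuses the offset computed once.

```java
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Walk-through of the CET/CEST fall-back on 2014-10-26 (clocks change at 01:00 UTC).
public class DstRoundingDemo {

    static final long HOUR = Duration.ofHours(1).toMillis();

    // Buggy variant: the offset is re-read at the rounded *local* instant,
    // giving 3600000 instead of the original 7200000 across the transition.
    static long roundBuggy(long utcMillis, ZoneId tz) {
        long offsetBefore = tz.getRules().getOffset(Instant.ofEpochMilli(utcMillis)).getTotalSeconds() * 1000L;
        long roundedLocal = Math.floorDiv(utcMillis + offsetBefore, HOUR) * HOUR;
        long offsetAfter = tz.getRules().getOffset(Instant.ofEpochMilli(roundedLocal)).getTotalSeconds() * 1000L;
        return roundedLocal - offsetAfter;
    }

    // Fixed variant: compute the offset once and reuse it, as in the patch above.
    static long roundFixed(long utcMillis, ZoneId tz) {
        long offset = tz.getRules().getOffset(Instant.ofEpochMilli(utcMillis)).getTotalSeconds() * 1000L;
        return Math.floorDiv(utcMillis + offset, HOUR) * HOUR - offset;
    }

    public static void main(String[] args) {
        ZoneId cet = ZoneId.of("CET");
        // 2014-10-26T01:01:01+02:00 (CEST) == 2014-10-25T23:01:01Z
        long ts = ZonedDateTime.parse("2014-10-26T01:01:01+02:00[CET]").toInstant().toEpochMilli();
        // lands in the 00:00 UTC bucket, i.e. 02:00 local: the wrong bucket
        System.out.println(Instant.ofEpochMilli(roundBuggy(ts, cet)));
        // lands in the 23:00 UTC bucket of the previous day, i.e. 01:00 local: correct
        System.out.println(Instant.ofEpochMilli(roundFixed(ts, cet)));
    }
}
```

With this switch, the buggy variant moves documents from the 01:00 local hour into the 02:00 bucket, which is exactly the empty-bucket and shifted-counts pattern shown in the report.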
{ "body": "PR for #5828\n", "comments": [ { "body": "left one comment - look close :) thanks!\n", "created_at": "2014-04-23T15:47:12Z" }, { "body": "Updated the PR to move the delete by query api usage checking into one place.\n", "created_at": "2014-04-24T07:58:05Z" }, { "body": "It's not super urgent but maybe we should move this method into a `QueryParserUtils` class?\n", "created_at": "2014-04-24T08:00:46Z" }, { "body": "Updated the PR and moved the method to `QueryParserUtils` util class.\n", "created_at": "2014-04-28T02:21:37Z" }, { "body": "lets a small comment, LGTM otherwise.\n", "created_at": "2014-04-28T09:43:38Z" }, { "body": "Pushed via: https://github.com/elasticsearch/elasticsearch/commit/17a5575757317962dab4c295bbfacbdb136cc61e\n", "created_at": "2014-04-28T13:14:30Z" } ], "number": 5916, "title": "Disabled parent/child queries in the delete by query api." }
{ "body": "PR for #8628\n\nThe bug was introduced by #5916\n", "number": 8649, "review_comments": [], "title": "Fixed p/c filters not being able to be used in alias filters." }
{ "commits": [ { "message": "Parent/child: Fixed parent/child not being able to be used in alias filters.\n\nCloses #8628" } ], "files": [ { "diff": "@@ -35,7 +35,9 @@ private QueryParserUtils() {\n public static void ensureNotDeleteByQuery(String name, QueryParseContext parseContext) {\n SearchContext context = SearchContext.current();\n if (context == null) {\n- throw new QueryParsingException(parseContext.index(), \"[\" + name + \"] query and filter requires a search context\");\n+ // We can't do the api check, because there is no search context.\n+ // Because the delete by query shard transport action sets the search context this isn't an issue.\n+ return;\n }\n \n if (TransportShardDeleteByQueryAction.DELETE_BY_QUERY_API.equals(context.source())) {", "filename": "src/main/java/org/elasticsearch/index/query/QueryParserUtils.java", "status": "modified" }, { "diff": "@@ -57,6 +57,7 @@\n import static org.elasticsearch.client.Requests.indexRequest;\n import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n import static org.elasticsearch.index.query.FilterBuilders.*;\n+import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n import static org.elasticsearch.index.query.QueryBuilders.rangeQuery;\n import static org.elasticsearch.test.hamcrest.CollectionAssertions.hasKey;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n@@ -883,12 +884,12 @@ public void testCreateIndexWithAliases() throws Exception {\n @Test\n public void testCreateIndexWithAliasesInSource() throws Exception {\n assertAcked(prepareCreate(\"test\").setSource(\"{\\n\" +\n- \" \\\"aliases\\\" : {\\n\" +\n- \" \\\"alias1\\\" : {},\\n\" +\n- \" \\\"alias2\\\" : {\\\"filter\\\" : {\\\"match_all\\\": {}}},\\n\" +\n- \" \\\"alias3\\\" : { \\\"index_routing\\\" : \\\"index\\\", \\\"search_routing\\\" : \\\"search\\\"}\\n\" +\n- \" }\\n\" +\n- \"}\"));\n+ \" \\\"aliases\\\" : {\\n\" +\n+ \" \\\"alias1\\\" : {},\\n\" +\n+ \" \\\"alias2\\\" : {\\\"filter\\\" : {\\\"match_all\\\": {}}},\\n\" +\n+ \" \\\"alias3\\\" : { \\\"index_routing\\\" : \\\"index\\\", \\\"search_routing\\\" : \\\"search\\\"}\\n\" +\n+ \" }\\n\" +\n+ \"}\"));\n \n checkAliases();\n }\n@@ -975,7 +976,17 @@ public void testAliasFilterWithNowInRangeFilterAndQuery() throws Exception {\n }\n }\n }\n- \n+\n+ @Test\n+ public void testAliasesFilterWithHasChildQuery() throws Exception {\n+ assertAcked(prepareCreate(\"my-index\")\n+ .addMapping(\"parent\")\n+ .addMapping(\"child\", \"_parent\", \"type=parent\")\n+ );\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"my-index\", \"filter1\", hasChildFilter(\"child\", matchAllQuery())));\n+ assertAcked(admin().indices().prepareAliases().addAlias(\"my-index\", \"filter2\", hasParentFilter(\"child\", matchAllQuery())));\n+ }\n+\n private void checkAliases() {\n GetAliasesResponse getAliasesResponse = admin().indices().prepareGetAliases(\"alias1\").get();\n assertThat(getAliasesResponse.getAliases().get(\"test\").size(), equalTo(1));", "filename": "src/test/java/org/elasticsearch/aliases/IndexAliasesTests.java", "status": "modified" } ] }
{ "body": "In aggregations, the size property must be unquoted while in other places it can be quoted (tested in 1.0.0, 1.1.0 and 1.1.1)\n\nThe following work:\n\n``` javascript\n{\n \"size\": \"1\",\n \"aggs\": {\n \"aggtest\": {\n \"terms\": {\n \"field\": \"somefield\",\n \"size\": 1\n }\n }\n }\n}\n```\n\nBut this doesn't:\n\n``` javascript\n{\n \"size\": \"1\",\n \"aggs\": {\n \"aggtest\": {\n \"terms\": {\n \"field\": \"somefield\",\n \"size\": \"1\"\n }\n }\n }\n}\n```\n\nResulting in a 400 error ending in \"Parse Failure [Unknown key for a VALUE_STRING in [aggtest]: [size].]]; }]\"\n\nThis inconsistency is not mentioned in documentation and the error message is not very helpful, so I could only spot the problem after @HonzaKral pasted a working example on irc.\n\nI believe at least a fix should be done so all fields behave the same, or with mandatory or with optional quotes, but no two fields with same name but different behaviour.\n", "comments": [], "number": 6061, "title": "Aggregations: `size` property parsing inconsistent" }
{ "body": "Closes #6061\n", "number": 8645, "review_comments": [], "title": "Make `size` property parsing inconsistent" }
{ "commits": [ { "message": "Fixed #6061 size parsing consistent for strings" } ], "files": [ { "diff": "@@ -82,6 +82,8 @@ public void parse(String aggregationName, XContentParser parser, SearchContext c\n executionHint = parser.text();\n } else if(Aggregator.COLLECT_MODE.match(currentFieldName)){\n collectMode = SubAggCollectionMode.parse(parser.text());\n+ } else if (REQUIRED_SIZE_FIELD_NAME.match(currentFieldName)) {\n+ bucketCountThresholds.setRequiredSize(parser.intValue());\n } else {\n parseSpecial(aggregationName, parser, context, token, currentFieldName);\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/terms/AbstractTermsParametersParser.java", "status": "modified" } ] }
{ "body": "In `InternalIndexShard#index` method calls `failedIndex` callback (to reduce current counters) when a `RuntimeException` happens, it should do it for any `Throwable`. Also, we are missing similar logic in create, and we should properly handle delete as well.\n\nMight make sense to have a single post callback, and not post/failed, with an Throwable passed to indicate if it was a failure or not.\n", "comments": [], "number": 5945, "title": "InternalIndexShard callback handling of failure is missing/wrong" }
{ "body": "Closes #5945\n", "number": 8644, "review_comments": [], "title": "Fixes InternalIndexShard callback handling of failure" }
{ "commits": [ { "message": "Fixes #5945 updated InternalIndexShard callbacks" } ], "files": [ { "diff": "@@ -132,6 +132,9 @@ public void postCreate(Engine.Create create) {\n }\n }\n \n+ public void postCreate(Engine.Create create, Throwable ex) {\n+ }\n+\n public Engine.Index preIndex(Engine.Index index) {\n totalStats.indexCurrent.inc();\n typeStats(index.type()).indexCurrent.inc();\n@@ -168,7 +171,7 @@ public void postIndex(Engine.Index index) {\n }\n }\n \n- public void failedIndex(Engine.Index index) {\n+ public void postIndex(Engine.Index index, Throwable ex) {\n totalStats.indexCurrent.dec();\n typeStats(index.type()).indexCurrent.dec();\n }\n@@ -208,7 +211,7 @@ public void postDelete(Engine.Delete delete) {\n }\n }\n \n- public void failedDelete(Engine.Delete delete) {\n+ public void postDelete(Engine.Delete delete, Throwable ex) {\n totalStats.deleteCurrent.dec();\n typeStats(delete.type()).deleteCurrent.dec();\n }", "filename": "src/main/java/org/elasticsearch/index/indexing/ShardIndexingService.java", "status": "modified" }, { "diff": "@@ -420,11 +420,16 @@ public Engine.Create prepareCreate(SourceToParse source, long version, VersionTy\n public ParsedDocument create(Engine.Create create) throws ElasticsearchException {\n writeAllowed(create.origin());\n create = indexingService.preCreate(create);\n- if (logger.isTraceEnabled()) {\n- logger.trace(\"index [{}][{}]{}\", create.type(), create.id(), create.docs());\n+ try {\n+ if (logger.isTraceEnabled()) {\n+ logger.trace(\"index [{}][{}]{}\", create.type(), create.id(), create.docs());\n+ }\n+ engine.create(create);\n+ create.endTime(System.nanoTime());\n+ } catch (Throwable ex) {\n+ indexingService.postCreate(create, ex);\n+ throw ex;\n }\n- engine.create(create);\n- create.endTime(System.nanoTime());\n indexingService.postCreate(create);\n return create.parsedDoc();\n }\n@@ -447,8 +452,8 @@ public ParsedDocument index(Engine.Index index) throws ElasticsearchException {\n }\n engine.index(index);\n index.endTime(System.nanoTime());\n- } catch (RuntimeException ex) {\n- indexingService.failedIndex(index);\n+ } catch (Throwable ex) {\n+ indexingService.postIndex(index, ex);\n throw ex;\n }\n indexingService.postIndex(index);\n@@ -472,8 +477,8 @@ public void delete(Engine.Delete delete) throws ElasticsearchException {\n }\n engine.delete(delete);\n delete.endTime(System.nanoTime());\n- } catch (RuntimeException ex) {\n- indexingService.failedDelete(delete);\n+ } catch (Throwable ex) {\n+ indexingService.postDelete(delete, ex);\n throw ex;\n }\n indexingService.postDelete(delete);", "filename": "src/main/java/org/elasticsearch/index/shard/service/InternalIndexShard.java", "status": "modified" } ] }
{ "body": "Currently we only check if the first byte of the body is a `BYTE_OBJECT_INDEFINITE` to determine whether the content is CBOR or not. However, what we should actually do is to check whether the \"major type\" is an object.\n\nSee:\n- https://github.com/FasterXML/jackson-dataformat-cbor/blob/master/src/main/java/com/fasterxml/jackson/dataformat/cbor/CBORParser.java#L614\n- https://github.com/FasterXML/jackson-dataformat-cbor/blob/master/src/main/java/com/fasterxml/jackson/dataformat/cbor/CBORParser.java#L682\n\nAlso, CBOR can be prefixed with a self-identifying tag, `0xd9d9f7`, which we should check for as well. Currently Jackson doesn't recognise this tag, but it looks like that will change in the future: https://github.com/FasterXML/jackson-dataformat-cbor/issues/6\n", "comments": [ { "body": "Jackson 2.4.3 now contains the above fixes. We should upgrade and add the changes mentioned above.\n", "created_at": "2014-11-14T11:53:39Z" }, { "body": "if we get a fix for this I think it should go into `1.3.6`\n", "created_at": "2014-11-21T09:39:05Z" }, { "body": "do we need to do anything else than upgrading jackson? @pickypg do you have a ETA for this?\n", "created_at": "2014-11-23T12:55:00Z" }, { "body": "I should have this up for review on Monday.\n\nWe need to change `XContentFactory.xContentType(...)` to support the new header. By default, the new `CBORGenerator.Feature.WRITE_TYPE_HEADER` feature is `false`, so just upgrading will do nothing (nothing breaks, but nothing improves).\n", "created_at": "2014-11-24T05:48:16Z" }, { "body": "Merged\n", "created_at": "2014-11-25T19:04:53Z" }, { "body": "This was reverted because the JSON tokenizer was acting up in some of the randomized tests. I am looking at the root cause (my change or just incoming changes from 2.4.3).\n", "created_at": "2014-11-25T21:57:11Z" }, { "body": "@clintongormley it'd be great to have this feature. What are the chances this will get into an upcoming release of Elasticsearch?", "created_at": "2017-05-30T14:15:35Z" }, { "body": "@johnrfrank this was merged into 2.0.0. We've since deprecated content detection in favor of providing a content-type header.", "created_at": "2017-05-30T14:42:14Z" } ], "number": 7640, "title": "CBOR: Improve recognition of CBOR data format" }
{ "body": "- Update pom to 2.4.3 from 2.4.2\n- Enable the CBOR data header (aka tag) from the CBOR Generator to provide binary identification like the Smile format\n- Check for the CBOR header and ensure that the data sent in represents a \"major type\" that is an object\n- Cleans up `JsonVsCborTests` unused imports\n\nCloses #7640\n", "number": 8637, "review_comments": [ { "body": "for other reviewers wondering where this change comes from, the first char is already checked earlier in this method\n", "created_at": "2014-11-24T18:00:44Z" }, { "body": "I think you should removed `third == -1` from this condition otherwise `{}` would not be detected as a json document?\n", "created_at": "2014-11-24T18:05:19Z" }, { "body": "Good point. I'll drop that short circuit check and just let it fail the other checks (same for `fourth == -1`).\n", "created_at": "2014-11-24T18:13:57Z" }, { "body": "Fixed.\n", "created_at": "2014-11-25T17:03:12Z" }, { "body": "Maybe you should also check that third and fourth are different from -1, otherwise this indicator of a missing byte (-1) is going to be converted to a legal byte (255) via the cast?\n", "created_at": "2014-11-25T17:31:22Z" }, { "body": "Agreed. Adjusted so that it also includes the loop as well.\n", "created_at": "2014-11-25T18:12:05Z" } ], "title": "CBOR: Improve recognition of CBOR data format and update to Jackson 2.4.3" }
{ "commits": [ { "message": "Update to Jackson 2.4.3\n\n- Update pom to 2.4.3 from 2.4.2\n- Enable the CBOR data header (aka tag) from the CBOR Generator to provide binary identification like the Smile format\n- Check for the CBOR header and ensure that the data sent in represents a \"major type\" that is an object\n- Cleans up `JsonVsCborTests` unused imports" } ], "files": [ { "diff": "@@ -31,6 +31,7 @@\n </parent>\n \n <properties>\n+ <jackson.version>2.4.3</jackson.version>\n <lucene.version>5.0.0</lucene.version>\n <lucene.maven.version>5.0.0-snapshot-1641343</lucene.maven.version>\n <tests.jvms>auto</tests.jvms>\n@@ -226,28 +227,28 @@\n <dependency>\n <groupId>com.fasterxml.jackson.core</groupId>\n <artifactId>jackson-core</artifactId>\n- <version>2.4.2</version>\n+ <version>${jackson.version}</version>\n <scope>compile</scope>\n </dependency>\n \n <dependency>\n <groupId>com.fasterxml.jackson.dataformat</groupId>\n <artifactId>jackson-dataformat-smile</artifactId>\n- <version>2.4.2</version>\n+ <version>${jackson.version}</version>\n <scope>compile</scope>\n </dependency>\n \n <dependency>\n <groupId>com.fasterxml.jackson.dataformat</groupId>\n <artifactId>jackson-dataformat-yaml</artifactId>\n- <version>2.4.2</version>\n+ <version>${jackson.version}</version>\n <scope>compile</scope>\n </dependency>\n \n <dependency>\n <groupId>com.fasterxml.jackson.dataformat</groupId>\n <artifactId>jackson-dataformat-cbor</artifactId>\n- <version>2.4.2</version>\n+ <version>${jackson.version}</version>\n <scope>compile</scope>\n </dependency>\n ", "filename": "pom.xml", "status": "modified" }, { "diff": "@@ -157,9 +157,9 @@ public static XContentType xContentType(CharSequence content) {\n return XContentType.YAML;\n }\n \n- // CBOR is not supported\n+ // CBOR is not supported because it is a binary-only format\n \n- for (int i = 0; i < length; i++) {\n+ for (int i = 1; i < length; i++) {\n char c = content.charAt(i);\n if (c == '{') {\n return XContentType.JSON;\n@@ -208,41 +208,48 @@ public static XContentType xContentType(byte[] data) {\n * Guesses the content type based on the provided input stream.\n */\n public static XContentType xContentType(InputStream si) throws IOException {\n+ // Need minimum of 3 bytes for everything (except really JSON)\n int first = si.read();\n- if (first == -1) {\n- return null;\n- }\n int second = si.read();\n- if (second == -1) {\n+ int third = si.read();\n+\n+ // Cannot short circuit based on third (or later fourth) because \"{}\" is valid JSON\n+ if (first == -1 || second == -1) {\n return null;\n }\n- if (first == SmileConstants.HEADER_BYTE_1 && second == SmileConstants.HEADER_BYTE_2) {\n- int third = si.read();\n- if (third == SmileConstants.HEADER_BYTE_3) {\n- return XContentType.SMILE;\n- }\n- }\n- if (first == '{' || second == '{') {\n- return XContentType.JSON;\n+\n+ if (first == SmileConstants.HEADER_BYTE_1 && second == SmileConstants.HEADER_BYTE_2 &&\n+ third == SmileConstants.HEADER_BYTE_3) {\n+ return XContentType.SMILE;\n }\n- if (first == '-' && second == '-') {\n- int third = si.read();\n- if (third == '-') {\n- return XContentType.YAML;\n- }\n+ if (first == '-' && second == '-' && third == '-') {\n+ return XContentType.YAML;\n }\n- if (first == (CBORConstants.BYTE_OBJECT_INDEFINITE & 0xff)){\n- return XContentType.CBOR;\n+\n+ // Need 4 bytes for CBOR\n+ int fourth = si.read();\n+\n+ if (first == '{' || second == '{' || third == '{' || fourth == '{') {\n+ return XContentType.JSON;\n }\n- for (int i = 2; i < GUESS_HEADER_LENGTH; i++) {\n- int val = 
si.read();\n- if (val == -1) {\n- return null;\n+\n+ // ensure that we don't only have two or three bytes (definitely not CBOR and everything else was checked)\n+ if (third != -1 && fourth != -1) {\n+ if (isCBORObjectHeader((byte)first, (byte)second, (byte)third, (byte)fourth)) {\n+ return XContentType.CBOR;\n }\n- if (val == '{') {\n- return XContentType.JSON;\n+\n+ for (int i = 4; i < GUESS_HEADER_LENGTH; i++) {\n+ int val = si.read();\n+ if (val == -1) {\n+ return null;\n+ }\n+ if (val == '{') {\n+ return XContentType.JSON;\n+ }\n }\n }\n+\n return null;\n }\n \n@@ -273,20 +280,53 @@ public static XContentType xContentType(BytesReference bytes) {\n if (first == '{') {\n return XContentType.JSON;\n }\n- if (length > 2 && first == SmileConstants.HEADER_BYTE_1 && bytes.get(1) == SmileConstants.HEADER_BYTE_2 && bytes.get(2) == SmileConstants.HEADER_BYTE_3) {\n- return XContentType.SMILE;\n- }\n- if (length > 2 && first == '-' && bytes.get(1) == '-' && bytes.get(2) == '-') {\n- return XContentType.YAML;\n- }\n- if (first == CBORConstants.BYTE_OBJECT_INDEFINITE){\n- return XContentType.CBOR;\n- }\n- for (int i = 0; i < length; i++) {\n- if (bytes.get(i) == '{') {\n- return XContentType.JSON;\n+ if (length > 2) {\n+ byte second = bytes.get(1);\n+ byte third = bytes.get(2);\n+\n+ if (first == SmileConstants.HEADER_BYTE_1 && second == SmileConstants.HEADER_BYTE_2 && third == SmileConstants.HEADER_BYTE_3) {\n+ return XContentType.SMILE;\n+ }\n+ if (first == '-' && second == '-' && third == '-') {\n+ return XContentType.YAML;\n+ }\n+ if (length > 3 && isCBORObjectHeader(first, second, third, bytes.get(3))) {\n+ return XContentType.CBOR;\n+ }\n+ // note: technically this only needs length >= 2, but if the string is just \" {\", then it's not JSON, but\n+ // \" {}\" is JSON (3 characters)\n+ for (int i = 1; i < length; i++) {\n+ if (bytes.get(i) == '{') {\n+ return XContentType.JSON;\n+ }\n }\n }\n return null;\n }\n+\n+ /**\n+ * Determine if the specified bytes represent a CBOR encoded stream.\n+ * <p />\n+ * This performs two checks to verify that it is indeed valid/usable CBOR data:\n+ * <ol>\n+ * <li>Checks the first three bytes for the\n+ * {@link CBORConstants#TAG_ID_SELF_DESCRIBE self-identifying CBOR tag (header)}</li>\n+ * <li>Checks that the fourth byte represents a major type that is an object, as opposed to some other type (e.g.,\n+ * text)</li>\n+ * </ol>\n+ *\n+ * @param first The first byte of the header\n+ * @param second The second byte of the header\n+ * @param third The third byte of the header\n+ * @param fourth The fourth byte represents the <em>first</em> byte of the CBOR data, indicating the data's type\n+ *\n+ * @return {@code true} if a CBOR byte stream is detected. 
{@code false} otherwise.\n+ */\n+ private static boolean isCBORObjectHeader(byte first, byte second, byte third, byte fourth) {\n+ // Determine if it uses the ID TAG (0xd9f7), then see if it starts with an object (equivalent to\n+ // checking in JSON if it starts with '{')\n+ return CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_TAG, first) &&\n+ (((second << 8) & 0xff00) | (third & 0xff)) == CBORConstants.TAG_ID_SELF_DESCRIBE &&\n+ CBORConstants.hasMajorType(CBORConstants.MAJOR_TYPE_OBJECT, fourth);\n+ }\n }", "filename": "src/main/java/org/elasticsearch/common/xcontent/XContentFactory.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n \n import com.fasterxml.jackson.core.JsonEncoding;\n import com.fasterxml.jackson.dataformat.cbor.CBORFactory;\n+import com.fasterxml.jackson.dataformat.cbor.CBORGenerator;\n+\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.io.FastStringReader;\n@@ -43,6 +45,8 @@ public static XContentBuilder contentBuilder() throws IOException {\n static {\n cborFactory = new CBORFactory();\n cborFactory.configure(CBORFactory.Feature.FAIL_ON_SYMBOL_HASH_OVERFLOW, false); // this trips on many mappings now...\n+ // Enable prefixing the entire byte stream with a CBOR header (\"tag\")\n+ cborFactory.configure(CBORGenerator.Feature.WRITE_TYPE_HEADER, true);\n cborXContent = new CborXContent();\n }\n ", "filename": "src/main/java/org/elasticsearch/common/xcontent/cbor/CborXContent.java", "status": "modified" }, { "diff": "@@ -29,37 +29,45 @@\n import static org.hamcrest.Matchers.equalTo;\n \n /**\n- *\n+ * Tests {@link XContentFactory} type generation.\n */\n public class XContentFactoryTests extends ElasticsearchTestCase {\n \n \n @Test\n public void testGuessJson() throws IOException {\n- testGuessType(XContentType.JSON);\n+ assertType(XContentType.JSON);\n }\n \n @Test\n public void testGuessSmile() throws IOException {\n- testGuessType(XContentType.SMILE);\n+ assertType(XContentType.SMILE);\n }\n \n @Test\n public void testGuessYaml() throws IOException {\n- testGuessType(XContentType.YAML);\n+ assertType(XContentType.YAML);\n }\n \n @Test\n public void testGuessCbor() throws IOException {\n- testGuessType(XContentType.CBOR);\n+ assertType(XContentType.CBOR);\n }\n \n- private void testGuessType(XContentType type) throws IOException {\n- XContentBuilder builder = XContentFactory.contentBuilder(type);\n- builder.startObject();\n- builder.field(\"field1\", \"value1\");\n- builder.endObject();\n+ private void assertType(XContentType type) throws IOException {\n+ for (XContentBuilder builder : generateBuilders(type)) {\n+ assertBuilderType(builder, type);\n+ }\n+ }\n \n+ /**\n+ * Assert the {@code builder} maps to the appropriate {@code type}.\n+ *\n+ * @param builder Builder to check.\n+ * @param type Type to match.\n+ * @throws IOException if any error occurs while checking the builder\n+ */\n+ private void assertBuilderType(XContentBuilder builder, XContentType type) throws IOException {\n assertThat(XContentFactory.xContentType(builder.bytes()), equalTo(type));\n BytesArray bytesArray = builder.bytes().toBytesArray();\n assertThat(XContentFactory.xContentType(new BytesStreamInput(bytesArray.array(), bytesArray.arrayOffset(), bytesArray.length(), false)), equalTo(type));\n@@ -69,4 +77,30 @@ private void testGuessType(XContentType type) throws IOException {\n assertThat(XContentFactory.xContentType(builder.string()), equalTo(type));\n }\n }\n+\n+ /**\n+ * Generate 
builders to test various use cases to check.\n+ *\n+ * @param type The type to use.\n+ * @return Never {@code null} array of unique {@link XContentBuilder}s testing different edge cases.\n+ * @throws IOException if any error occurs while generating builders\n+ */\n+ private XContentBuilder[] generateBuilders(XContentType type) throws IOException {\n+ XContentBuilder[] builders = new XContentBuilder[] {\n+ XContentFactory.contentBuilder(type), XContentFactory.contentBuilder(type)\n+ };\n+\n+ // simple object\n+ builders[0].startObject();\n+ builders[0].field(\"field1\", \"value1\");\n+ builders[0].startObject(\"object1\");\n+ builders[0].field(\"field2\", \"value2\");\n+ builders[0].endObject();\n+ builders[0].endObject();\n+\n+ // empty object\n+ builders[1].startObject().endObject();\n+\n+ return builders;\n+ }\n }", "filename": "src/test/java/org/elasticsearch/common/xcontent/XContentFactoryTests.java", "status": "modified" }, { "diff": "@@ -19,10 +19,6 @@\n \n package org.elasticsearch.common.xcontent.cbor;\n \n-import com.fasterxml.jackson.core.JsonFactory;\n-import com.fasterxml.jackson.core.JsonGenerator;\n-import com.fasterxml.jackson.core.JsonParser;\n-import com.fasterxml.jackson.dataformat.cbor.CBORFactory;\n import org.elasticsearch.common.io.stream.BytesStreamOutput;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentGenerator;\n@@ -31,7 +27,6 @@\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.junit.Test;\n \n-import java.io.ByteArrayOutputStream;\n import java.io.IOException;\n \n import static org.hamcrest.Matchers.equalTo;", "filename": "src/test/java/org/elasticsearch/common/xcontent/cbor/JsonVsCborTests.java", "status": "modified" } ] }
{ "body": "This query used to work fine on 1.3.5:\n\n```\n{\n \"from\": 0,\n \"size\": 0,\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"match_all\": {}\n },\n \"filter\": {\n \"bool\": {\n \"must\": {\n \"term\": {\n \"closed\": false\n }\n },\n \"_cache\": true\n }\n }\n }\n },\n \"aggregations\": {\n \"missing-external_link-square\": {\n \"missing\": {\n \"field\": \"external_link.square\"\n }\n }\n }\n}\n```\n\nI upgraded the index to 1.4.0, and now it returns this stack trace (on 10 shards):\n\n{\n\"error\": \"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][0]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][0]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][1]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][1]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][2]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][2]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][3]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][3]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][4]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][4]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][5]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][5]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][6]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][6]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][7]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][7]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][8]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][8]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; 
}{[LBJC8-OYQySmdIMINrn-Ow][radius_2014-11-13-23-39_updated][9]: QueryPhaseExecutionException[[radius_2014-11-13-23-39_updated][9]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException; }]\",\n\"status\": 500\n}\n", "comments": [ { "body": "@jpountz can you take a look at this?\n", "created_at": "2014-11-23T13:27:12Z" }, { "body": "@tylerprete Can you share the mappings of your `external_link.square` field? If you look at the lines that have been written in the elasticsearch log file at the same time that you got this error, you should see a stack trace for the `ArrayIndexOutOfBoundsException`, this information would be very helpful too. Thanks!\n", "created_at": "2014-11-23T20:42:34Z" }, { "body": "Thanks for looking into this so quickly!\n\nMapping for that field is nothing fancy:\n\n```\n{\n \"external_link\": {\n \"properties\": {\n \"square\": {\n \"type\": \"string\"\n }\n }\n }\n}\n```\n\nI believe this is the stack trace you're looking for:\n\n```\n[2014-11-24 04:41:17,894][DEBUG][action.search.type ] [10.0.4.38] [radius_2014-11-13-23-39_updated][5], node[AlrEzws1TiiuZ6CiC-9RvQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@3e546cc6] lastShard [true]\norg.elasticsearch.search.query.QueryPhaseExecutionException: [radius_2014-11-13-23-39_updated][5]: query[filtered(ConstantScore(cache(BooleanFilter(+cache(closed:F)))))->cache(_type:place)],from[0],size[0]: Query Failed [Failed to execute main query]\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:163)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:275)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: java.lang.ArrayIndexOutOfBoundsException: 728\n at org.apache.lucene.util.packed.Direct16.get(Direct16.java:54)\n at org.apache.lucene.util.packed.PackedLongValues.get(PackedLongValues.java:102)\n at org.apache.lucene.util.packed.PackedLongValues.get(PackedLongValues.java:110)\n at org.elasticsearch.index.fielddata.ordinals.MultiOrdinals$MultiDocs.ordAt(MultiOrdinals.java:166)\n at org.elasticsearch.index.fielddata.AbstractRandomAccessOrds.nextOrd(AbstractRandomAccessOrds.java:41)\n at org.apache.lucene.index.DocValues$5.get(DocValues.java:171)\n at org.elasticsearch.search.aggregations.bucket.missing.MissingAggregator.collect(MissingAggregator.java:59)\n at org.elasticsearch.search.aggregations.BucketCollector$2.collect(BucketCollector.java:81)\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectBucketNoCounts(BucketsAggregator.java:74)\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectExistingBucket(BucketsAggregator.java:63)\n at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectBucket(BucketsAggregator.java:55)\n at org.elasticsearch.search.aggregations.bucket.filter.FilterAggregator.collect(FilterAggregator.java:61)\n at 
org.elasticsearch.search.aggregations.AggregationPhase$AggregationsCollector.collect(AggregationPhase.java:161)\n at org.elasticsearch.common.lucene.MultiCollector.collect(MultiCollector.java:60)\n at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:193)\n at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)\n at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)\n at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:191)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:117)\n ... 7 more\n```\n", "created_at": "2014-11-24T04:45:30Z" } ], "number": 8580, "title": "Missing Aggregation in 1.4.0 throws ArrayOutOfBoundsException" }
{ "body": "Our iterator over global ordinals is currently incorrect since it does NOT\nreturn -1 (NO_MORE_ORDS) when all ordinals have been consumed. This bug does\nnot strike immediately with elasticsearch since we always consume ordinals in\na random-access fashion. However it strikes when consuming ordinals through\nLucene helpers such as DocValues#docsWithField.\n\nClose #8580\n", "number": 8627, "review_comments": [], "title": "Fix iterator over global ordinals." }
{ "commits": [ { "message": "Fielddata: Fix iterator over global ordinals.\n\nOur iterator over global ordinals is currently incorrect since it does NOT\nreturn -1 (NO_MORE_ORDS) when all ordinals have been consumed. This bug does\nnot strike immediately with elasticsearch since we always consume ordinals in\na random-access fashion. However it strikes when consuming ordinals through\nLucene helpers such as DocValues#docsWithField.\n\nClose #8580" } ], "files": [ { "diff": "@@ -38,7 +38,11 @@ public final void setDocument(int docID) {\n \n @Override\n public long nextOrd() {\n- return ordAt(i++);\n+ if (i < cardinality()) {\n+ return ordAt(i++);\n+ } else {\n+ return NO_MORE_ORDS;\n+ }\n }\n \n }", "filename": "src/main/java/org/elasticsearch/index/fielddata/AbstractRandomAccessOrds.java", "status": "modified" }, { "diff": "@@ -59,6 +59,11 @@ protected String toString(Object value) {\n \n protected abstract void add2SingleValuedDocumentsAndDeleteOneOfThem() throws Exception;\n \n+ protected long minRamBytesUsed() {\n+ // minimum number of bytes that this fielddata instance is expected to require\n+ return 1;\n+ }\n+\n @Test\n public void testDeletedDocs() throws Exception {\n add2SingleValuedDocumentsAndDeleteOneOfThem();\n@@ -78,7 +83,7 @@ public void testSingleValueAllSet() throws Exception {\n IndexFieldData indexFieldData = getForField(\"value\");\n LeafReaderContext readerContext = refreshReader();\n AtomicFieldData fieldData = indexFieldData.load(readerContext);\n- assertThat(fieldData.ramBytesUsed(), greaterThan(0l));\n+ assertThat(fieldData.ramBytesUsed(), greaterThanOrEqualTo(minRamBytesUsed()));\n \n SortedBinaryDocValues bytesValues = fieldData.getBytesValues();\n \n@@ -141,7 +146,7 @@ public void testSingleValueWithMissing() throws Exception {\n fillSingleValueWithMissing();\n IndexFieldData indexFieldData = getForField(\"value\");\n AtomicFieldData fieldData = indexFieldData.load(refreshReader());\n- assertThat(fieldData.ramBytesUsed(), greaterThan(0l));\n+ assertThat(fieldData.ramBytesUsed(), greaterThanOrEqualTo(minRamBytesUsed()));\n \n SortedBinaryDocValues bytesValues = fieldData\n .getBytesValues();\n@@ -158,7 +163,7 @@ public void testMultiValueAllSet() throws Exception {\n fillMultiValueAllSet();\n IndexFieldData indexFieldData = getForField(\"value\");\n AtomicFieldData fieldData = indexFieldData.load(refreshReader());\n- assertThat(fieldData.ramBytesUsed(), greaterThan(0l));\n+ assertThat(fieldData.ramBytesUsed(), greaterThanOrEqualTo(minRamBytesUsed()));\n \n SortedBinaryDocValues bytesValues = fieldData.getBytesValues();\n \n@@ -189,7 +194,7 @@ public void testMultiValueWithMissing() throws Exception {\n fillMultiValueWithMissing();\n IndexFieldData indexFieldData = getForField(\"value\");\n AtomicFieldData fieldData = indexFieldData.load(refreshReader());\n- assertThat(fieldData.ramBytesUsed(), greaterThan(0l));\n+ assertThat(fieldData.ramBytesUsed(), greaterThanOrEqualTo(minRamBytesUsed()));\n \n SortedBinaryDocValues bytesValues = fieldData.getBytesValues();\n ", "filename": "src/test/java/org/elasticsearch/index/fielddata/AbstractFieldDataImplTests.java", "status": "modified" }, { "diff": "@@ -20,9 +20,11 @@\n package org.elasticsearch.index.fielddata;\n \n import com.carrotsearch.randomizedtesting.generators.RandomPicks;\n+\n import org.apache.lucene.document.Document;\n import org.apache.lucene.document.Field;\n import org.apache.lucene.document.Field.Store;\n+import org.apache.lucene.document.SortedSetDocValuesField;\n import 
org.apache.lucene.document.StringField;\n import org.apache.lucene.index.DirectoryReader;\n import org.apache.lucene.index.LeafReaderContext;\n@@ -69,32 +71,37 @@\n */\n public abstract class AbstractStringFieldDataTests extends AbstractFieldDataImplTests {\n \n+ private void addField(Document d, String name, String value) {\n+ d.add(new StringField(name, value, Field.Store.YES));\n+ d.add(new SortedSetDocValuesField(name, new BytesRef(value)));\n+ }\n+\n protected void fillSingleValueAllSet() throws Exception {\n Document d = new Document();\n- d.add(new StringField(\"_id\", \"1\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"2\", Field.Store.NO));\n+ addField(d, \"_id\", \"1\");\n+ addField(d, \"value\", \"2\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"1\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"1\", Field.Store.NO));\n+ addField(d, \"_id\", \"1\");\n+ addField(d, \"value\", \"1\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"3\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"3\", Field.Store.NO));\n+ addField(d, \"_id\", \"3\");\n+ addField(d, \"value\", \"3\");\n writer.addDocument(d);\n }\n \n protected void add2SingleValuedDocumentsAndDeleteOneOfThem() throws Exception {\n Document d = new Document();\n- d.add(new StringField(\"_id\", \"1\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"2\", Field.Store.NO));\n+ addField(d, \"_id\", \"1\");\n+ addField(d, \"value\", \"2\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"2\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"4\", Field.Store.NO));\n+ addField(d, \"_id\", \"2\");\n+ addField(d, \"value\", \"4\");\n writer.addDocument(d);\n \n writer.commit();\n@@ -104,120 +111,120 @@ protected void add2SingleValuedDocumentsAndDeleteOneOfThem() throws Exception {\n \n protected void fillSingleValueWithMissing() throws Exception {\n Document d = new Document();\n- d.add(new StringField(\"_id\", \"1\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"2\", Field.Store.NO));\n+ addField(d, \"_id\", \"1\");\n+ addField(d, \"value\", \"2\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"2\", Field.Store.NO));\n+ addField(d, \"_id\", \"2\");\n //d.add(new StringField(\"value\", one(), Field.Store.NO)); // MISSING....\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"3\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"3\", Field.Store.NO));\n+ addField(d, \"_id\", \"3\");\n+ addField(d, \"value\", \"3\");\n writer.addDocument(d);\n }\n \n protected void fillMultiValueAllSet() throws Exception {\n Document d = new Document();\n- d.add(new StringField(\"_id\", \"1\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"2\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"4\", Field.Store.NO));\n+ addField(d, \"_id\", \"1\");\n+ addField(d, \"value\", \"2\");\n+ addField(d, \"value\", \"4\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"2\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"1\", Field.Store.NO));\n+ addField(d, \"_id\", \"2\");\n+ addField(d, \"value\", \"1\");\n writer.addDocument(d);\n writer.commit(); // TODO: Have tests with more docs for sorting\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"3\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"3\", Field.Store.NO));\n+ addField(d, 
\"_id\", \"3\");\n+ addField(d, \"value\", \"3\");\n writer.addDocument(d);\n }\n \n protected void fillMultiValueWithMissing() throws Exception {\n Document d = new Document();\n- d.add(new StringField(\"_id\", \"1\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"2\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"4\", Field.Store.NO));\n+ addField(d, \"_id\", \"1\");\n+ addField(d, \"value\", \"2\");\n+ addField(d, \"value\", \"4\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"2\", Field.Store.NO));\n+ addField(d, \"_id\", \"2\");\n //d.add(new StringField(\"value\", one(), Field.Store.NO)); // MISSING\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"3\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"3\", Field.Store.NO));\n+ addField(d, \"_id\", \"3\");\n+ addField(d, \"value\", \"3\");\n writer.addDocument(d);\n }\n \n protected void fillAllMissing() throws Exception {\n Document d = new Document();\n- d.add(new StringField(\"_id\", \"1\", Field.Store.NO));\n+ addField(d, \"_id\", \"1\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"2\", Field.Store.NO));\n+ addField(d, \"_id\", \"2\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"3\", Field.Store.NO));\n+ addField(d, \"_id\", \"3\");\n writer.addDocument(d);\n }\n \n protected void fillExtendedMvSet() throws Exception {\n Document d = new Document();\n- d.add(new StringField(\"_id\", \"1\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"02\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"04\", Field.Store.NO));\n+ addField(d, \"_id\", \"1\");\n+ addField(d, \"value\", \"02\");\n+ addField(d, \"value\", \"04\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"2\", Field.Store.NO));\n+ addField(d, \"_id\", \"2\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"3\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"03\", Field.Store.NO));\n+ addField(d, \"_id\", \"3\");\n+ addField(d, \"value\", \"03\");\n writer.addDocument(d);\n writer.commit();\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"4\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"04\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"05\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"06\", Field.Store.NO));\n+ addField(d, \"_id\", \"4\");\n+ addField(d, \"value\", \"04\");\n+ addField(d, \"value\", \"05\");\n+ addField(d, \"value\", \"06\");\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"5\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"06\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"07\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"08\", Field.Store.NO));\n+ addField(d, \"_id\", \"5\");\n+ addField(d, \"value\", \"06\");\n+ addField(d, \"value\", \"07\");\n+ addField(d, \"value\", \"08\");\n writer.addDocument(d);\n \n d = new Document();\n d.add(new StringField(\"_id\", \"6\", Field.Store.NO));\n writer.addDocument(d);\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"7\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"08\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"09\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"10\", Field.Store.NO));\n+ addField(d, \"_id\", \"7\");\n+ addField(d, \"value\", \"08\");\n+ addField(d, 
\"value\", \"09\");\n+ addField(d, \"value\", \"10\");\n writer.addDocument(d);\n writer.commit();\n \n d = new Document();\n- d.add(new StringField(\"_id\", \"8\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"!08\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"!09\", Field.Store.NO));\n- d.add(new StringField(\"value\", \"!10\", Field.Store.NO));\n+ addField(d, \"_id\", \"8\");\n+ addField(d, \"value\", \"!08\");\n+ addField(d, \"value\", \"!09\");\n+ addField(d, \"value\", \"!10\");\n writer.addDocument(d);\n }\n \n@@ -231,9 +238,6 @@ public void testActualMissingValueReverse() throws IOException {\n \n public void testActualMissingValue(boolean reverse) throws IOException {\n // missing value is set to an actual value\n- Document d = new Document();\n- final StringField s = new StringField(\"value\", \"\", Field.Store.YES);\n- d.add(s);\n final String[] values = new String[randomIntBetween(2, 30)];\n for (int i = 1; i < values.length; ++i) {\n values[i] = TestUtil.randomUnicodeString(getRandom());\n@@ -244,7 +248,8 @@ public void testActualMissingValue(boolean reverse) throws IOException {\n if (value == null) {\n writer.addDocument(new Document());\n } else {\n- s.setStringValue(value);\n+ Document d = new Document();\n+ addField(d, \"value\", value);\n writer.addDocument(d);\n }\n if (randomInt(10) == 0) {\n@@ -289,9 +294,6 @@ public void testSortMissingLastReverse() throws IOException {\n }\n \n public void testSortMissing(boolean first, boolean reverse) throws IOException {\n- Document d = new Document();\n- final StringField s = new StringField(\"value\", \"\", Field.Store.YES);\n- d.add(s);\n final String[] values = new String[randomIntBetween(2, 10)];\n for (int i = 1; i < values.length; ++i) {\n values[i] = TestUtil.randomUnicodeString(getRandom());\n@@ -302,7 +304,8 @@ public void testSortMissing(boolean first, boolean reverse) throws IOException {\n if (value == null) {\n writer.addDocument(new Document());\n } else {\n- s.setStringValue(value);\n+ Document d = new Document();\n+ addField(d, \"value\", value);\n writer.addDocument(d);\n }\n if (randomInt(10) == 0) {\n@@ -359,15 +362,15 @@ public void testNestedSorting(MultiValueMode sortMode) throws IOException {\n final int numValues = randomInt(3);\n for (int k = 0; k < numValues; ++k) {\n final String value = RandomPicks.randomFrom(getRandom(), values);\n- child.add(new StringField(\"text\", value, Store.YES));\n+ addField(child, \"text\", value);\n }\n docs.add(child);\n }\n final Document parent = new Document();\n parent.add(new StringField(\"type\", \"parent\", Store.YES));\n final String value = RandomPicks.randomFrom(getRandom(), values);\n if (value != null) {\n- parent.add(new StringField(\"text\", value, Store.YES));\n+ addField(parent, \"text\", value);\n }\n docs.add(parent);\n int bit = parents.prevSetBit(parents.length() - 1) + docs.size();\n@@ -446,6 +449,19 @@ public void testNestedSorting(MultiValueMode sortMode) throws IOException {\n searcher.getIndexReader().close();\n }\n \n+ private void assertIteratorConsistentWithRandomAccess(RandomAccessOrds ords, int maxDoc) {\n+ for (int doc = 0; doc < maxDoc; ++doc) {\n+ ords.setDocument(doc);\n+ final int cardinality = ords.cardinality();\n+ for (int i = 0; i < cardinality; ++i) {\n+ assertEquals(ords.nextOrd(), ords.ordAt(i));\n+ }\n+ for (int i = 0; i < 3; ++i) {\n+ assertEquals(ords.nextOrd(), -1);\n+ }\n+ }\n+ }\n+\n @Test\n public void testGlobalOrdinals() throws Exception {\n fillExtendedMvSet();\n@@ -457,8 +473,10 @@ public void 
testGlobalOrdinals() throws Exception {\n \n // First segment\n assertThat(globalOrdinals, instanceOf(GlobalOrdinalsIndexFieldData.class));\n- AtomicOrdinalsFieldData afd = globalOrdinals.load(topLevelReader.leaves().get(0));\n+ LeafReaderContext leaf = topLevelReader.leaves().get(0);\n+ AtomicOrdinalsFieldData afd = globalOrdinals.load(leaf);\n RandomAccessOrds values = afd.getOrdinalsValues();\n+ assertIteratorConsistentWithRandomAccess(values, leaf.reader().maxDoc());\n values.setDocument(0);\n assertThat(values.cardinality(), equalTo(2));\n long ord = values.nextOrd();\n@@ -476,8 +494,10 @@ public void testGlobalOrdinals() throws Exception {\n assertThat(values.lookupOrd(ord).utf8ToString(), equalTo(\"03\"));\n \n // Second segment\n- afd = globalOrdinals.load(topLevelReader.leaves().get(1));\n+ leaf = topLevelReader.leaves().get(1);\n+ afd = globalOrdinals.load(leaf);\n values = afd.getOrdinalsValues();\n+ assertIteratorConsistentWithRandomAccess(values, leaf.reader().maxDoc());\n values.setDocument(0);\n assertThat(values.cardinality(), equalTo(3));\n ord = values.nextOrd();\n@@ -515,8 +535,10 @@ public void testGlobalOrdinals() throws Exception {\n assertThat(values.lookupOrd(ord).utf8ToString(), equalTo(\"10\"));\n \n // Third segment\n- afd = globalOrdinals.load(topLevelReader.leaves().get(2));\n+ leaf = topLevelReader.leaves().get(2);\n+ afd = globalOrdinals.load(leaf);\n values = afd.getOrdinalsValues();\n+ assertIteratorConsistentWithRandomAccess(values, leaf.reader().maxDoc());\n values.setDocument(0);\n values.setDocument(0);\n assertThat(values.cardinality(), equalTo(3));", "filename": "src/test/java/org/elasticsearch/index/fielddata/AbstractStringFieldDataTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,36 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.fielddata;\n+\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.index.fielddata.ordinals.OrdinalsBuilder;\n+\n+public class SortedSetDVStringFieldDataTests extends AbstractStringFieldDataTests {\n+\n+ @Override\n+ protected FieldDataType getFieldDataType() {\n+ return new FieldDataType(\"string\", ImmutableSettings.builder().put(\"format\", \"doc_values\").put(OrdinalsBuilder.FORCE_MULTI_ORDINALS, randomBoolean()));\n+ }\n+\n+ @Override\n+ protected long minRamBytesUsed() {\n+ return 0;\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/fielddata/SortedSetDVStringFieldDataTests.java", "status": "added" } ] }
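The contract restored by the fix above matters most to pure iterator consumers. As an illustration only, here is a minimal Java sketch (hypothetical class and method names, assuming the Lucene 5.x-era `RandomAccessOrds`/`SortedSetDocValues` API shown in the diff) of a consumer that walks a document's ordinals solely through `nextOrd()`; such a loop terminates only because `nextOrd()` eventually returns `NO_MORE_ORDS`.

```
import org.apache.lucene.index.RandomAccessOrds;
import org.apache.lucene.index.SortedSetDocValues;

final class OrdIterationSketch {
    // Counts the ordinals of one document through the iterator API alone,
    // without touching cardinality() or ordAt(). The loop terminates only
    // because nextOrd() returns NO_MORE_ORDS (-1) once the document's
    // ordinals are exhausted, which is the guarantee the fix restores.
    static int countOrdsViaIterator(RandomAccessOrds ords, int doc) {
        ords.setDocument(doc);
        int count = 0;
        for (long ord = ords.nextOrd(); ord != SortedSetDocValues.NO_MORE_ORDS; ord = ords.nextOrd()) {
            count++;
        }
        return count;
    }
}
```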
{ "body": "BWC tests run into a failure this morning which is caused by the verification of the old adler 32 checksums we added recently. \n\nhttp://build-us-00.elasticsearch.org/job/es_bwc_1x/5047/CHECK_BRANCH=tags%2Fv1.2.4,jdk=JDK7,label=bwc/\n\nthe problem here is the following\n- index was created with es `1.2.4` which still records Adler32 checksums in the legacy checksum file\n- we flush but apparently the last segment `_h` didn't make it into the commit but was recorded in the checksums file. \n- we uprade the node to `1.4.1-SNAPSHOT` - the index opens just fine\n- we apply the transaction log and IndexWriter starts writing a segment `_h`\n - note: now we have a Adler32 checksum for `_h` in the checksum file but the files are actually not the once that where checksummed.\n- since we have a replica we initiate a recovery and in our `Store.java` code line `#638` we prefer the adler checksum even though we could get the original checksum from lucene.\n- on recovery we now compare the checksums and they obviously don't match - in turn fail the primary :-1: \n\nI think the fix here is to prefer new checksums since they are taken from the file if we know we have them....\n", "comments": [ { "body": "just as a sidenote - the code that has the problem was never released to this is not affecting `1.4.0`\n", "created_at": "2014-11-21T18:38:31Z" } ], "number": 8587, "title": "Recovery detects false corruption if legacy checksums are present for a new written segment" }
{ "body": "We started to use the lucene CRC32 checksums instead of the legacy Adler32\nin `v1.3.0` which was the first version using lucene `4.9.0`. We can safely\nassume that if the segment was written with this version that checksums\nfrom lucene can be used even if the legacy checksum claims that it has a Adler32\nfor a given file / segment.\n\nCloses #8587 \n\nNOTE: this PR is against 1.x but will apply to master too but since it needs to pass BWC tests opened it against 1.x\n", "number": 8599, "review_comments": [ { "body": "typo: lowercase k?\n", "created_at": "2014-11-21T20:07:48Z" }, { "body": "g00d catch\n", "created_at": "2014-11-21T20:08:45Z" }, { "body": "\"and at\" -> \"and a\"?\n", "created_at": "2014-11-21T20:08:48Z" } ], "title": "Use Lucene checksums if segment version is >= 4.9.0" }
{ "commits": [ { "message": "[STORE] Use Lucene checksums if segment version is >= 4.9.0\n\nWe started to use the lucene CRC32 checksums instead of the legacy Adler32\nin `v1.3.0` which was the first version using lucene `4.9.0`. We can safely\nassume that if the segment was written with this version that checksums\nfrom lucene can be used even if the legacy checksum claims that it has a Adler32\nfor a given file / segment.\n\nCloses #8587" } ], "files": [ { "diff": "@@ -603,6 +603,9 @@ public String toString() {\n */\n public final static class MetadataSnapshot implements Iterable<StoreFileMetaData> {\n private static final ESLogger logger = Loggers.getLogger(MetadataSnapshot.class);\n+ private static final Version FIRST_LUCENE_CHECKSUM_VERSION = Version.LUCENE_48;\n+ // we stopped writing legacy checksums in 1.3.0 so all segments here must use the new CRC32 version\n+ private static final Version FIRST_ES_CRC32_VERSION = org.elasticsearch.Version.V_1_3_0.luceneVersion;\n \n private final Map<String, StoreFileMetaData> metadata;\n \n@@ -620,6 +623,11 @@ public MetadataSnapshot(Map<String, StoreFileMetaData> metadata) {\n metadata = buildMetadata(commit, directory, logger);\n }\n \n+ private static final boolean useLuceneChecksum(Version version, boolean hasLegacyChecksum) {\n+ return (version.onOrAfter(FIRST_LUCENE_CHECKSUM_VERSION) && hasLegacyChecksum == false) // no legacy checksum and a guarantee that lucene has checksums\n+ || version.onOrAfter(FIRST_ES_CRC32_VERSION); // OR we know that we didn't even write legacy checksums anymore when this segment was written.\n+ }\n+\n ImmutableMap<String, StoreFileMetaData> buildMetadata(IndexCommit commit, Directory directory, ESLogger logger) throws IOException {\n ImmutableMap.Builder<String, StoreFileMetaData> builder = ImmutableMap.builder();\n Map<String, String> checksumMap = readLegacyChecksums(directory).v1();\n@@ -633,7 +641,7 @@ ImmutableMap<String, StoreFileMetaData> buildMetadata(IndexCommit commit, Direct\n }\n for (String file : info.files()) {\n String legacyChecksum = checksumMap.get(file);\n- if (version.onOrAfter(Version.LUCENE_4_8) && legacyChecksum == null) {\n+ if (useLuceneChecksum(version, legacyChecksum != null)) {\n checksumFromLuceneFile(directory, file, builder, logger, version, Lucene46SegmentInfoFormat.SI_EXTENSION.equals(IndexFileNames.getExtension(file)));\n } else {\n builder.put(file, new StoreFileMetaData(file, directory.fileLength(file), legacyChecksum, null));\n@@ -642,7 +650,7 @@ ImmutableMap<String, StoreFileMetaData> buildMetadata(IndexCommit commit, Direct\n }\n final String segmentsFile = segmentCommitInfos.getSegmentsFileName();\n String legacyChecksum = checksumMap.get(segmentsFile);\n- if (maxVersion.onOrAfter(Version.LUCENE_4_8) && legacyChecksum == null) {\n+ if (useLuceneChecksum(maxVersion, legacyChecksum != null)) {\n checksumFromLuceneFile(directory, segmentsFile, builder, logger, maxVersion, true);\n } else {\n builder.put(segmentsFile, new StoreFileMetaData(segmentsFile, directory.fileLength(segmentsFile), legacyChecksum, null));", "filename": "src/main/java/org/elasticsearch/index/store/Store.java", "status": "modified" }, { "diff": "@@ -19,14 +19,12 @@\n package org.elasticsearch.index.store;\n \n import org.apache.lucene.analysis.MockAnalyzer;\n-import org.apache.lucene.codecs.CodecUtil;\n+import org.apache.lucene.codecs.*;\n+import org.apache.lucene.codecs.lucene45.Lucene45RWCodec;\n import org.apache.lucene.document.*;\n import org.apache.lucene.index.*;\n import 
org.apache.lucene.store.*;\n-import org.apache.lucene.util.BytesRef;\n-import org.apache.lucene.util.IOUtils;\n-import org.apache.lucene.util.TestUtil;\n-import org.apache.lucene.util.Version;\n+import org.apache.lucene.util.*;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.env.ShardLock;\n import org.elasticsearch.index.Index;\n@@ -36,6 +34,8 @@\n import org.elasticsearch.index.store.distributor.RandomWeightedDistributor;\n import org.elasticsearch.test.DummyShardLock;\n import org.elasticsearch.test.ElasticsearchLuceneTestCase;\n+import org.junit.AfterClass;\n+import org.junit.BeforeClass;\n import org.junit.Test;\n \n import java.io.FileNotFoundException;\n@@ -50,6 +50,16 @@\n \n public class StoreTest extends ElasticsearchLuceneTestCase {\n \n+ @BeforeClass\n+ public static void before() {\n+ LuceneTestCase.OLD_FORMAT_IMPERSONATION_IS_ACTIVE = true;\n+ }\n+\n+ @AfterClass\n+ public static void after() {\n+ LuceneTestCase.OLD_FORMAT_IMPERSONATION_IS_ACTIVE = false;\n+ }\n+\n @Test\n public void testRefCount() throws IOException {\n final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n@@ -188,13 +198,40 @@ public void testVerifyingIndexOutputWithBogusInput() throws IOException {\n IOUtils.close(verifyingOutput, dir);\n }\n \n+ private static final class OldSIMockingCodec extends Lucene45RWCodec {\n+\n+ @Override\n+ public SegmentInfoFormat segmentInfoFormat() {\n+ final SegmentInfoFormat segmentInfoFormat = super.segmentInfoFormat();\n+ return new SegmentInfoFormat() {\n+ @Override\n+ public SegmentInfoReader getSegmentInfoReader() {\n+ return segmentInfoFormat.getSegmentInfoReader();\n+ }\n+\n+ @Override\n+ public SegmentInfoWriter getSegmentInfoWriter() {\n+ final SegmentInfoWriter segmentInfoWriter = segmentInfoFormat.getSegmentInfoWriter();\n+ return new SegmentInfoWriter() {\n+ @Override\n+ public void write(Directory dir, SegmentInfo info, FieldInfos fis, IOContext ioContext) throws IOException {\n+ info.setVersion(Version.LUCENE_45); // maybe lucene should do this too...\n+ segmentInfoWriter.write(dir, info, fis, ioContext);\n+ }\n+ };\n+ }\n+ };\n+ }\n+ }\n+\n @Test\n public void testWriteLegacyChecksums() throws IOException {\n final ShardId shardId = new ShardId(new Index(\"index\"), 1);\n DirectoryService directoryService = new LuceneManagedDirectoryService(random());\n Store store = new Store(shardId, ImmutableSettings.EMPTY, null, directoryService, randomDistributor(directoryService), new DummyShardLock(shardId));\n // set default codec - all segments need checksums\n- IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), TEST_VERSION_CURRENT, new MockAnalyzer(random())).setCodec(actualDefaultCodec()));\n+ final boolean usesOldCodec = randomBoolean();\n+ IndexWriter writer = new IndexWriter(store.directory(), newIndexWriterConfig(random(), TEST_VERSION_CURRENT, new MockAnalyzer(random())).setCodec(usesOldCodec ? 
new OldSIMockingCodec() : actualDefaultCodec()));\n int docs = 1 + random().nextInt(100);\n \n for (int i = 0; i < docs; i++) {\n@@ -234,23 +271,34 @@ public void testWriteLegacyChecksums() throws IOException {\n if (file.equals(\"write.lock\") || file.equals(IndexFileNames.SEGMENTS_GEN)) {\n continue;\n }\n- try (IndexInput input = store.directory().openInput(file, IOContext.READONCE)) {\n- String checksum = Store.digestToString(CodecUtil.retrieveChecksum(input));\n- StoreFileMetaData storeFileMetaData = new StoreFileMetaData(file, store.directory().fileLength(file), checksum, null);\n- legacyMeta.put(file, storeFileMetaData);\n- checksums.add(storeFileMetaData);\n-\n- }\n-\n+ StoreFileMetaData storeFileMetaData = new StoreFileMetaData(file, store.directory().fileLength(file), file + \"checksum\", null);\n+ legacyMeta.put(file, storeFileMetaData);\n+ checksums.add(storeFileMetaData);\n }\n checksums.write(store);\n \n metadata = store.getMetadata();\n Map<String, StoreFileMetaData> stringStoreFileMetaDataMap = metadata.asMap();\n assertThat(legacyMeta.size(), equalTo(stringStoreFileMetaDataMap.size()));\n- for (StoreFileMetaData meta : legacyMeta.values()) {\n- assertTrue(stringStoreFileMetaDataMap.containsKey(meta.name()));\n- assertTrue(stringStoreFileMetaDataMap.get(meta.name()).isSame(meta));\n+ if (usesOldCodec) {\n+ for (StoreFileMetaData meta : legacyMeta.values()) {\n+ assertTrue(meta.toString(), stringStoreFileMetaDataMap.containsKey(meta.name()));\n+ assertEquals(meta.name() + \"checksum\", meta.checksum());\n+ assertTrue(meta + \" vs. \" + stringStoreFileMetaDataMap.get(meta.name()), stringStoreFileMetaDataMap.get(meta.name()).isSame(meta));\n+ }\n+ } else {\n+\n+ // even if we have a legacy checksum - if we use a new codec we should reuse\n+ for (StoreFileMetaData meta : legacyMeta.values()) {\n+ assertTrue(meta.toString(), stringStoreFileMetaDataMap.containsKey(meta.name()));\n+ assertFalse(meta + \" vs. \" + stringStoreFileMetaDataMap.get(meta.name()), stringStoreFileMetaDataMap.get(meta.name()).isSame(meta));\n+ StoreFileMetaData storeFileMetaData = metadata.get(meta.name());\n+ try (IndexInput input = store.openVerifyingInput(meta.name(), IOContext.DEFAULT, storeFileMetaData)) {\n+ assertTrue(storeFileMetaData.toString(), input instanceof Store.VerifyingIndexInput);\n+ input.seek(meta.length());\n+ Store.verify(input);\n+ }\n+ }\n }\n assertDeleteContent(store, directoryService);\n IOUtils.close(store);", "filename": "src/test/java/org/elasticsearch/index/store/StoreTest.java", "status": "modified" } ] }
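As background to the checksum mismatch described in the record above, the sketch below shows, under stated assumptions, what computing a checksum "from the file" looks like with the JDK's `java.util.zip` classes: the same bytes run through Adler32 and CRC32 yield unrelated values, so a value stored for one family (or for stale file contents) can never be verified against the other. The class name and the file path argument are hypothetical.

```
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.Adler32;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

final class ChecksumFamiliesSketch {
    // Streams a file through the given checksum implementation and returns
    // the resulting value, i.e. a checksum computed from the actual bytes
    // on disk rather than one recorded earlier in a metadata file.
    static long checksum(Path file, Checksum digest) throws IOException {
        byte[] buffer = new byte[8192];
        try (InputStream in = Files.newInputStream(file)) {
            for (int read = in.read(buffer); read != -1; read = in.read(buffer)) {
                digest.update(buffer, 0, read);
            }
        }
        return digest.getValue();
    }

    public static void main(String[] args) throws IOException {
        Path segmentFile = Paths.get(args[0]); // hypothetical segment file
        // Same bytes, two checksum families: the values are unrelated.
        System.out.println("CRC32  : " + checksum(segmentFile, new CRC32()));
        System.out.println("Adler32: " + checksum(segmentFile, new Adler32()));
    }
}
```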
{ "body": "I found this by dropping bloom filter code into lucene test suite. There are consistent failures:\n\njava.lang.AssertionError: Actual RAM usage 136880, but got 915568, -568.8836937463471% error\n\nI think we should just stop writing these?\n", "comments": [], "number": 8564, "title": "bloom filter overestimates ram usage by order of magnitude" }
{ "body": "BloomFilter actually returned the size of the bitset as the\nsize in bytes so off by factor 8 plus a constant :)\n\nCloses #8564\n", "number": 8584, "review_comments": [], "title": "Fix Bloom filter ram usage calculation" }
{ "commits": [ { "message": "[BLOOM] Fix Bloom filter ram usage calculation\n\nBloomFilter actually returned the size of the bitset as the\nsize in bytes so off by factor 8 plus a constant :)\n\nCloses #8564" } ], "files": [ { "diff": "@@ -24,6 +24,7 @@\n import org.apache.lucene.store.DataOutput;\n import org.apache.lucene.store.IndexInput;\n import org.apache.lucene.util.BytesRef;\n+import org.apache.lucene.util.RamUsageEstimator;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.Strings;\n@@ -262,7 +263,7 @@ public int getNumHashFunctions() {\n }\n \n public long getSizeInBytes() {\n- return bits.bitSize() + 8;\n+ return bits.ramBytesUsed();\n }\n \n @Override\n@@ -383,6 +384,10 @@ void putAll(BitArray array) {\n @Override public int hashCode() {\n return Arrays.hashCode(data);\n }\n+\n+ public long ramBytesUsed() {\n+ return RamUsageEstimator.NUM_BYTES_LONG * data.length + RamUsageEstimator.NUM_BYTES_ARRAY_HEADER + 16;\n+ }\n }\n \n static enum Hashing {", "filename": "src/main/java/org/elasticsearch/common/util/BloomFilter.java", "status": "modified" }, { "diff": "@@ -24,10 +24,13 @@\n import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope;\n import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;\n import org.apache.lucene.codecs.Codec;\n+import org.apache.lucene.codecs.PostingsFormat;\n import org.apache.lucene.index.BasePostingsFormatTestCase;\n import org.apache.lucene.util.LuceneTestCase;\n import org.apache.lucene.util.TestUtil;\n import org.apache.lucene.util.TimeUnits;\n+import org.elasticsearch.common.util.BloomFilter;\n+import org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat;\n import org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat;\n import org.elasticsearch.test.ElasticsearchThreadFilter;\n import org.elasticsearch.test.junit.listeners.ReproduceInfoPrinter;\n@@ -44,7 +47,9 @@ public class ElasticsearchPostingsFormatTest extends BasePostingsFormatTestCase\n \n @Override\n protected Codec getCodec() {\n- return TestUtil.alwaysPostingsFormat(new Elasticsearch090PostingsFormat());\n+ return random().nextBoolean() ?\n+ TestUtil.alwaysPostingsFormat(new Elasticsearch090PostingsFormat())\n+ : TestUtil.alwaysPostingsFormat(new BloomFilterPostingsFormat(PostingsFormat.forName(\"Lucene50\"), BloomFilter.Factory.DEFAULT));\n }\n \n }", "filename": "src/test/java/org/elasticsearch/index/codec/postingformat/ElasticsearchPostingsFormatTest.java", "status": "modified" } ] }
{ "body": "If two geo-hashes of the same document fall into the same bucket, then the document count for this bucket will be incremented by 2 instead of 1. Here is a reproduction:\n\n```\nDELETE test\n\nPUT test\n{\n \"mappings\": {\n \"test\": {\n \"properties\": {\n \"points\": {\n \"type\": \"geo_point\"\n }\n }\n }\n }\n}\n\nPUT test/test/1\n{\n \"points\": [\n \"1,2\",\n \"2,3\"\n ]\n}\n\nGET /test/_search?search_type=count\n{\n \"aggs\": {\n \"grid\": {\n \"geohash_grid\": {\n \"field\": \"points\",\n \"precision\": 2\n }\n }\n }\n}\n```\n\nwhich returns:\n\n```\n{\n \"took\": 3,\n \"timed_out\": false,\n \"_shards\": {\n \"total\": 5,\n \"successful\": 5,\n \"failed\": 0\n },\n \"hits\": {\n \"total\": 1,\n \"max_score\": 0,\n \"hits\": []\n },\n \"aggregations\": {\n \"grid\": {\n \"buckets\": [\n {\n \"key\": \"s0\",\n \"doc_count\": 2\n }\n ]\n }\n }\n}\n```\n", "comments": [ { "body": "btw the geohash grid aggregation fails in 1.4.0 if documents with more than one value are present, see https://github.com/elasticsearch/elasticsearch/issues/8507\n", "created_at": "2014-11-17T18:45:16Z" }, { "body": "Indeed, I found this issue while working on a fix for that issue! :) https://github.com/elasticsearch/elasticsearch/pull/8513\n", "created_at": "2014-11-17T22:56:38Z" } ], "number": 8512, "title": "Aggregations: Geohash grid should count documents only once" }
{ "body": "Close #8512\n", "number": 8574, "review_comments": [], "title": "Fix geohash grid doc counts computation on multi-valued fields" }
{ "commits": [ { "message": "Aggregations: Fix geohash grid doc counts computation on multi-valued fields.\n\nClose #8512" } ], "files": [ { "diff": "@@ -74,14 +74,18 @@ public void collect(int doc, long owningBucketOrdinal) throws IOException {\n values.setDocument(doc);\n final int valuesCount = values.count();\n \n+ long previous = Long.MAX_VALUE;\n for (int i = 0; i < valuesCount; ++i) {\n final long val = values.valueAt(i);\n- long bucketOrdinal = bucketOrds.add(val);\n- if (bucketOrdinal < 0) { // already seen\n- bucketOrdinal = - 1 - bucketOrdinal;\n- collectExistingBucket(doc, bucketOrdinal);\n- } else {\n- collectBucket(doc, bucketOrdinal);\n+ if (previous != val || i == 0) {\n+ long bucketOrdinal = bucketOrds.add(val);\n+ if (bucketOrdinal < 0) { // already seen\n+ bucketOrdinal = - 1 - bucketOrdinal;\n+ collectExistingBucket(doc, bucketOrdinal);\n+ } else {\n+ collectBucket(doc, bucketOrdinal);\n+ }\n+ previous = val;\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridAggregator.java", "status": "modified" }, { "diff": "@@ -106,9 +106,7 @@ public void setupSuiteScopeCluster() throws Exception {\n for (int i = 0; i < numDocs; i++) {\n final int numPoints = random.nextInt(4);\n List<String> points = new ArrayList<>();\n- // TODO (#8512): this should be a Set, not a List. Currently if a document has two positions that have\n- // the same geo hash, it will increase the doc_count for this geo hash by 2 instead of 1\n- List<String> geoHashes = new ArrayList<>();\n+ Set<String> geoHashes = new HashSet<>();\n for (int j = 0; j < numPoints; ++j) {\n double lat = (180d * random.nextDouble()) - 90d;\n double lng = (360d * random.nextDouble()) - 180d;", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/GeoHashGridTests.java", "status": "modified" } ] }
{ "body": "I found this by dropping bloom filter code into lucene test suite. There are consistent failures:\n\njava.lang.AssertionError: Actual RAM usage 136880, but got 915568, -568.8836937463471% error\n\nI think we should just stop writing these?\n", "comments": [], "number": 8564, "title": "bloom filter overestimates ram usage by order of magnitude" }
{ "body": "See #8571 and #8564 \n\nI make the \"es090\" postings format read-only, just to support old segments. There is a test version that subclasses it with write-capability for testing. \n", "number": 8572, "review_comments": [], "title": "Disable bloom filters" }
{ "commits": [ { "message": "disable bloom filters" }, { "message": "remove StoreDirectory.codecService" } ], "files": [ { "diff": "@@ -57,10 +57,6 @@ settings API:\n `index.index_concurrency`::\n Defaults to `8`.\n \n-`index.codec.bloom.load`::\n- Whether to load the bloom filter. Defaults to `false`.\n- See <<codec-bloom-load>>.\n-\n `index.fail_on_merge_failure`::\n Default to `true`.\n \n@@ -227,35 +223,3 @@ curl -XPUT 'localhost:9200/myindex/_settings' -d '{\n \n curl -XPOST 'localhost:9200/myindex/_open'\n --------------------------------------------------\n-\n-[float]\n-[[codec-bloom-load]]\n-=== Bloom filters\n-\n-Up to version 1.3, Elasticsearch used to generate bloom filters for the `_uid`\n-field at indexing time and to load them at search time in order to speed-up\n-primary-key lookups by savings disk seeks.\n-\n-As of 1.4, bloom filters are still generated at indexing time, but they are\n-no longer loaded at search time by default: they consume RAM in proportion to\n-the number of unique terms, which can quickly add up for certain use cases,\n-and separate performance improvements have made the performance gains with\n-bloom filters very small.\n-\n-[TIP]\n-==================================================\n-\n-You can enable loading of the bloom filter at search time on a\n-per-index basis by updating the index settings:\n-\n-[source,js]\n---------------------------------------------------\n-PUT /old_index/_settings?index.codec.bloom.load=true\n---------------------------------------------------\n-\n-This setting, which defaults to `false`, can be updated on a live index. Note,\n-however, that changing the value will cause the index to be reopened, which\n-will invalidate any existing caches.\n-\n-==================================================\n-", "filename": "docs/reference/indices/update-settings.asciidoc", "status": "modified" }, { "diff": "@@ -44,16 +44,11 @@\n */\n public class CodecService extends AbstractIndexComponent {\n \n- public static final String INDEX_CODEC_BLOOM_LOAD = \"index.codec.bloom.load\";\n- public static final boolean INDEX_CODEC_BLOOM_LOAD_DEFAULT = false;\n-\n private final PostingsFormatService postingsFormatService;\n private final DocValuesFormatService docValuesFormatService;\n private final MapperService mapperService;\n private final ImmutableMap<String, Codec> codecs;\n \n- private volatile boolean loadBloomFilter = true;\n-\n public final static String DEFAULT_CODEC = \"default\";\n \n public CodecService(Index index) {\n@@ -83,7 +78,6 @@ public CodecService(Index index, @IndexSettings Settings indexSettings, Postings\n codecs.put(codec, Codec.forName(codec));\n }\n this.codecs = codecs.immutableMap();\n- this.loadBloomFilter = indexSettings.getAsBoolean(INDEX_CODEC_BLOOM_LOAD, INDEX_CODEC_BLOOM_LOAD_DEFAULT);\n }\n \n public PostingsFormatService postingsFormatService() {\n@@ -105,12 +99,4 @@ public Codec codec(String name) throws ElasticsearchIllegalArgumentException {\n }\n return codec;\n }\n-\n- public boolean isLoadBloomFilter() {\n- return this.loadBloomFilter;\n- }\n-\n- public void setLoadBloomFilter(boolean loadBloomFilter) {\n- this.loadBloomFilter = loadBloomFilter;\n- }\n }", "filename": "src/main/java/org/elasticsearch/index/codec/CodecService.java", "status": "modified" }, { "diff": "@@ -24,8 +24,6 @@\n import org.apache.lucene.store.*;\n import org.apache.lucene.util.*;\n import org.elasticsearch.common.util.BloomFilter;\n-import org.elasticsearch.index.store.DirectoryUtils;\n-import 
org.elasticsearch.index.store.Store;\n \n import java.io.IOException;\n import java.util.*;\n@@ -42,7 +40,9 @@\n * This is a special bloom filter version, based on {@link org.elasticsearch.common.util.BloomFilter} and inspired\n * by Lucene {@link org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat}.\n * </p>\n+ * @deprecated only for reading old segments\n */\n+@Deprecated\n public final class BloomFilterPostingsFormat extends PostingsFormat {\n \n public static final String BLOOM_CODEC_NAME = \"XBloomFilter\"; // the Lucene one is named BloomFilter\n@@ -160,30 +160,7 @@ public BloomFilteredFieldsProducer(SegmentReadState state)\n // // Load the hash function used in the BloomFilter\n // hashFunction = HashFunction.forName(bloomIn.readString());\n // Load the delegate postings format\n- final String delegatePostings = bloomIn\n- .readString();\n- int numBlooms = bloomIn.readInt();\n-\n- boolean load = false;\n- Store.StoreDirectory storeDir = DirectoryUtils.getStoreDirectory(state.directory);\n- if (storeDir != null && storeDir.codecService() != null) {\n- load = storeDir.codecService().isLoadBloomFilter();\n- }\n-\n- if (load) {\n- for (int i = 0; i < numBlooms; i++) {\n- int fieldNum = bloomIn.readInt();\n- FieldInfo fieldInfo = state.fieldInfos.fieldInfo(fieldNum);\n- LazyBloomLoader loader = new LazyBloomLoader(bloomIn.getFilePointer(), dataInput);\n- bloomsByFieldName.put(fieldInfo.name, loader);\n- BloomFilter.skipBloom(bloomIn);\n- }\n- if (version >= BLOOM_CODEC_VERSION_CHECKSUM) {\n- CodecUtil.checkFooter(bloomIn);\n- } else {\n- CodecUtil.checkEOF(bloomIn);\n- }\n- }\n+ final String delegatePostings = bloomIn.readString();\n this.delegateFieldsProducer = PostingsFormat.forName(delegatePostings)\n .fieldsProducer(state);\n this.data = dataInput;\n@@ -383,8 +360,9 @@ public DocsEnum docs(Bits liveDocs, DocsEnum reuse, int flags)\n \n }\n \n-\n- final class BloomFilteredFieldsConsumer extends FieldsConsumer {\n+ // TODO: would be great to move this out to test code, but the interaction between es090 and bloom is complex\n+ // at least it is not accessible via SPI\n+ public final class BloomFilteredFieldsConsumer extends FieldsConsumer {\n private FieldsConsumer delegateFieldsConsumer;\n private Map<FieldInfo, BloomFilter> bloomFilters = new HashMap<>();\n private SegmentWriteState state;\n@@ -399,7 +377,7 @@ public BloomFilteredFieldsConsumer(FieldsConsumer fieldsConsumer,\n }\n \n // for internal use only\n- FieldsConsumer getDelegate() {\n+ public FieldsConsumer getDelegate() {\n return delegateFieldsConsumer;\n }\n ", "filename": "src/main/java/org/elasticsearch/index/codec/postingsformat/BloomFilterPostingsFormat.java", "status": "modified" }, { "diff": "@@ -30,7 +30,9 @@\n import java.util.Map;\n \n /**\n+ * @deprecated only for reading old segments\n */\n+@Deprecated\n public class BloomFilterPostingsFormatProvider extends AbstractPostingsFormatProvider {\n \n private final PostingsFormatProvider delegate;", "filename": "src/main/java/org/elasticsearch/index/codec/postingsformat/BloomFilterPostingsFormatProvider.java", "status": "modified" }, { "diff": "@@ -38,14 +38,17 @@\n import java.util.Iterator;\n \n /**\n- * This is the default postings format for Elasticsearch that special cases\n+ * This is the old default postings format for Elasticsearch that special cases\n * the <tt>_uid</tt> field to use a bloom filter while all other fields\n * will use a {@link Lucene50PostingsFormat}. 
This format will reuse the underlying\n * {@link Lucene50PostingsFormat} and its files also for the <tt>_uid</tt> saving up to\n * 5 files per segment in the default case.\n+ * <p>\n+ * @deprecated only for reading old segments\n */\n-public final class Elasticsearch090PostingsFormat extends PostingsFormat {\n- private final BloomFilterPostingsFormat bloomPostings;\n+@Deprecated\n+public class Elasticsearch090PostingsFormat extends PostingsFormat {\n+ protected final BloomFilterPostingsFormat bloomPostings;\n \n public Elasticsearch090PostingsFormat() {\n super(\"es090\");\n@@ -57,7 +60,7 @@ public Elasticsearch090PostingsFormat() {\n public PostingsFormat getDefaultWrapped() {\n return bloomPostings.getDelegate();\n }\n- private static final Predicate<String> UID_FIELD_FILTER = new Predicate<String>() {\n+ protected static final Predicate<String> UID_FIELD_FILTER = new Predicate<String>() {\n \n @Override\n public boolean apply(String s) {\n@@ -67,34 +70,7 @@ public boolean apply(String s) {\n \n @Override\n public FieldsConsumer fieldsConsumer(SegmentWriteState state) throws IOException {\n- final BloomFilteredFieldsConsumer fieldsConsumer = bloomPostings.fieldsConsumer(state);\n- return new FieldsConsumer() {\n-\n- @Override\n- public void write(Fields fields) throws IOException {\n-\n- Fields maskedFields = new FilterLeafReader.FilterFields(fields) {\n- @Override\n- public Iterator<String> iterator() {\n- return Iterators.filter(this.in.iterator(), Predicates.not(UID_FIELD_FILTER));\n- }\n- };\n- fieldsConsumer.getDelegate().write(maskedFields);\n- maskedFields = new FilterLeafReader.FilterFields(fields) {\n- @Override\n- public Iterator<String> iterator() {\n- return Iterators.singletonIterator(UidFieldMapper.NAME);\n- }\n- };\n- // only go through bloom for the UID field\n- fieldsConsumer.write(maskedFields);\n- }\n-\n- @Override\n- public void close() throws IOException {\n- fieldsConsumer.close();\n- }\n- };\n+ throw new UnsupportedOperationException(\"this codec can only be used for reading\");\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/codec/postingsformat/Elasticsearch090PostingsFormat.java", "status": "modified" }, { "diff": "@@ -30,10 +30,7 @@\n * This class represents the set of Elasticsearch \"built-in\"\n * {@link PostingsFormatProvider.Factory postings format factories}\n * <ul>\n- * <li><b>bloom_default</b>: a postings format that uses a bloom filter to\n- * improve term lookup performance. This is useful for primarily keys or fields\n- * that are used as a delete key</li>\n- * <li><b>default</b>: the default Elasticsearch postings format offering best\n+ * <li><b>default</b>: the default Lucene postings format offering best\n * general purpose performance. This format is used if no postings format is\n * specified in the field mapping.</li>\n * <li><b>***</b>: other formats from Lucene core (e.g. 
Lucene41 as of Lucene 4.10)\n@@ -51,12 +48,10 @@ public class PostingFormats {\n for (String luceneName : PostingsFormat.availablePostingsFormats()) {\n builtInPostingFormatsX.put(luceneName, new PreBuiltPostingsFormatProvider.Factory(PostingsFormat.forName(luceneName)));\n }\n- final PostingsFormat defaultFormat = new Elasticsearch090PostingsFormat();\n+ final PostingsFormat defaultFormat = PostingsFormat.forName(Lucene.LATEST_POSTINGS_FORMAT);\n builtInPostingFormatsX.put(PostingsFormatService.DEFAULT_FORMAT,\n new PreBuiltPostingsFormatProvider.Factory(PostingsFormatService.DEFAULT_FORMAT, defaultFormat));\n \n- builtInPostingFormatsX.put(\"bloom_default\", new PreBuiltPostingsFormatProvider.Factory(\"bloom_default\", wrapInBloom(PostingsFormat.forName(Lucene.LATEST_POSTINGS_FORMAT))));\n-\n builtInPostingFormats = builtInPostingFormatsX.immutableMap();\n }\n ", "filename": "src/main/java/org/elasticsearch/index/codec/postingsformat/PostingFormats.java", "status": "modified" }, { "diff": "@@ -1522,12 +1522,10 @@ public void onRefreshSettings(Settings settings) {\n int indexConcurrency = settings.getAsInt(INDEX_INDEX_CONCURRENCY, InternalEngine.this.indexConcurrency);\n boolean failOnMergeFailure = settings.getAsBoolean(INDEX_FAIL_ON_MERGE_FAILURE, InternalEngine.this.failOnMergeFailure);\n String codecName = settings.get(INDEX_CODEC, InternalEngine.this.codecName);\n- final boolean codecBloomLoad = settings.getAsBoolean(CodecService.INDEX_CODEC_BLOOM_LOAD, codecService.isLoadBloomFilter());\n boolean requiresFlushing = false;\n if (indexConcurrency != InternalEngine.this.indexConcurrency ||\n !codecName.equals(InternalEngine.this.codecName) ||\n- failOnMergeFailure != InternalEngine.this.failOnMergeFailure ||\n- codecBloomLoad != codecService.isLoadBloomFilter()) {\n+ failOnMergeFailure != InternalEngine.this.failOnMergeFailure) {\n try (InternalLock _ = readLock.acquire()) {\n if (indexConcurrency != InternalEngine.this.indexConcurrency) {\n logger.info(\"updating index.index_concurrency from [{}] to [{}]\", InternalEngine.this.indexConcurrency, indexConcurrency);\n@@ -1545,12 +1543,6 @@ public void onRefreshSettings(Settings settings) {\n logger.info(\"updating {} from [{}] to [{}]\", InternalEngine.INDEX_FAIL_ON_MERGE_FAILURE, InternalEngine.this.failOnMergeFailure, failOnMergeFailure);\n InternalEngine.this.failOnMergeFailure = failOnMergeFailure;\n }\n- if (codecBloomLoad != codecService.isLoadBloomFilter()) {\n- logger.info(\"updating {} from [{}] to [{}]\", CodecService.INDEX_CODEC_BLOOM_LOAD, codecService.isLoadBloomFilter(), codecBloomLoad);\n- codecService.setLoadBloomFilter(codecBloomLoad);\n- // we need to flush in this case, to load/unload the bloom filters\n- requiresFlushing = true;\n- }\n }\n if (requiresFlushing) {\n flush(new Flush().type(Flush.Type.NEW_WRITER));", "filename": "src/main/java/org/elasticsearch/index/engine/internal/InternalEngine.java", "status": "modified" }, { "diff": "@@ -85,7 +85,6 @@ public IndexDynamicSettingsModule() {\n indexDynamicSettings.addDynamicSetting(LogDocMergePolicyProvider.INDEX_COMPOUND_FORMAT);\n indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_INDEX_CONCURRENCY, Validator.NON_NEGATIVE_INTEGER);\n indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_COMPOUND_ON_FLUSH, Validator.BOOLEAN);\n- indexDynamicSettings.addDynamicSetting(CodecService.INDEX_CODEC_BLOOM_LOAD, Validator.BOOLEAN);\n indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_GC_DELETES, Validator.TIME);\n 
indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_CODEC);\n indexDynamicSettings.addDynamicSetting(InternalEngine.INDEX_FAIL_ON_MERGE_FAILURE);", "filename": "src/main/java/org/elasticsearch/index/settings/IndexDynamicSettingsModule.java", "status": "modified" }, { "diff": "@@ -550,12 +550,6 @@ public ShardId shardId() {\n return Store.this.shardId();\n }\n \n- @Nullable\n- public CodecService codecService() {\n- ensureOpen();\n- return Store.this.codecService;\n- }\n-\n @Override\n public void close() throws IOException {\n assert false : \"Nobody should close this directory except of the Store itself\";", "filename": "src/main/java/org/elasticsearch/index/store/Store.java", "status": "modified" }, { "diff": "@@ -1,3 +1,2 @@\n-org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat\n org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat\n org.elasticsearch.search.suggest.completion.Completion090PostingsFormat", "filename": "src/main/resources/META-INF/services/org.apache.lucene.codecs.PostingsFormat", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.index.codec;\n \n import org.apache.lucene.codecs.Codec;\n+import org.apache.lucene.codecs.PostingsFormat;\n import org.apache.lucene.codecs.bloom.BloomFilteringPostingsFormat;\n import org.apache.lucene.codecs.lucene40.Lucene40Codec;\n import org.apache.lucene.codecs.lucene41.Lucene41Codec;\n@@ -33,6 +34,7 @@\n import org.apache.lucene.codecs.lucene50.Lucene50Codec;\n import org.apache.lucene.codecs.lucene50.Lucene50DocValuesFormat;\n import org.apache.lucene.codecs.perfield.PerFieldPostingsFormat;\n+import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.common.xcontent.XContentFactory;\n@@ -80,25 +82,16 @@ public void testResolveDefaultCodecs() throws Exception {\n public void testResolveDefaultPostingFormats() throws Exception {\n PostingsFormatService postingsFormatService = createCodecService().postingsFormatService();\n assertThat(postingsFormatService.get(\"default\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(postingsFormatService.get(\"default\").get(), instanceOf(Elasticsearch090PostingsFormat.class));\n+ PostingsFormat luceneDefault = PostingsFormat.forName(Lucene.LATEST_POSTINGS_FORMAT);\n+ assertThat(postingsFormatService.get(\"default\").get(), instanceOf(luceneDefault.getClass()));\n \n // Should fail when upgrading Lucene with codec changes\n- assertThat(((Elasticsearch090PostingsFormat)postingsFormatService.get(\"default\").get()).getDefaultWrapped(), instanceOf(((PerFieldPostingsFormat) Codec.getDefault().postingsFormat()).getPostingsFormatForField(\"\").getClass()));\n assertThat(postingsFormatService.get(\"Lucene41\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n // Should fail when upgrading Lucene with codec changes\n assertThat(postingsFormatService.get(\"Lucene50\").get(), instanceOf(((PerFieldPostingsFormat) Codec.getDefault().postingsFormat()).getPostingsFormatForField(null).getClass()));\n \n- assertThat(postingsFormatService.get(\"bloom_default\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- if (PostingFormats.luceneBloomFilter) {\n- assertThat(postingsFormatService.get(\"bloom_default\").get(), instanceOf(BloomFilteringPostingsFormat.class));\n- } else {\n- assertThat(postingsFormatService.get(\"bloom_default\").get(), 
instanceOf(BloomFilterPostingsFormat.class));\n- }\n assertThat(postingsFormatService.get(\"BloomFilter\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n assertThat(postingsFormatService.get(\"BloomFilter\").get(), instanceOf(BloomFilteringPostingsFormat.class));\n-\n- assertThat(postingsFormatService.get(\"XBloomFilter\"), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(postingsFormatService.get(\"XBloomFilter\").get(), instanceOf(BloomFilterPostingsFormat.class));\n }\n \n @Test\n@@ -128,7 +121,8 @@ public void testResolvePostingFormatsFromMapping_default() throws Exception {\n CodecService codecService = createCodecService(indexSettings);\n DocumentMapper documentMapper = codecService.mapperService().documentMapperParser().parse(mapping);\n assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider(), instanceOf(PreBuiltPostingsFormatProvider.class));\n- assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider().get(), instanceOf(Elasticsearch090PostingsFormat.class));\n+ PostingsFormat luceneDefault = PostingsFormat.forName(Lucene.LATEST_POSTINGS_FORMAT);\n+ assertThat(documentMapper.mappers().name(\"field1\").mapper().postingsFormatProvider().get(), instanceOf(luceneDefault.getClass()));\n \n assertThat(documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider(), instanceOf(DefaultPostingsFormatProvider.class));\n DefaultPostingsFormatProvider provider = (DefaultPostingsFormatProvider) documentMapper.mappers().name(\"field2\").mapper().postingsFormatProvider();", "filename": "src/test/java/org/elasticsearch/index/codec/CodecTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,69 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.index.codec.postingformat;\n+\n+import com.google.common.base.Predicates;\n+import com.google.common.collect.Iterators;\n+\n+import org.apache.lucene.codecs.FieldsConsumer;\n+import org.apache.lucene.index.Fields;\n+import org.apache.lucene.index.FilterLeafReader;\n+import org.apache.lucene.index.SegmentWriteState;\n+import org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat.BloomFilteredFieldsConsumer;\n+import org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat;\n+import org.elasticsearch.index.mapper.internal.UidFieldMapper;\n+\n+import java.io.IOException;\n+import java.util.Iterator;\n+\n+/** read-write version with blooms for testing */\n+public class Elasticsearch090RWPostingsFormat extends Elasticsearch090PostingsFormat {\n+ @Override\n+ public FieldsConsumer fieldsConsumer(SegmentWriteState state) throws IOException {\n+ final BloomFilteredFieldsConsumer fieldsConsumer = bloomPostings.fieldsConsumer(state);\n+ return new FieldsConsumer() {\n+\n+ @Override\n+ public void write(Fields fields) throws IOException {\n+\n+ Fields maskedFields = new FilterLeafReader.FilterFields(fields) {\n+ @Override\n+ public Iterator<String> iterator() {\n+ return Iterators.filter(this.in.iterator(), Predicates.not(UID_FIELD_FILTER));\n+ }\n+ };\n+ fieldsConsumer.getDelegate().write(maskedFields);\n+ maskedFields = new FilterLeafReader.FilterFields(fields) {\n+ @Override\n+ public Iterator<String> iterator() {\n+ return Iterators.singletonIterator(UidFieldMapper.NAME);\n+ }\n+ };\n+ // only go through bloom for the UID field\n+ fieldsConsumer.write(maskedFields);\n+ }\n+\n+ @Override\n+ public void close() throws IOException {\n+ fieldsConsumer.close();\n+ }\n+ };\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/codec/postingformat/Elasticsearch090RWPostingsFormat.java", "status": "added" }, { "diff": "@@ -44,7 +44,7 @@ public class ElasticsearchPostingsFormatTest extends BasePostingsFormatTestCase\n \n @Override\n protected Codec getCodec() {\n- return TestUtil.alwaysPostingsFormat(new Elasticsearch090PostingsFormat());\n+ return TestUtil.alwaysPostingsFormat(new Elasticsearch090RWPostingsFormat());\n }\n \n }", "filename": "src/test/java/org/elasticsearch/index/codec/postingformat/ElasticsearchPostingsFormatTest.java", "status": "modified" }, { "diff": "@@ -19,105 +19,22 @@\n \n package org.elasticsearch.index.engine.internal;\n \n-import com.google.common.base.Predicate;\n-import org.apache.lucene.util.LuceneTestCase.Slow;\n import org.elasticsearch.action.admin.indices.segments.IndexSegments;\n import org.elasticsearch.action.admin.indices.segments.IndexShardSegments;\n import org.elasticsearch.action.admin.indices.segments.IndicesSegmentResponse;\n import org.elasticsearch.action.admin.indices.segments.ShardSegments;\n-import org.elasticsearch.action.admin.indices.stats.IndicesStatsResponse;\n-import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.common.settings.ImmutableSettings;\n-import org.elasticsearch.common.util.BloomFilter;\n-import org.elasticsearch.index.codec.CodecService;\n import org.elasticsearch.index.engine.Segment;\n-import org.elasticsearch.index.merge.policy.AbstractMergePolicyProvider;\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n \n import java.util.Collection;\n import 
java.util.HashSet;\n import java.util.Set;\n-import java.util.concurrent.ExecutionException;\n-\n-import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n \n public class InternalEngineIntegrationTest extends ElasticsearchIntegrationTest {\n \n- @Test\n- @Slow\n- public void testSettingLoadBloomFilterDefaultTrue() throws Exception {\n- client().admin().indices().prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(\"number_of_replicas\", 0).put(\"number_of_shards\", 1)).get();\n- client().prepareIndex(\"test\", \"foo\").setSource(\"field\", \"foo\").get();\n- ensureGreen();\n- refresh();\n- IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).get();\n- final long segmentsMemoryWithBloom = stats.getTotal().getSegments().getMemoryInBytes();\n- logger.info(\"segments with bloom: {}\", segmentsMemoryWithBloom);\n-\n- logger.info(\"updating the setting to unload bloom filters\");\n- client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(CodecService.INDEX_CODEC_BLOOM_LOAD, false)).get();\n- logger.info(\"waiting for memory to match without blooms\");\n- awaitBusy(new Predicate<Object>() {\n- public boolean apply(Object o) {\n- IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).get();\n- long segmentsMemoryWithoutBloom = stats.getTotal().getSegments().getMemoryInBytes();\n- logger.info(\"trying segments without bloom: {}\", segmentsMemoryWithoutBloom);\n- return segmentsMemoryWithoutBloom == (segmentsMemoryWithBloom - BloomFilter.Factory.DEFAULT.createFilter(1).getSizeInBytes());\n- }\n- });\n-\n- logger.info(\"updating the setting to load bloom filters\");\n- client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(CodecService.INDEX_CODEC_BLOOM_LOAD, true)).get();\n- logger.info(\"waiting for memory to match with blooms\");\n- awaitBusy(new Predicate<Object>() {\n- public boolean apply(Object o) {\n- IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).get();\n- long newSegmentsMemoryWithBloom = stats.getTotal().getSegments().getMemoryInBytes();\n- logger.info(\"trying segments with bloom: {}\", newSegmentsMemoryWithBloom);\n- return newSegmentsMemoryWithBloom == segmentsMemoryWithBloom;\n- }\n- });\n- }\n-\n- @Test\n- @Slow\n- public void testSettingLoadBloomFilterDefaultFalse() throws Exception {\n- client().admin().indices().prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(\"number_of_replicas\", 0).put(\"number_of_shards\", 1).put(CodecService.INDEX_CODEC_BLOOM_LOAD, false)).get();\n- client().prepareIndex(\"test\", \"foo\").setSource(\"field\", \"foo\").get();\n- ensureGreen();\n- refresh();\n-\n- IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).get();\n- final long segmentsMemoryWithoutBloom = stats.getTotal().getSegments().getMemoryInBytes();\n- logger.info(\"segments without bloom: {}\", segmentsMemoryWithoutBloom);\n-\n- logger.info(\"updating the setting to load bloom filters\");\n- client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(CodecService.INDEX_CODEC_BLOOM_LOAD, true)).get();\n- logger.info(\"waiting for memory to match with blooms\");\n- awaitBusy(new Predicate<Object>() {\n- public boolean apply(Object o) {\n- IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).get();\n- long 
segmentsMemoryWithBloom = stats.getTotal().getSegments().getMemoryInBytes();\n- logger.info(\"trying segments with bloom: {}\", segmentsMemoryWithoutBloom);\n- return segmentsMemoryWithoutBloom == (segmentsMemoryWithBloom - BloomFilter.Factory.DEFAULT.createFilter(1).getSizeInBytes());\n- }\n- });\n-\n- logger.info(\"updating the setting to unload bloom filters\");\n- client().admin().indices().prepareUpdateSettings(\"test\").setSettings(ImmutableSettings.builder().put(CodecService.INDEX_CODEC_BLOOM_LOAD, false)).get();\n- logger.info(\"waiting for memory to match without blooms\");\n- awaitBusy(new Predicate<Object>() {\n- public boolean apply(Object o) {\n- IndicesStatsResponse stats = client().admin().indices().prepareStats().setSegments(true).get();\n- long newSegmentsMemoryWithoutBloom = stats.getTotal().getSegments().getMemoryInBytes();\n- logger.info(\"trying segments without bloom: {}\", newSegmentsMemoryWithoutBloom);\n- return newSegmentsMemoryWithoutBloom == segmentsMemoryWithoutBloom;\n- }\n- });\n- }\n-\n @Test\n public void testSetIndexCompoundOnFlush() {\n client().admin().indices().prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(\"number_of_replicas\", 0).put(\"number_of_shards\", 1)).get();", "filename": "src/test/java/org/elasticsearch/index/engine/internal/InternalEngineIntegrationTest.java", "status": "modified" }, { "diff": "@@ -36,6 +36,7 @@\n import org.apache.lucene.util.Bits;\n import org.apache.lucene.util.BytesRef;\n import org.apache.lucene.util.LineFileDocs;\n+import org.elasticsearch.common.lucene.Lucene;\n import org.elasticsearch.index.analysis.NamedAnalyzer;\n import org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat;\n import org.elasticsearch.index.codec.postingsformat.PostingsFormatProvider;\n@@ -70,7 +71,7 @@ public void testCompletionPostingsFormat() throws IOException {\n \n IndexInput input = dir.openInput(\"foo.txt\", IOContext.DEFAULT);\n LookupFactory load = currentProvider.load(input);\n- PostingsFormatProvider format = new PreBuiltPostingsFormatProvider(new Elasticsearch090PostingsFormat());\n+ PostingsFormatProvider format = new PreBuiltPostingsFormatProvider(PostingsFormat.forName(Lucene.LATEST_POSTINGS_FORMAT));\n NamedAnalyzer analyzer = new NamedAnalyzer(\"foo\", new StandardAnalyzer());\n Lookup lookup = load.getLookup(new CompletionFieldMapper(new Names(\"foo\"), analyzer, analyzer, format, null, true, true, true, Integer.MAX_VALUE, AbstractFieldMapper.MultiFields.empty(), null, ContextMapping.EMPTY_MAPPING), new CompletionSuggestionContext(null));\n List<LookupResult> result = lookup.lookup(\"ge\", false, 10);\n@@ -214,7 +215,7 @@ public boolean hasContexts() {\n iter = primaryIter;\n }\n reference.build(iter);\n- PostingsFormatProvider provider = new PreBuiltPostingsFormatProvider(new Elasticsearch090PostingsFormat());\n+ PostingsFormatProvider provider = new PreBuiltPostingsFormatProvider(PostingsFormat.forName(Lucene.LATEST_POSTINGS_FORMAT));\n \n NamedAnalyzer namedAnalzyer = new NamedAnalyzer(\"foo\", new StandardAnalyzer());\n final CompletionFieldMapper mapper = new CompletionFieldMapper(new Names(\"foo\"), namedAnalzyer, namedAnalzyer, provider, null, usePayloads,", "filename": "src/test/java/org/elasticsearch/search/suggest/completion/CompletionPostingsFormatTest.java", "status": "modified" }, { "diff": "@@ -466,9 +466,6 @@ private static ImmutableSettings.Builder setRandomSettings(Random random, Immuta\n builder.put(FsTranslog.INDEX_TRANSLOG_FS_TYPE, RandomPicks.randomFrom(random, 
FsTranslogFile.Type.values()).name());\n }\n \n- // Randomly load or don't load bloom filters:\n- builder.put(CodecService.INDEX_CODEC_BLOOM_LOAD, random.nextBoolean());\n-\n if (random.nextBoolean()) {\n builder.put(IndicesQueryCache.INDEX_CACHE_QUERY_ENABLED, random.nextBoolean());\n }", "filename": "src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java", "status": "modified" } ] }
{ "body": "As discussed in https://github.com/elasticsearch/elasticsearch/issues/7203 the current behaviour of `lt`/`lte` with date rounding is inconsistent. For instance, with the following docs indexed:\n\n```\nDELETE /t\n\nPOST /t/t/_bulk\n{\"index\":{\"_id\":1}}\n{\"date\":\"2014/11/07 00:00:00\"}\n{\"index\":{\"_id\":2}}\n{\"date\":\"2014/11/07 01:00:00\"}\n{\"index\":{\"_id\":3}}\n{\"date\":\"2014/11/08 00:00:00\"}\n{\"index\":{\"_id\":4}}\n{\"date\":\"2014/11/08 01:00:00\"}\n{\"index\":{\"_id\":5}}\n{\"date\":\"2014/11/09 00:00:00\"}\n{\"index\":{\"_id\":6}}\n{\"date\":\"2014/11/09 01:00:00\"}\n```\n\nThis query with `lt`:\n\n```\nGET /_search?sort=date\n{\n \"query\": {\n \"range\": {\n \"date\": {\n \"lt\": \"2014/11/08||/d\"\n }\n }\n }\n}\n```\n\ncorrectly returns `2014/11/07 00:00:00` and `2014/11/07 01:00:00`, but this query with `lte`:\n\n```\nGET /_search?sort=date\n{\n \"query\": {\n \"range\": {\n \"date\": {\n \"lte\": \"2014/11/08||/d\"\n }\n }\n }\n}\n```\n\nincorrectly returns:\n- `2014/11/07 00:00:00` \n- `2014/11/07 01:00:00`\n- `2014/11/08 00:00:00` \n- `2014/11/08 01:00:00`\n- `2014/11/09 00:00:00` \n\nIt should not include that last document. The `lte` parameter, when used with date rounding, should use `ceil()` to round the date up, as it does today, but `include_upper` should be set to `false`.\n", "comments": [], "number": 8424, "title": "Rounded date ranges with `lte` should set `include_upper` to `false`" }
{ "body": "Date math rounding currently works by rounding the date up or down based\non the scope of the rounding. For example, if you have the date\n`2009-12-24||/d` it will round down to the inclusive lower end\n`2009-12-24T00:00:00.000` and round up to the non-inclusive date\n`2009-12-25T00:00:00.000`.\n\nThe range endpoint semantics work as follows:\n- `gt` - round D down, and use > that value\n- `gte` - round D down, and use >= that value\n- `lt` - round D down, and use <\n- `lte` - round D up, and use <=\n\nThere are 2 problems with these semantics:\n- `lte` ends up including the upper value, which should be non-inclusive\n- `gt` only excludes the beginning of the date, not the entire rounding scope\n\nThis change makes the range endpoint semantics symmetrical. First, it\nchanges the parser to round up and down using the first (same as before)\nand last (1 ms less than before) values of the rounding scope. This\nmakes both rounded endpoints inclusive. The range endpoint semantics\nare then as follows:\n- `gt` - round D up, and use > that value\n- `gte` - round D down, and use >= that value\n- `lt` - round D down, and use < that value\n- `lte` - round D up, and use <= that value\n\ncloses #8424\n", "number": 8556, "review_comments": [ { "body": "This is a hard override (via index.mapping.date.round_ceil setting, which default to true) to disallow rounding up. Not sure if this actuall set to false, seems undesired to me.\n", "created_at": "2014-11-20T15:17:10Z" }, { "body": "I will open a separate issue to deal with this odd setting.\n", "created_at": "2014-11-20T18:15:31Z" }, { "body": "+1\n", "created_at": "2014-11-21T09:08:31Z" } ], "title": "DateMath: Fix semantics of rounding with inclusive/exclusive ranges." }
{ "commits": [ { "message": "DateMath: Fix semantics of rounding with inclusive/exclusive ranges.\n\nDate math rounding currently works by rounding the date up or down based\non the scope of the rounding. For example, if you have the date\n`2009-12-24||/d` it will round down to the inclusive lower end\n`2009-12-24T00:00:00.000` and round up to the non-inclusive date\n`2009-12-25T00:00:00.000`.\n\nThe range endpoint semantics work as follows:\ngt - round D down, and use > that value\ngte - round D down, and use >= that value\nlt - round D down, and use <\nlte - round D up, and use <=\n\nThere are 2 problems with these semantics:\n1. lte ends up including the upper value, which should be non-inclusive\n2. gt only excludes the beginning of the date, not the entire rounding scope\n\nThis change makes the range endpoint semantics symmetrical. First, it\nchanges the parser to round up and down using the first (same as before)\nand last (1 ms less than before) values of the rounding scope. This\nmakes both rounded endpoints inclusive. The range endpoint semantics\nare then as follows:\ngt - round D up, and use > that value\ngte - round D down, and use >= that value\nlt - round D down, and use < that value\nlte - round D up, and use <= that value\n\ncloses #8424" } ], "files": [ { "diff": "@@ -44,22 +44,6 @@ public long parse(String text, long now) {\n return parse(text, now, false, null);\n }\n \n- public long parse(String text, long now, DateTimeZone timeZone) {\n- return parse(text, now, false, timeZone);\n- }\n-\n- public long parseRoundCeil(String text, long now) {\n- return parse(text, now, true, null);\n- }\n-\n- public long parseRoundCeil(String text, long now, DateTimeZone timeZone) {\n- return parse(text, now, true, timeZone);\n- }\n-\n- public long parse(String text, long now, boolean roundCeil) {\n- return parse(text, now, roundCeil, null);\n- }\n-\n public long parse(String text, long now, boolean roundCeil, DateTimeZone timeZone) {\n long time;\n String mathString;\n@@ -92,139 +76,110 @@ public long parse(String text, long now, boolean roundCeil, DateTimeZone timeZon\n \n private long parseMath(String mathString, long time, boolean roundUp) throws ElasticsearchParseException {\n MutableDateTime dateTime = new MutableDateTime(time, DateTimeZone.UTC);\n- try {\n- for (int i = 0; i < mathString.length(); ) {\n- char c = mathString.charAt(i++);\n- int type;\n- if (c == '/') {\n- type = 0;\n- } else if (c == '+') {\n- type = 1;\n+ for (int i = 0; i < mathString.length(); ) {\n+ char c = mathString.charAt(i++);\n+ final boolean round;\n+ final int sign;\n+ if (c == '/') {\n+ round = true;\n+ sign = 1;\n+ } else {\n+ round = false;\n+ if (c == '+') {\n+ sign = 1;\n } else if (c == '-') {\n- type = 2;\n+ sign = -1;\n } else {\n throw new ElasticsearchParseException(\"operator not supported for date math [\" + mathString + \"]\");\n }\n+ }\n+ \n+ if (i >= mathString.length()) {\n+ throw new ElasticsearchParseException(\"truncated date math [\" + mathString + \"]\");\n+ }\n \n- int num;\n- if (!Character.isDigit(mathString.charAt(i))) {\n- num = 1;\n- } else {\n- int numFrom = i;\n- while (Character.isDigit(mathString.charAt(i))) {\n- i++;\n- }\n- num = Integer.parseInt(mathString.substring(numFrom, i));\n+ final int num;\n+ if (!Character.isDigit(mathString.charAt(i))) {\n+ num = 1;\n+ } else {\n+ int numFrom = i;\n+ while (i < mathString.length() && Character.isDigit(mathString.charAt(i))) {\n+ i++;\n }\n- if (type == 0) {\n- // rounding is only allowed on whole numbers\n- if (num != 1) {\n- 
throw new ElasticsearchParseException(\"rounding `/` can only be used on single unit types [\" + mathString + \"]\");\n- }\n+ if (i >= mathString.length()) {\n+ throw new ElasticsearchParseException(\"truncated date math [\" + mathString + \"]\");\n }\n- char unit = mathString.charAt(i++);\n- switch (unit) {\n- case 'y':\n- if (type == 0) {\n- if (roundUp) {\n- dateTime.yearOfCentury().roundCeiling();\n- } else {\n- dateTime.yearOfCentury().roundFloor();\n- }\n- } else if (type == 1) {\n- dateTime.addYears(num);\n- } else if (type == 2) {\n- dateTime.addYears(-num);\n- }\n- break;\n- case 'M':\n- if (type == 0) {\n- if (roundUp) {\n- dateTime.monthOfYear().roundCeiling();\n- } else {\n- dateTime.monthOfYear().roundFloor();\n- }\n- } else if (type == 1) {\n- dateTime.addMonths(num);\n- } else if (type == 2) {\n- dateTime.addMonths(-num);\n- }\n- break;\n- case 'w':\n- if (type == 0) {\n- if (roundUp) {\n- dateTime.weekOfWeekyear().roundCeiling();\n- } else {\n- dateTime.weekOfWeekyear().roundFloor();\n- }\n- } else if (type == 1) {\n- dateTime.addWeeks(num);\n- } else if (type == 2) {\n- dateTime.addWeeks(-num);\n- }\n- break;\n- case 'd':\n- if (type == 0) {\n- if (roundUp) {\n- dateTime.dayOfMonth().roundCeiling();\n- } else {\n- dateTime.dayOfMonth().roundFloor();\n- }\n- } else if (type == 1) {\n- dateTime.addDays(num);\n- } else if (type == 2) {\n- dateTime.addDays(-num);\n- }\n- break;\n- case 'h':\n- case 'H':\n- if (type == 0) {\n- if (roundUp) {\n- dateTime.hourOfDay().roundCeiling();\n- } else {\n- dateTime.hourOfDay().roundFloor();\n- }\n- } else if (type == 1) {\n- dateTime.addHours(num);\n- } else if (type == 2) {\n- dateTime.addHours(-num);\n- }\n- break;\n- case 'm':\n- if (type == 0) {\n- if (roundUp) {\n- dateTime.minuteOfHour().roundCeiling();\n- } else {\n- dateTime.minuteOfHour().roundFloor();\n- }\n- } else if (type == 1) {\n- dateTime.addMinutes(num);\n- } else if (type == 2) {\n- dateTime.addMinutes(-num);\n- }\n- break;\n- case 's':\n- if (type == 0) {\n- if (roundUp) {\n- dateTime.secondOfMinute().roundCeiling();\n- } else {\n- dateTime.secondOfMinute().roundFloor();\n- }\n- } else if (type == 1) {\n- dateTime.addSeconds(num);\n- } else if (type == 2) {\n- dateTime.addSeconds(-num);\n- }\n- break;\n- default:\n- throw new ElasticsearchParseException(\"unit [\" + unit + \"] not supported for date math [\" + mathString + \"]\");\n+ num = Integer.parseInt(mathString.substring(numFrom, i));\n+ }\n+ if (round) {\n+ if (num != 1) {\n+ throw new ElasticsearchParseException(\"rounding `/` can only be used on single unit types [\" + mathString + \"]\");\n }\n }\n- } catch (Exception e) {\n- if (e instanceof ElasticsearchParseException) {\n- throw (ElasticsearchParseException) e;\n+ char unit = mathString.charAt(i++);\n+ MutableDateTime.Property propertyToRound = null;\n+ switch (unit) {\n+ case 'y':\n+ if (round) {\n+ propertyToRound = dateTime.yearOfCentury();\n+ } else {\n+ dateTime.addYears(sign * num);\n+ }\n+ break;\n+ case 'M':\n+ if (round) {\n+ propertyToRound = dateTime.monthOfYear();\n+ } else {\n+ dateTime.addMonths(sign * num);\n+ }\n+ break;\n+ case 'w':\n+ if (round) {\n+ propertyToRound = dateTime.weekOfWeekyear();\n+ } else {\n+ dateTime.addWeeks(sign * num);\n+ }\n+ break;\n+ case 'd':\n+ if (round) {\n+ propertyToRound = dateTime.dayOfMonth();\n+ } else {\n+ dateTime.addDays(sign * num);\n+ }\n+ break;\n+ case 'h':\n+ case 'H':\n+ if (round) {\n+ propertyToRound = dateTime.hourOfDay();\n+ } else {\n+ dateTime.addHours(sign * num);\n+ }\n+ break;\n+ case 
'm':\n+ if (round) {\n+ propertyToRound = dateTime.minuteOfHour();\n+ } else {\n+ dateTime.addMinutes(sign * num);\n+ }\n+ break;\n+ case 's':\n+ if (round) {\n+ propertyToRound = dateTime.secondOfMinute();\n+ } else {\n+ dateTime.addSeconds(sign * num);\n+ }\n+ break;\n+ default:\n+ throw new ElasticsearchParseException(\"unit [\" + unit + \"] not supported for date math [\" + mathString + \"]\");\n+ }\n+ if (propertyToRound != null) {\n+ if (roundUp) {\n+ propertyToRound.roundCeiling();\n+ dateTime.addMillis(-1); // subtract 1 millisecond to get the largest inclusive value\n+ } else {\n+ propertyToRound.roundFloor();\n+ }\n }\n- throw new ElasticsearchParseException(\"failed to parse date math [\" + mathString + \"]\");\n }\n return dateTime.getMillis();\n }", "filename": "src/main/java/org/elasticsearch/common/joda/DateMathParser.java", "status": "modified" }, { "diff": "@@ -309,25 +309,25 @@ public long parseToMilliseconds(Object value, @Nullable QueryParseContext contex\n return parseToMilliseconds(value, context, false);\n }\n \n- public long parseToMilliseconds(Object value, @Nullable QueryParseContext context, boolean includeUpper) {\n- return parseToMilliseconds(value, context, includeUpper, null, dateMathParser);\n+ public long parseToMilliseconds(Object value, @Nullable QueryParseContext context, boolean inclusive) {\n+ return parseToMilliseconds(value, context, inclusive, null, dateMathParser);\n }\n \n- public long parseToMilliseconds(Object value, @Nullable QueryParseContext context, boolean includeUpper, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n+ public long parseToMilliseconds(Object value, @Nullable QueryParseContext context, boolean inclusive, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n if (value instanceof Number) {\n return ((Number) value).longValue();\n }\n- return parseToMilliseconds(convertToString(value), context, includeUpper, zone, forcedDateParser);\n+ return parseToMilliseconds(convertToString(value), context, inclusive, zone, forcedDateParser);\n }\n \n- public long parseToMilliseconds(String value, @Nullable QueryParseContext context, boolean includeUpper, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n+ public long parseToMilliseconds(String value, @Nullable QueryParseContext context, boolean inclusive, @Nullable DateTimeZone zone, @Nullable DateMathParser forcedDateParser) {\n long now = context == null ? System.currentTimeMillis() : context.nowInMillis();\n DateMathParser dateParser = dateMathParser;\n if (forcedDateParser != null) {\n dateParser = forcedDateParser;\n }\n- long time = includeUpper && roundCeil ? dateParser.parseRoundCeil(value, now, zone) : dateParser.parse(value, now, zone);\n- return time;\n+ boolean roundUp = inclusive && roundCeil; // TODO: what is roundCeil??\n+ return dateParser.parse(value, now, roundUp, zone);\n }\n \n @Override\n@@ -344,7 +344,7 @@ public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower\n \n public Query rangeQuery(Object lowerTerm, Object upperTerm, boolean includeLower, boolean includeUpper, @Nullable DateTimeZone timeZone, @Nullable DateMathParser forcedDateParser, @Nullable QueryParseContext context) {\n return NumericRangeQuery.newLongRange(names.indexName(), precisionStep,\n- lowerTerm == null ? null : parseToMilliseconds(lowerTerm, context, false, timeZone, forcedDateParser == null ? dateMathParser : forcedDateParser),\n+ lowerTerm == null ? 
null : parseToMilliseconds(lowerTerm, context, !includeLower, timeZone, forcedDateParser == null ? dateMathParser : forcedDateParser),\n upperTerm == null ? null : parseToMilliseconds(upperTerm, context, includeUpper, timeZone, forcedDateParser == null ? dateMathParser : forcedDateParser),\n includeLower, includeUpper);\n }", "filename": "src/main/java/org/elasticsearch/index/mapper/core/DateFieldMapper.java", "status": "modified" }, { "diff": "@@ -19,48 +19,141 @@\n \n package org.elasticsearch.common.joda;\n \n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.test.ElasticsearchTestCase;\n-import org.junit.Test;\n+import org.joda.time.DateTimeZone;\n \n import java.util.concurrent.TimeUnit;\n \n-import static org.hamcrest.MatcherAssert.assertThat;\n-import static org.hamcrest.Matchers.equalTo;\n-\n-/**\n- */\n public class DateMathParserTests extends ElasticsearchTestCase {\n+ FormatDateTimeFormatter formatter = Joda.forPattern(\"dateOptionalTime\");\n+ DateMathParser parser = new DateMathParser(formatter, TimeUnit.MILLISECONDS);\n \n- @Test\n- public void dataMathTests() {\n+ void assertDateMathEquals(String toTest, String expected) {\n+ assertDateMathEquals(toTest, expected, 0, false, null);\n+ }\n+ \n+ void assertDateMathEquals(String toTest, String expected, long now, boolean roundUp, DateTimeZone timeZone) {\n DateMathParser parser = new DateMathParser(Joda.forPattern(\"dateOptionalTime\"), TimeUnit.MILLISECONDS);\n+ long gotMillis = parser.parse(toTest, now, roundUp, null);\n+ long expectedMillis = parser.parse(expected, 0);\n+ if (gotMillis != expectedMillis) {\n+ fail(\"Date math not equal\\n\" +\n+ \"Original : \" + toTest + \"\\n\" +\n+ \"Parsed : \" + formatter.printer().print(gotMillis) + \"\\n\" +\n+ \"Expected : \" + expected + \"\\n\" +\n+ \"Expected milliseconds : \" + expectedMillis + \"\\n\" +\n+ \"Actual milliseconds : \" + gotMillis + \"\\n\");\n+ }\n+ }\n+ \n+ public void testBasicDates() {\n+ assertDateMathEquals(\"2014\", \"2014-01-01T00:00:00.000\");\n+ assertDateMathEquals(\"2014-05\", \"2014-05-01T00:00:00.000\");\n+ assertDateMathEquals(\"2014-05-30\", \"2014-05-30T00:00:00.000\");\n+ assertDateMathEquals(\"2014-05-30T20\", \"2014-05-30T20:00:00.000\");\n+ assertDateMathEquals(\"2014-05-30T20:21\", \"2014-05-30T20:21:00.000\");\n+ assertDateMathEquals(\"2014-05-30T20:21:35\", \"2014-05-30T20:21:35.000\");\n+ assertDateMathEquals(\"2014-05-30T20:21:35.123\", \"2014-05-30T20:21:35.123\");\n+ }\n+ \n+ public void testBasicMath() {\n+ assertDateMathEquals(\"2014-11-18||+y\", \"2015-11-18\");\n+ assertDateMathEquals(\"2014-11-18||-2y\", \"2012-11-18\");\n \n- assertThat(parser.parse(\"now\", 0), equalTo(0l));\n- assertThat(parser.parse(\"now+m\", 0), equalTo(TimeUnit.MINUTES.toMillis(1)));\n- assertThat(parser.parse(\"now+1m\", 0), equalTo(TimeUnit.MINUTES.toMillis(1)));\n- assertThat(parser.parse(\"now+11m\", 0), equalTo(TimeUnit.MINUTES.toMillis(11)));\n+ assertDateMathEquals(\"2014-11-18||+3M\", \"2015-02-18\");\n+ assertDateMathEquals(\"2014-11-18||-M\", \"2014-10-18\");\n \n- assertThat(parser.parse(\"now+1d\", 0), equalTo(TimeUnit.DAYS.toMillis(1)));\n+ assertDateMathEquals(\"2014-11-18||+1w\", \"2014-11-25\");\n+ assertDateMathEquals(\"2014-11-18||-3w\", \"2014-10-28\");\n \n- assertThat(parser.parse(\"now+1m+1s\", 0), equalTo(TimeUnit.MINUTES.toMillis(1) + TimeUnit.SECONDS.toMillis(1)));\n- assertThat(parser.parse(\"now+1m-1s\", 0), equalTo(TimeUnit.MINUTES.toMillis(1) - TimeUnit.SECONDS.toMillis(1)));\n+ 
assertDateMathEquals(\"2014-11-18||+22d\", \"2014-12-10\");\n+ assertDateMathEquals(\"2014-11-18||-423d\", \"2013-09-21\");\n \n- assertThat(parser.parse(\"now+1m+1s/m\", 0), equalTo(TimeUnit.MINUTES.toMillis(1)));\n- assertThat(parser.parseRoundCeil(\"now+1m+1s/m\", 0), equalTo(TimeUnit.MINUTES.toMillis(2)));\n- \n- assertThat(parser.parse(\"now+4y\", 0), equalTo(TimeUnit.DAYS.toMillis(4*365 + 1)));\n+ assertDateMathEquals(\"2014-11-18T14||+13h\", \"2014-11-19T03\");\n+ assertDateMathEquals(\"2014-11-18T14||-1h\", \"2014-11-18T13\");\n+ assertDateMathEquals(\"2014-11-18T14||+13H\", \"2014-11-19T03\");\n+ assertDateMathEquals(\"2014-11-18T14||-1H\", \"2014-11-18T13\");\n+\n+ assertDateMathEquals(\"2014-11-18T14:27||+10240m\", \"2014-11-25T17:07\");\n+ assertDateMathEquals(\"2014-11-18T14:27||-10m\", \"2014-11-18T14:17\");\n+\n+ assertDateMathEquals(\"2014-11-18T14:27:32||+60s\", \"2014-11-18T14:28:32\");\n+ assertDateMathEquals(\"2014-11-18T14:27:32||-3600s\", \"2014-11-18T13:27:32\");\n }\n \n- @Test\n- public void actualDateTests() {\n- DateMathParser parser = new DateMathParser(Joda.forPattern(\"dateOptionalTime\"), TimeUnit.MILLISECONDS);\n+ public void testMultipleAdjustments() {\n+ assertDateMathEquals(\"2014-11-18||+1M-1M\", \"2014-11-18\");\n+ assertDateMathEquals(\"2014-11-18||+1M-1m\", \"2014-12-17T23:59\");\n+ assertDateMathEquals(\"2014-11-18||-1m+1M\", \"2014-12-17T23:59\");\n+ assertDateMathEquals(\"2014-11-18||+1M/M\", \"2014-12-01\");\n+ assertDateMathEquals(\"2014-11-18||+1M/M+1h\", \"2014-12-01T01\");\n+ }\n \n- assertThat(parser.parse(\"1970-01-01\", 0), equalTo(0l));\n- assertThat(parser.parse(\"1970-01-01||+1m\", 0), equalTo(TimeUnit.MINUTES.toMillis(1)));\n- assertThat(parser.parse(\"1970-01-01||+1m+1s\", 0), equalTo(TimeUnit.MINUTES.toMillis(1) + TimeUnit.SECONDS.toMillis(1)));\n+\n+ public void testNow() {\n+ long now = parser.parse(\"2014-11-18T14:27:32\", 0, false, null);\n+ assertDateMathEquals(\"now\", \"2014-11-18T14:27:32\", now, false, null);\n+ assertDateMathEquals(\"now+M\", \"2014-12-18T14:27:32\", now, false, null);\n+ assertDateMathEquals(\"now-2d\", \"2014-11-16T14:27:32\", now, false, null);\n+ assertDateMathEquals(\"now/m\", \"2014-11-18T14:27\", now, false, null);\n+ }\n+\n+ public void testRounding() {\n+ assertDateMathEquals(\"2014-11-18||/y\", \"2014-01-01\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18||/y\", \"2014-12-31T23:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014||/y\", \"2014-01-01\", 0, false, null);\n+ assertDateMathEquals(\"2014||/y\", \"2014-12-31T23:59:59.999\", 0, true, null);\n+ \n+ assertDateMathEquals(\"2014-11-18||/M\", \"2014-11-01\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18||/M\", \"2014-11-30T23:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11||/M\", \"2014-11-01\", 0, false, null);\n+ assertDateMathEquals(\"2014-11||/M\", \"2014-11-30T23:59:59.999\", 0, true, null);\n+ \n+ assertDateMathEquals(\"2014-11-18T14||/w\", \"2014-11-17\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14||/w\", \"2014-11-23T23:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18||/w\", \"2014-11-17\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18||/w\", \"2014-11-23T23:59:59.999\", 0, true, null);\n \n- assertThat(parser.parse(\"2013-01-01||+1y\", 0), equalTo(parser.parse(\"2013-01-01\", 0) + TimeUnit.DAYS.toMillis(365)));\n- assertThat(parser.parse(\"2013-03-03||/y\", 0), equalTo(parser.parse(\"2013-01-01\", 0)));\n- assertThat(parser.parseRoundCeil(\"2013-03-03||/y\", 
0), equalTo(parser.parse(\"2014-01-01\", 0)));\n+ assertDateMathEquals(\"2014-11-18T14||/d\", \"2014-11-18\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14||/d\", \"2014-11-18T23:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18||/d\", \"2014-11-18\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18||/d\", \"2014-11-18T23:59:59.999\", 0, true, null);\n+ \n+ assertDateMathEquals(\"2014-11-18T14:27||/h\", \"2014-11-18T14\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14:27||/h\", \"2014-11-18T14:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18T14||/H\", \"2014-11-18T14\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14||/H\", \"2014-11-18T14:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18T14:27||/h\", \"2014-11-18T14\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14:27||/h\", \"2014-11-18T14:59:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18T14||/H\", \"2014-11-18T14\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14||/H\", \"2014-11-18T14:59:59.999\", 0, true, null);\n+ \n+ assertDateMathEquals(\"2014-11-18T14:27:32||/m\", \"2014-11-18T14:27\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14:27:32||/m\", \"2014-11-18T14:27:59.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18T14:27||/m\", \"2014-11-18T14:27\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14:27||/m\", \"2014-11-18T14:27:59.999\", 0, true, null);\n+ \n+ assertDateMathEquals(\"2014-11-18T14:27:32.123||/s\", \"2014-11-18T14:27:32\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14:27:32.123||/s\", \"2014-11-18T14:27:32.999\", 0, true, null);\n+ assertDateMathEquals(\"2014-11-18T14:27:32||/s\", \"2014-11-18T14:27:32\", 0, false, null);\n+ assertDateMathEquals(\"2014-11-18T14:27:32||/s\", \"2014-11-18T14:27:32.999\", 0, true, null);\n+ }\n+ \n+ void assertParseException(String msg, String date) {\n+ try {\n+ parser.parse(date, 0);\n+ fail(\"Date: \" + date + \"\\n\" + msg);\n+ } catch (ElasticsearchParseException e) {\n+ // expected\n+ }\n+ }\n+ \n+ public void testIllegalMathFormat() {\n+ assertParseException(\"Expected date math unsupported operator exception\", \"2014-11-18||*5\");\n+ assertParseException(\"Expected date math incompatible rounding exception\", \"2014-11-18||/2m\");\n+ assertParseException(\"Expected date math illegal unit type exception\", \"2014-11-18||+2a\");\n+ assertParseException(\"Expected date math truncation exception\", \"2014-11-18||+12\");\n+ assertParseException(\"Expected date math truncation exception\", \"2014-11-18||-\");\n }\n }", "filename": "src/test/java/org/elasticsearch/common/joda/DateMathParserTests.java", "status": "modified" }, { "diff": "@@ -105,4 +105,34 @@ public void testDateRangeQueryFormat() throws IOException {\n // We expect it\n }\n }\n+\n+ @Test\n+ public void testDateRangeBoundaries() throws IOException {\n+ IndexQueryParserService queryParser = queryParser();\n+ String query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/date_range_query_boundaries_inclusive.json\");\n+ Query parsedQuery = queryParser.parse(query).query();\n+ assertThat(parsedQuery, instanceOf(NumericRangeQuery.class));\n+ NumericRangeQuery rangeQuery = (NumericRangeQuery) parsedQuery;\n+\n+ DateTime min = DateTime.parse(\"2014-11-01T00:00:00.000+00\");\n+ assertThat(rangeQuery.getMin().longValue(), is(min.getMillis()));\n+ assertTrue(rangeQuery.includesMin());\n+\n+ DateTime max = DateTime.parse(\"2014-12-08T23:59:59.999+00\");\n+ 
assertThat(rangeQuery.getMax().longValue(), is(max.getMillis()));\n+ assertTrue(rangeQuery.includesMax());\n+\n+ query = copyToStringFromClasspath(\"/org/elasticsearch/index/query/date_range_query_boundaries_exclusive.json\");\n+ parsedQuery = queryParser.parse(query).query();\n+ assertThat(parsedQuery, instanceOf(NumericRangeQuery.class));\n+ rangeQuery = (NumericRangeQuery) parsedQuery;\n+\n+ min = DateTime.parse(\"2014-11-30T23:59:59.999+00\");\n+ assertThat(rangeQuery.getMin().longValue(), is(min.getMillis()));\n+ assertFalse(rangeQuery.includesMin());\n+\n+ max = DateTime.parse(\"2014-12-08T00:00:00.000+00\");\n+ assertThat(rangeQuery.getMax().longValue(), is(max.getMillis()));\n+ assertFalse(rangeQuery.includesMax());\n+ }\n }", "filename": "src/test/java/org/elasticsearch/index/query/IndexQueryParserFilterDateRangeFormatTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,8 @@\n+{\n+ \"range\" : {\n+ \"born\" : {\n+ \"gt\": \"2014-11-05||/M\",\n+ \"lt\": \"2014-12-08||/d\"\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/query/date_range_query_boundaries_exclusive.json", "status": "added" }, { "diff": "@@ -0,0 +1,8 @@\n+{\n+ \"range\" : {\n+ \"born\" : {\n+ \"gte\": \"2014-11-05||/M\",\n+ \"lte\": \"2014-12-08||/d\"\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/index/query/date_range_query_boundaries_inclusive.json", "status": "added" } ] }
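For reference, a minimal standalone sketch of what the symmetrical endpoint semantics described in this change resolve `"2014-11-08||/d"` to for each range parameter. It uses Joda-Time (which the parser above is built on); the class name is hypothetical and this is an illustration of the resulting endpoints, not the parser implementation itself.

```java
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;

public class DateMathEndpointsSketch {
    public static void main(String[] args) {
        // First and last millisecond of the day that "2014-11-08||/d" rounds to.
        DateTime firstOfDay = new DateTime(2014, 11, 8, 0, 0, 0, 0, DateTimeZone.UTC);
        DateTime lastOfDay = firstOfDay.plusDays(1).minusMillis(1); // 2014-11-08T23:59:59.999Z

        // gte "2014-11-08||/d" -> field >= firstOfDay.getMillis()  (round down, inclusive)
        // lte "2014-11-08||/d" -> field <= lastOfDay.getMillis()   (round up, inclusive)
        // gt  "2014-11-08||/d" -> field >  lastOfDay.getMillis()   (round up, excludes the whole day)
        // lt  "2014-11-08||/d" -> field <  firstOfDay.getMillis()  (round down, excludes the whole day)
        System.out.println(firstOfDay + " .. " + lastOfDay);
    }
}
```

With these endpoints, the `lte` query from the original report no longer matches documents from 2014/11/09, and `gt` excludes the entire rounded day rather than only its first millisecond.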
{ "body": "GeoHashUtils.neighbor produces bad neighbours for even level geohash (when geohash length is even).\n\nFor instance : \n\nFor geohash `u09tv` : \n\nhttp://geohash.gofreerange.com/ (this geohash is in Paris, France).\n\nReal neighbours for this geohash are `[u09wh, u09wj, u09wn, u09tu, u09ty, u09ts, u09tt, u09tw]`\n\nGeoHashUtils.neigbors returns `[u09qh, u09wj, u09yn, u09mu, u09vy, u09ks, u09st, u09uw]`\n", "comments": [], "number": 8526, "title": "Geo: incorrect neighbours computation in GeoHashUtils" }
{ "body": "We don't have to set XLimit and YLimit depending on the level (even or odd), since semantics of x and y are already swapped on each level.\nXLimit is always 7 and YLimit is always 3.\n\nClose #8526\n", "number": 8529, "review_comments": [], "title": "Fix for geohash neighbors when geohash length is even." }
{ "commits": [ { "message": "Fix for geohash neighbors when geohash length is even.\nWe don't have to set XLimit and YLimit depending on the level (even or odd), since semantics of x and y are already swapped on each level.\nXLimit is always 7 and YLimit is always 3.\n\nClose #8526" } ], "files": [ { "diff": "@@ -159,15 +159,13 @@ private final static String neighbor(String geohash, int level, int dx, int dy)\n final int nx = ((level % 2) == 1) ? (x + dx) : (x + dy);\n final int ny = ((level % 2) == 1) ? (y + dy) : (y + dx);\n \n- // define grid limits for current level\n- final int xLimit = ((level % 2) == 0) ? 7 : 3;\n- final int yLimit = ((level % 2) == 0) ? 3 : 7;\n-\n // if the defined neighbor has the same parent a the current cell\n // encode the cell directly. Otherwise find the cell next to this\n // cell recursively. Since encoding wraps around within a cell\n // it can be encoded here.\n- if (nx >= 0 && nx <= xLimit && ny >= 0 && ny <= yLimit) {\n+ // xLimit and YLimit must always be respectively 7 and 3\n+ // since x and y semantics are swapping on each level.\n+ if (nx >= 0 && nx <= 7 && ny >= 0 && ny <= 3) {\n return geohash.substring(0, level - 1) + encode(nx, ny);\n } else {\n String neighbor = neighbor(geohash, level - 1, dx, dy);", "filename": "src/main/java/org/elasticsearch/common/geo/GeoHashUtils.java", "status": "modified" }, { "diff": "@@ -104,5 +104,35 @@ public void testNeighbours() {\n Collection<? super String> neighbors = new ArrayList<>();\n GeoHashUtils.addNeighbors(geohash, neighbors );\n assertEquals(expectedNeighbors, neighbors);\n+\n+ // Border odd geohash\n+ geohash = \"u09x\";\n+ expectedNeighbors = new ArrayList<>();\n+ expectedNeighbors.add(\"u0c2\");\n+ expectedNeighbors.add(\"u0c8\");\n+ expectedNeighbors.add(\"u0cb\");\n+ expectedNeighbors.add(\"u09r\");\n+ expectedNeighbors.add(\"u09z\");\n+ expectedNeighbors.add(\"u09q\");\n+ expectedNeighbors.add(\"u09w\");\n+ expectedNeighbors.add(\"u09y\");\n+ neighbors = new ArrayList<>();\n+ GeoHashUtils.addNeighbors(geohash, neighbors );\n+ assertEquals(expectedNeighbors, neighbors);\n+\n+ // Border even geohash\n+ geohash = \"u09tv\";\n+ expectedNeighbors = new ArrayList<>();\n+ expectedNeighbors.add(\"u09wh\");\n+ expectedNeighbors.add(\"u09wj\");\n+ expectedNeighbors.add(\"u09wn\");\n+ expectedNeighbors.add(\"u09tu\");\n+ expectedNeighbors.add(\"u09ty\");\n+ expectedNeighbors.add(\"u09ts\");\n+ expectedNeighbors.add(\"u09tt\");\n+ expectedNeighbors.add(\"u09tw\");\n+ neighbors = new ArrayList<>();\n+ GeoHashUtils.addNeighbors(geohash, neighbors );\n+ assertEquals(expectedNeighbors, neighbors);\n }\n }", "filename": "src/test/java/org/elasticsearch/index/search/geo/GeoHashUtilsTests.java", "status": "modified" } ] }
{ "body": "Query is pretty big and I have no idea what's causing it. I don't think that's relevant anyway, IMHO the code should check for nulls and throw a more specific exception, otherwise we clients are left in the dark about what we're doing wrong.\nHere's the stack trace:\n\n```\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:516)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:257)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)\n at java.lang.Thread.run(Unknown Source)\nCaused by: java.lang.NullPointerException\n at org.elasticsearch.search.aggregations.bucket.filter.FilterParser.parse(FilterParser.java:42)\n at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:130)\n at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:120)\n at org.elasticsearch.search.aggregations.AggregatorParsers.parseAggregators(AggregatorParsers.java:77)\n at org.elasticsearch.search.aggregations.AggregationParseElement.parse(AggregationParseElement.java:60)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:644)\n ... 9 more\n```\n", "comments": [ { "body": "> Query is pretty big and I have no idea what's causing it. I don't think that's relevant anyway, IMHO the code should check for nulls and throw a more specific exception, otherwise we clients are left in the dark about what we're doing wrong.\n\nAgreed - can you tell us what you were doing when you saw this error? What does the query/agg look like?\n", "created_at": "2014-11-11T14:03:40Z" }, { "body": "It was mostly simple terms aggs but also one global agg with a filter. IIRC the query itself was that same filter being applied to the global agg.\n", "created_at": "2014-11-11T14:42:49Z" }, { "body": "@mausch could you provide some examples? Clearly we're not seeing this in our testing, so it would be helpful to see what you are doing different.\n", "created_at": "2014-11-11T14:50:54Z" }, { "body": "Sorry, I don't have that query any more. But I still insist that it shouldn't matter. I'd run some random-testing tool (e.g. https://github.com/pholser/junit-quickcheck ) on `SearchService.parseSource` and `FilterParser.parse`. No input ever should cause a NPE in my opinion, otherwise it's a bug.\n", "created_at": "2014-11-11T16:03:02Z" }, { "body": "@mausch I agree with you - we don't wilfully leave NPEs in our code. The trick is finding out what is causing it. \n", "created_at": "2014-11-11T16:22:38Z" }, { "body": "I am new to this community and I think I can fix this one to start with. Has some one already fixed it ?\n", "created_at": "2014-11-11T16:33:20Z" }, { "body": "@clintongormley Indeed, especially with complex code. Sorry I lost the original query. The only thing I can advice is to start applying randomized testing (if you haven't already, I don't know much about the ES codebase). 
It's great to find this kind of bugs. IIRC Lucene has started using it a couple of years ago.\n\n@vaibhavkulkar unless there's already some randomized testing in place in the project this isn't trivial to fix.\n", "created_at": "2014-11-11T16:45:35Z" }, { "body": "can you tell what filters you are using?\n", "created_at": "2014-11-11T16:55:21Z" }, { "body": "IIRC it was an And filter with a single terms filter within.\n", "created_at": "2014-11-11T17:22:37Z" }, { "body": "@mausch also, what version of Elasticsearch did you see this on?\n", "created_at": "2014-11-11T18:11:13Z" }, { "body": "The filters aggregation should in at FilterParser.java line 41 check for null. A null value is a valid return value for a filter parser (and for example the `and` filter does this).\n", "created_at": "2014-11-12T11:19:11Z" }, { "body": "@clintongormley version 1.3.4\n", "created_at": "2014-11-12T12:48:08Z" }, { "body": "I found the same NPE.\nthe version is 1.3.2.\nIn my case,it happened when i used FilterAggregation ,but sent no filter.\nIn java client,it looks like this.\n\n``` java\n FilterBuilder fb = new AndFilterBuilder();\n FilterAggregationBuilder fab = AggregationBuilders\n .filter(\"filter\")\n .filter(fb)\n .subAggregation(AggregationBuilders.terms(\"brandId\").field(\"brand_id\").size(0))\n .subAggregation(AggregationBuilders.terms(\"allAttr\").field(\"all_attr\").size(0));\n```\n\nIn http query it looks like this.\n![elasticsearch npe](https://cloud.githubusercontent.com/assets/1748926/5064466/7748cd16-6e44-11e4-9a16-239cd4538f30.png)\n", "created_at": "2014-11-17T02:41:51Z" }, { "body": "@martijnvg I can see where you say check for null but do you believe the correct response is then to throw a parse exception with a friendly message or to process the query using something like a MatchNoDocsFilter?\n", "created_at": "2014-11-17T10:48:26Z" }, { "body": "@markharwood I think if a filter parser returns null then the filters agg should interpret this as nothing is going to match, so `MatchNoDocsFilter` should be used.\n", "created_at": "2014-11-17T10:51:22Z" }, { "body": "We had the same issue with bool query and what we do there is: \n\n``` Java\n if (clauses.isEmpty()) {\n return new MatchAllDocsQuery();\n }\n```\n\nI think bool filter should be consistent here - I am not sure which one we should change to be honest...\n", "created_at": "2014-11-17T10:57:13Z" }, { "body": "> I am not sure which one we should change to be honest...\n\nTough one. My gut feel on the empty filters array would be that it would mean \"match none\" but if we have a precedent for this on the query side which is \"match all\" then I guess we have these ugly options:\n1) API Inconsistency (query=match all, filter = match none)\n2) Consistently ALL (but empty filters matching all might feel weird)\n3) Consistently NONE ( introduces a backwards compatibility issue here for change in query behaviour)\n4) Parser error (avoids any ambiguity by forcing users to declare logic more explicitly)\n", "created_at": "2014-11-17T11:24:14Z" }, { "body": "The `bool` query change was made to make it easier to build such queries in tools like Kibana, ie they have a placeholder to which clauses can be added, but which doesn't fail if none are added.\n\nIf we don't specify a `query` (eg in `filtered` query) then it defaults to `match_all`. I think the same logic works for filters here. We're not specifying leaf filters against any fields, there is just no leaf filter. 
I'd lean towards matching ALL.\n", "created_at": "2014-11-17T11:52:19Z" }, { "body": "+1 for what clint said. I thinks the deal here is `no restrictions` == `match all` so I think translating it to match all it the right thing. I personally think it's not perfectly fine for a filter parser to return `null` btw. I think the actual query/filter parser should handle these cases and return valid filter instead. @martijnvg what do you think?\n", "created_at": "2014-11-17T14:27:18Z" } ], "number": 8438, "title": "Parse Failure: NullPointerException" }
{ "body": "Added Junit test that recreates the error and fixed FilterParser to default to using a MatchAllDocsFilter if the requested filter clause is left empty.\nCloses #8438\n", "number": 8527, "review_comments": [], "title": "Parser throws NullPointerException when Filter aggregation clause is empty" }
{ "commits": [ { "message": "Parser throws NullPointerException when Filter aggregation clause is empty.\nAdded Junit test that recreates the error and fixed FilterParser to default to using a MatchAllDocsFilter if the requested filter clause is left empty.\nCloses #8438" }, { "message": "Added similar logic and tests for Filters aggregation" } ], "files": [ { "diff": "@@ -18,6 +18,7 @@\n */\n package org.elasticsearch.search.aggregations.bucket.filter;\n \n+import org.elasticsearch.common.lucene.search.MatchAllDocsFilter;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.ParsedFilter;\n import org.elasticsearch.search.aggregations.Aggregator;\n@@ -39,7 +40,8 @@ public String type() {\n @Override\n public AggregatorFactory parse(String aggregationName, XContentParser parser, SearchContext context) throws IOException {\n ParsedFilter filter = context.queryParserService().parseInnerFilter(parser);\n- return new FilterAggregator.Factory(aggregationName, filter.filter());\n+\n+ return new FilterAggregator.Factory(aggregationName, filter == null ? new MatchAllDocsFilter() : filter.filter());\n }\n \n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/filter/FilterParser.java", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.search.aggregations.bucket.filters;\n \n+import org.elasticsearch.common.lucene.search.MatchAllDocsFilter;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.query.ParsedFilter;\n import org.elasticsearch.search.SearchParseException;\n@@ -60,7 +61,7 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n key = parser.currentName();\n } else {\n ParsedFilter filter = context.queryParserService().parseInnerFilter(parser);\n- filters.add(new FiltersAggregator.KeyedFilter(key, filter.filter()));\n+ filters.add(new FiltersAggregator.KeyedFilter(key, filter == null ? new MatchAllDocsFilter() : filter.filter()));\n }\n }\n } else {\n@@ -72,7 +73,8 @@ public AggregatorFactory parse(String aggregationName, XContentParser parser, Se\n int idx = 0;\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n ParsedFilter filter = context.queryParserService().parseInnerFilter(parser);\n- filters.add(new FiltersAggregator.KeyedFilter(String.valueOf(idx), filter.filter()));\n+ filters.add(new FiltersAggregator.KeyedFilter(String.valueOf(idx), filter == null ? 
new MatchAllDocsFilter()\n+ : filter.filter()));\n idx++;\n }\n } else {", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/filters/FiltersParser.java", "status": "modified" }, { "diff": "@@ -21,6 +21,8 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.index.query.AndFilterBuilder;\n+import org.elasticsearch.index.query.FilterBuilder;\n import org.elasticsearch.search.aggregations.bucket.filter.Filter;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.metrics.avg.Avg;\n@@ -35,7 +37,9 @@\n import static org.elasticsearch.index.query.FilterBuilders.matchAllFilter;\n import static org.elasticsearch.index.query.FilterBuilders.termFilter;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.avg;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filter;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n@@ -97,6 +101,20 @@ public void simple() throws Exception {\n assertThat(filter.getDocCount(), equalTo((long) numTag1Docs));\n }\n \n+ // See NullPointer issue when filters are empty:\n+ // https://github.com/elasticsearch/elasticsearch/issues/8438\n+ @Test\n+ public void emptyFilterDeclarations() throws Exception {\n+ FilterBuilder emptyFilter = new AndFilterBuilder();\n+ SearchResponse response = client().prepareSearch(\"idx\").addAggregation(filter(\"tag1\").filter(emptyFilter)).execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Filter filter = response.getAggregations().get(\"tag1\");\n+ assertThat(filter, notNullValue());\n+ assertThat(filter.getDocCount(), equalTo((long) numDocs));\n+ }\n+\n @Test\n public void withSubAggregation() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\")", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/FilterTests.java", "status": "modified" }, { "diff": "@@ -22,6 +22,8 @@\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.action.index.IndexRequestBuilder;\n import org.elasticsearch.action.search.SearchResponse;\n+import org.elasticsearch.index.query.AndFilterBuilder;\n+import org.elasticsearch.index.query.FilterBuilder;\n import org.elasticsearch.search.aggregations.bucket.filters.Filters;\n import org.elasticsearch.search.aggregations.bucket.histogram.Histogram;\n import org.elasticsearch.search.aggregations.metrics.avg.Avg;\n@@ -38,7 +40,9 @@\n import static org.elasticsearch.index.query.FilterBuilders.matchAllFilter;\n import static org.elasticsearch.index.query.FilterBuilders.termFilter;\n import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery;\n-import static org.elasticsearch.search.aggregations.AggregationBuilders.*;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.avg;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.filters;\n+import static org.elasticsearch.search.aggregations.AggregationBuilders.histogram;\n import static 
org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchResponse;\n import static org.hamcrest.Matchers.equalTo;\n import static org.hamcrest.Matchers.is;\n@@ -112,6 +116,27 @@ public void simple() throws Exception {\n assertThat(bucket.getDocCount(), equalTo((long) numTag2Docs));\n }\n \n+ // See NullPointer issue when filters are empty:\n+ // https://github.com/elasticsearch/elasticsearch/issues/8438\n+ @Test\n+ public void emptyFilterDeclarations() throws Exception {\n+ FilterBuilder emptyFilter = new AndFilterBuilder();\n+ SearchResponse response = client().prepareSearch(\"idx\")\n+ .addAggregation(filters(\"tags\").filter(\"all\", emptyFilter).filter(\"tag1\", termFilter(\"tag\", \"tag1\"))).execute()\n+ .actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ Filters filters = response.getAggregations().get(\"tags\");\n+ assertThat(filters, notNullValue());\n+ Filters.Bucket allBucket = filters.getBucketByKey(\"all\");\n+ assertThat(allBucket.getDocCount(), equalTo((long) numDocs));\n+\n+ Filters.Bucket bucket = filters.getBucketByKey(\"tag1\");\n+ assertThat(bucket, Matchers.notNullValue());\n+ assertThat(bucket.getDocCount(), equalTo((long) numTag1Docs));\n+ }\n+\n @Test\n public void withSubAggregation() throws Exception {\n SearchResponse response = client().prepareSearch(\"idx\")", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/FiltersTests.java", "status": "modified" } ] }
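A usage-level sketch, adapted from the test added in this change (index name and client wiring are assumed), showing the behavior the fix settles on: an empty `and` filter inside a filter aggregation is parsed to a null filter, which is now treated as "no restriction", mirroring how an empty bool query falls back to `MatchAllDocsQuery`.

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.AndFilterBuilder;
import org.elasticsearch.index.query.FilterBuilder;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.filter.Filter;

public class EmptyFilterAggSketch {
    public static Filter emptyFilterAgg(Client client) {
        FilterBuilder emptyFilter = new AndFilterBuilder(); // no clauses: parses to a null filter
        SearchResponse response = client.prepareSearch("idx")
                .addAggregation(AggregationBuilders.filter("all").filter(emptyFilter))
                .execute().actionGet();
        // No longer throws a NullPointerException; the bucket counts every document in "idx".
        return response.getAggregations().get("all");
    }
}
```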
{ "body": "Using a geohash_grid aggregation used to work on arrays of points, but with ES 1.4.0 an exception occurs instead. The following curl commands reproduce the issue on a clean installation of ES:\n\n```\n# create index with geo_point mapping\ncurl -XPUT localhost:9200/test -d '{\n \"mappings\": {\n \"test\": {\n \"properties\": {\n \"points\": {\n \"type\": \"geo_point\",\n \"geohash_prefix\": true\n }\n }\n }\n }\n}'\n\n# insert documents\ncurl -XPUT localhost:9200/test/test/1?refresh=true -d '{ \"points\": [[1,2], [2,3]] }'\ncurl -XPUT localhost:9200/test/test/2?refresh=true -d '{ \"points\": [[2,3], [3,4]] }'\n\n# perform aggregation\ncurl -XGET localhost:9200/test/test/_search?pretty -d '{\n \"size\": 0,\n \"aggs\": {\n \"a1\": {\n \"geohash_grid\": {\n \"field\": \"points\",\n \"precision\": 3\n }\n }\n }\n}'\n```\n\nOn Elasticsearch 1.3.5 this produces the expected result:\n\n```\n...\n \"aggregations\" : {\n \"a1\" : {\n \"buckets\" : [ {\n \"key\" : \"s09\",\n \"doc_count\" : 2\n }, {\n \"key\" : \"s0d\",\n \"doc_count\" : 1\n }, {\n \"key\" : \"s02\",\n \"doc_count\" : 1\n } ]\n }\n }\n...\n```\n\nHowever on Elasticsearch 1.4.0 this triggers a failure:\n\n```\n{\n \"took\" : 76,\n \"timed_out\" : false,\n \"_shards\" : {\n \"total\" : 5,\n \"successful\" : 3,\n \"failed\" : 2,\n \"failures\" : [ {\n \"index\" : \"test\",\n \"shard\" : 2,\n \"status\" : 500,\n \"reason\" : \"QueryPhaseExecutionException[[test][2]: query[ConstantScore(cache(_type:test))],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException[1]; \"\n }, {\n \"index\" : \"test\",\n \"shard\" : 3,\n \"status\" : 500,\n \"reason\" : \"QueryPhaseExecutionException[[test][3]: query[ConstantScore(cache(_type:test))],from[0],size[0]: Query Failed [Failed to execute main query]]; nested: ArrayIndexOutOfBoundsException[1]; \"\n } ]\n },\n \"hits\" : {\n \"total\" : 0,\n \"max_score\" : 0.0,\n \"hits\" : [ ]\n },\n \"aggregations\" : {\n \"a1\" : {\n \"buckets\" : [ ]\n }\n }\n}\n```\n\nThe log contains this exception:\n\n```\n[2014-11-17 17:28:25,121][DEBUG][action.search.type ] [Franklin Storm] [test][3], node[-S9ijRKKQH-IEAr3sUW14Q], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1fa40d49]\norg.elasticsearch.search.query.QueryPhaseExecutionException: [test][3]: query[ConstantScore(cache(_type:test))],from[0],size[0]: Query Failed [Failed to execute main query]\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:163)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:275)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:744)\nCaused by: java.lang.ArrayIndexOutOfBoundsException: 1\n at org.elasticsearch.search.aggregations.bucket.geogrid.GeoHashGridParser$GeoGridFactory$CellValues.setDocument(GeoHashGridParser.java:154)\n at org.elasticsearch.search.aggregations.bucket.geogrid.GeoHashGridAggregator.collect(GeoHashGridAggregator.java:73)\n at 
org.elasticsearch.search.aggregations.AggregationPhase$AggregationsCollector.collect(AggregationPhase.java:161)\n at org.elasticsearch.common.lucene.MultiCollector.collect(MultiCollector.java:60)\n at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:193)\n at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:163)\n at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:35)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)\n at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:191)\n at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:309)\n at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:117)\n ... 7 more\n```\n", "comments": [ { "body": "We did quite a significant refactoring of fielddata in 1.4 in order to better integrate with Lucene, I'm wondering that this could be related. I'm looking into it...\n", "created_at": "2014-11-17T16:58:26Z" } ], "number": 8507, "title": "Exception from geohash_grid aggregation with array of points (ES 1.4.0)" }
{ "body": "This aggregation creates an anonymous fielddata instance that takes geo points\nand turns them into a geo hash encoded as a long. A bug was introduced in 1.4\nbecause of a fielddata refactoring: the fielddata instance tries to populate\nan array with values without first making sure that it is large enough.\n\nClose #8507\n", "number": 8513, "review_comments": [ { "body": "This is a weird API, where you must first set a protected member, then call a protected function. Perhaps it could be reworked to have `count` be private, and change `grow()` to `resize(int)`? I only see 2 cases where `count` is set, but `grow()` is not, and they are for 0 and 1, which I think are just special cases and can be skipped on entry to to resize?\n", "created_at": "2014-11-17T19:41:35Z" } ], "title": "Fix geohash grid aggregation on multi-valued fields." }
{ "commits": [ { "message": "Aggregations: Fix geohash grid aggregation on multi-valued fields.\n\nThis aggregation creates an anonymous fielddata instance that takes geo points\nand turns them into a geo hash encoded as a long. A bug was introduced in 1.4\nbecause of a fielddata refactoring: the fielddata instance tries to populate\nan array with values without first making sure that it is large enough.\n\nClose #8507" }, { "message": "Review round 1" } ], "files": [ { "diff": "@@ -29,7 +29,7 @@\n */\n public abstract class SortingNumericDocValues extends SortedNumericDocValues {\n \n- protected int count;\n+ private int count;\n protected long[] values;\n private final Sorter sorter;\n \n@@ -52,9 +52,11 @@ protected int compare(int i, int j) {\n }\n \n /**\n- * Make sure the {@link #values} array can store at least {@link #count} entries.\n+ * Set the {@link #count()} and ensure that the {@link #values} array can\n+ * store at least that many entries.\n */\n- protected final void grow() {\n+ protected final void resize(int newSize) {\n+ count = newSize;\n values = ArrayUtil.grow(values, count);\n }\n ", "filename": "src/main/java/org/elasticsearch/index/fielddata/SortingNumericDocValues.java", "status": "modified" }, { "diff": "@@ -148,8 +148,8 @@ protected CellValues(ValuesSource.GeoPoint geoPointValues, int precision) {\n public void setDocument(int docId) {\n geoValues = geoPointValues.geoPointValues();\n geoValues.setDocument(docId);\n- count = geoValues.count();\n- for (int i = 0; i < count; ++i) {\n+ resize(geoValues.count());\n+ for (int i = 0; i < count(); ++i) {\n GeoPoint target = geoValues.valueAt(i);\n values[i] = GeoHashUtils.encodeAsLong(target.getLat(), target.getLon(), precision);\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/bucket/geogrid/GeoHashGridParser.java", "status": "modified" }, { "diff": "@@ -484,9 +484,8 @@ public LongValues(Numeric source, SearchScript script) {\n public void setDocument(int docId) {\n script.setNextDocId(docId);\n source.longValues().setDocument(docId);\n- count = source.longValues().count();\n- grow();\n- for (int i = 0; i < count; ++i) {\n+ resize(source.longValues().count());\n+ for (int i = 0; i < count(); ++i) {\n script.setNextVar(\"_value\", source.longValues().valueAt(i));\n values[i] = script.runAsLong();\n }", "filename": "src/main/java/org/elasticsearch/search/aggregations/support/ValuesSource.java", "status": "modified" }, { "diff": "@@ -51,30 +51,28 @@ public void setDocument(int docId) {\n final Object value = script.run();\n \n if (value == null) {\n- count = 0;\n+ resize(0);\n }\n \n else if (value instanceof Number) {\n- count = 1;\n+ resize(1);\n values[0] = ((Number) value).longValue();\n }\n \n else if (value.getClass().isArray()) {\n- count = Array.getLength(value);\n- grow();\n- for (int i = 0; i < count; ++i) {\n+ resize(Array.getLength(value));\n+ for (int i = 0; i < count(); ++i) {\n values[i] = ((Number) Array.get(value, i)).longValue();\n }\n }\n \n else if (value instanceof Collection) {\n- count = ((Collection<?>) value).size();\n- grow();\n+ resize(((Collection<?>) value).size());\n int i = 0;\n for (Iterator<?> it = ((Collection<?>) value).iterator(); it.hasNext(); ++i) {\n values[i] = ((Number) it.next()).longValue();\n }\n- assert i == count;\n+ assert i == count();\n }\n \n else {", "filename": "src/main/java/org/elasticsearch/search/aggregations/support/values/ScriptLongValues.java", "status": "modified" }, { "diff": "@@ -33,8 +33,11 @@\n import org.junit.Test;\n \n import 
java.util.ArrayList;\n+import java.util.Arrays;\n+import java.util.HashSet;\n import java.util.List;\n import java.util.Random;\n+import java.util.Set;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n import static org.elasticsearch.search.aggregations.AggregationBuilders.geohashGrid;\n@@ -46,39 +49,43 @@\n @ElasticsearchIntegrationTest.SuiteScopeTest\n public class GeoHashGridTests extends ElasticsearchIntegrationTest {\n \n- private IndexRequestBuilder indexCity(String name, String latLon) throws Exception {\n+ static ObjectIntMap<String> expectedDocCountsForGeoHash = null;\n+ static ObjectIntMap<String> multiValuedExpectedDocCountsForGeoHash = null;\n+ static int highestPrecisionGeohash = 12;\n+ static int numDocs = 100;\n+\n+ static String smallestGeoHash = null;\n+\n+ private static IndexRequestBuilder indexCity(String index, String name, List<String> latLon) throws Exception {\n XContentBuilder source = jsonBuilder().startObject().field(\"city\", name);\n if (latLon != null) {\n source = source.field(\"location\", latLon);\n }\n source = source.endObject();\n- return client().prepareIndex(\"idx\", \"type\").setSource(source);\n+ return client().prepareIndex(index, \"type\").setSource(source);\n }\n \n-\n- static ObjectIntMap<String> expectedDocCountsForGeoHash = null;\n- static int highestPrecisionGeohash = 12;\n- static int numRandomPoints = 100;\n-\n- static String smallestGeoHash = null;\n+ private static IndexRequestBuilder indexCity(String index, String name, String latLon) throws Exception {\n+ return indexCity(index, name, Arrays.<String>asList(latLon));\n+ }\n \n @Override\n public void setupSuiteScopeCluster() throws Exception {\n+ createIndex(\"idx_unmapped\");\n+\n assertAcked(prepareCreate(\"idx\")\n .addMapping(\"type\", \"location\", \"type=geo_point\", \"city\", \"type=string,index=not_analyzed\"));\n \n- createIndex(\"idx_unmapped\");\n-\n List<IndexRequestBuilder> cities = new ArrayList<>();\n Random random = getRandom();\n- expectedDocCountsForGeoHash = new ObjectIntOpenHashMap<>(numRandomPoints * 2);\n- for (int i = 0; i < numRandomPoints; i++) {\n+ expectedDocCountsForGeoHash = new ObjectIntOpenHashMap<>(numDocs * 2);\n+ for (int i = 0; i < numDocs; i++) {\n //generate random point\n double lat = (180d * random.nextDouble()) - 90d;\n double lng = (360d * random.nextDouble()) - 180d;\n String randomGeoHash = GeoHashUtils.encode(lat, lng, highestPrecisionGeohash);\n //Index at the highest resolution\n- cities.add(indexCity(randomGeoHash, lat + \", \" + lng));\n+ cities.add(indexCity(\"idx\", randomGeoHash, lat + \", \" + lng));\n expectedDocCountsForGeoHash.put(randomGeoHash, expectedDocCountsForGeoHash.getOrDefault(randomGeoHash, 0) + 1);\n //Update expected doc counts for all resolutions..\n for (int precision = highestPrecisionGeohash - 1; precision > 0; precision--) {\n@@ -90,6 +97,35 @@ public void setupSuiteScopeCluster() throws Exception {\n }\n }\n indexRandom(true, cities);\n+\n+ assertAcked(prepareCreate(\"multi_valued_idx\")\n+ .addMapping(\"type\", \"location\", \"type=geo_point\", \"city\", \"type=string,index=not_analyzed\"));\n+\n+ cities = new ArrayList<>();\n+ multiValuedExpectedDocCountsForGeoHash = new ObjectIntOpenHashMap<>(numDocs * 2);\n+ for (int i = 0; i < numDocs; i++) {\n+ final int numPoints = random.nextInt(4);\n+ List<String> points = new ArrayList<>();\n+ // TODO (#8512): this should be a Set, not a List. 
Currently if a document has two positions that have\n+ // the same geo hash, it will increase the doc_count for this geo hash by 2 instead of 1\n+ List<String> geoHashes = new ArrayList<>();\n+ for (int j = 0; j < numPoints; ++j) {\n+ double lat = (180d * random.nextDouble()) - 90d;\n+ double lng = (360d * random.nextDouble()) - 180d;\n+ points.add(lat + \",\" + lng);\n+ // Update expected doc counts for all resolutions..\n+ for (int precision = highestPrecisionGeohash; precision > 0; precision--) {\n+ final String geoHash = GeoHashUtils.encode(lat, lng, precision);\n+ geoHashes.add(geoHash);\n+ }\n+ }\n+ cities.add(indexCity(\"multi_valued_idx\", Integer.toString(i), points));\n+ for (String hash : geoHashes) {\n+ multiValuedExpectedDocCountsForGeoHash.put(hash, multiValuedExpectedDocCountsForGeoHash.getOrDefault(hash, 0) + 1);\n+ }\n+ }\n+ indexRandom(true, cities);\n+\n ensureSearchable();\n }\n \n@@ -119,6 +155,31 @@ public void simple() throws Exception {\n }\n }\n \n+ @Test\n+ public void multivalued() throws Exception {\n+ for (int precision = 1; precision <= highestPrecisionGeohash; precision++) {\n+ SearchResponse response = client().prepareSearch(\"multi_valued_idx\")\n+ .addAggregation(geohashGrid(\"geohashgrid\")\n+ .field(\"location\")\n+ .precision(precision)\n+ )\n+ .execute().actionGet();\n+\n+ assertSearchResponse(response);\n+\n+ GeoHashGrid geoGrid = response.getAggregations().get(\"geohashgrid\");\n+ for (GeoHashGrid.Bucket cell : geoGrid.getBuckets()) {\n+ String geohash = cell.getKey();\n+\n+ long bucketCount = cell.getDocCount();\n+ int expectedBucketCount = multiValuedExpectedDocCountsForGeoHash.get(geohash);\n+ assertNotSame(bucketCount, 0);\n+ assertEquals(\"Geohash \" + geohash + \" has wrong doc count \",\n+ expectedBucketCount, bucketCount);\n+ }\n+ }\n+ }\n+\n @Test\n public void filtered() throws Exception {\n GeoBoundingBoxFilterBuilder bbox = new GeoBoundingBoxFilterBuilder(\"location\");", "filename": "src/test/java/org/elasticsearch/search/aggregations/bucket/GeoHashGridTests.java", "status": "modified" } ] }
{ "body": "If you do a _bulk that contains an update to a child doc (parent/child) and you don't (or forget to) specify the parent id, you will get an NPE error message in the item response. It would be good to adjust the error message to RoutingMissingException (just like when you do a single update (not _bulk) to the same doc but forget to specify parent id.\n\nSteps to reproduce:\n\n```\ncurl -XDELETE localhost:9200/test1\n\ncurl -XPUT localhost:9200/test1 -d '{\n \"mappings\": {\n \"p\": {},\n \"c\": {\n \"_parent\": {\n \"type\": \"p\"\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test1/c/1?parent=1 -d '{\n}'\n\ncurl -XPOST localhost:9200/test1/c/_bulk -d '\n{ \"update\": { \"_id\": \"1\" }}\n{ \"doc\": { \"foo\": \"bar\" } }\n'\n```\n\nResponse:\n\n```\n{\"error\":\"NullPointerException[null]\",\"status\":500}\n```\n", "comments": [ { "body": "This is fixed in: https://github.com/elasticsearch/elasticsearch/pull/8378\n", "created_at": "2014-11-07T06:19:32Z" } ], "number": 8365, "title": "Bulk update child doc, NPE error message when parent is not specified" }
{ "body": "Now each error is reported in bulk response rather than causing entire bulk to fail.\nAdded a Junit test but the use of TransportClient means the error is manifested differently to a REST based request - instead of a NullPointer the whole of the bulk request failed with a RoutingMissingException. Changed TransportBulkAction to catch this exception and treat it the same as the existing logic for a ElasticsearchParseException - the individual bulk request items are flagged and reported individually rather than failing the whole bulk request.\n\nCloses #8365\n", "number": 8506, "review_comments": [], "title": "Missing parent routing causes NullPointerException in Bulk API" }
{ "commits": [ { "message": "Bulk indexing issue - missing parent routing causes NullPointerException.\nNow each error is reported in bulk response rather than causing entire bulk to fail.\nAdded a Junit test but the use of TransportClient means the error is manifested differently to a REST based request - instead of a NullPointer the whole of the bulk request failed with a RoutingMissingException. Changed TransportBulkAction to catch this exception and treat it the same as the existing logic for a ElasticsearchParseException - the individual bulk request items are flagged and reported individually rather than failing the whole bulk request.\n\nCloses #8365" } ], "files": [ { "diff": "@@ -22,12 +22,14 @@\n import com.google.common.collect.Lists;\n import com.google.common.collect.Maps;\n import com.google.common.collect.Sets;\n+\n import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.ActionListener;\n import org.elasticsearch.action.ActionRequest;\n import org.elasticsearch.action.DocumentRequest;\n+import org.elasticsearch.action.RoutingMissingException;\n import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;\n import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction;\n@@ -227,7 +229,7 @@ private void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n }\n try {\n indexRequest.process(metaData, mappingMd, allowIdGeneration, concreteIndex);\n- } catch (ElasticsearchParseException e) {\n+ } catch (ElasticsearchParseException | RoutingMissingException e) {\n BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex, indexRequest.type(), indexRequest.id(), e);\n BulkItemResponse bulkItemResponse = new BulkItemResponse(i, \"index\", failure);\n responses.set(i, bulkItemResponse);\n@@ -285,7 +287,10 @@ private void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n String concreteIndex = concreteIndices.getConcreteIndex(updateRequest.index());\n MappingMetaData mappingMd = clusterState.metaData().index(concreteIndex).mappingOrDefault(updateRequest.type());\n if (mappingMd != null && mappingMd.routing().required() && updateRequest.routing() == null) {\n- continue; // What to do?\n+ BulkItemResponse.Failure failure = new BulkItemResponse.Failure(updateRequest.index(), updateRequest.type(),\n+ updateRequest.id(), \"routing is required for this item\", RestStatus.BAD_REQUEST);\n+ responses.set(i, new BulkItemResponse(i, updateRequest.type(), failure));\n+ continue;\n }\n ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, updateRequest.type(), updateRequest.id(), updateRequest.routing()).shardId();\n List<BulkItemRequest> list = requestsByShard.get(shardId);", "filename": "src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n package org.elasticsearch.document;\n \n import com.google.common.base.Charsets;\n+\n import org.elasticsearch.action.admin.indices.alias.Alias;\n import org.elasticsearch.action.bulk.BulkItemResponse;\n import org.elasticsearch.action.bulk.BulkRequest;\n@@ -47,8 +48,15 @@\n import java.util.concurrent.CyclicBarrier;\n \n import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;\n-import static 
org.elasticsearch.test.hamcrest.ElasticsearchAssertions.*;\n-import static org.hamcrest.Matchers.*;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertExists;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertHitCount;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertNoFailures;\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertSearchHits;\n+import static org.hamcrest.Matchers.containsString;\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.nullValue;\n \n public class BulkTests extends ElasticsearchIntegrationTest {\n \n@@ -474,6 +482,40 @@ public void testBulkUpdateUpsertWithParent() throws Exception {\n assertSearchHits(searchResponse, \"child1\");\n }\n \n+ /*\n+ * Test for https://github.com/elasticsearch/elasticsearch/issues/8365\n+ */\n+ @Test\n+ public void testBulkUpdateChildMissingParentRouting() throws Exception {\n+ assertAcked(prepareCreate(\"test\").addMapping(\"parent\", \"{\\\"parent\\\":{}}\").addMapping(\"child\",\n+ \"{\\\"child\\\": {\\\"_parent\\\": {\\\"type\\\": \\\"parent\\\"}}}\"));\n+ ensureGreen();\n+\n+ BulkRequestBuilder builder = client().prepareBulk();\n+\n+ byte[] addParent = new BytesArray(\"{\\\"index\\\" : { \\\"_index\\\" : \\\"test\\\", \\\"_type\\\" : \\\"parent\\\", \\\"_id\\\" : \\\"parent1\\\"}}\\n\"\n+ + \"{\\\"field1\\\" : \\\"value1\\\"}\\n\").array();\n+\n+ byte[] addChildOK = new BytesArray(\n+ \"{\\\"index\\\" : { \\\"_id\\\" : \\\"child1\\\", \\\"_type\\\" : \\\"child\\\", \\\"_index\\\" : \\\"test\\\", \\\"parent\\\" : \\\"parent1\\\"} }\\n\"\n+ + \"{ \\\"field1\\\" : \\\"value1\\\"}\\n\").array();\n+ byte[] addChildMissingRouting = new BytesArray(\n+ \"{\\\"index\\\" : { \\\"_id\\\" : \\\"child2\\\", \\\"_type\\\" : \\\"child\\\", \\\"_index\\\" : \\\"test\\\"} }\\n\" + \"{ \\\"field1\\\" : \\\"value1\\\"}\\n\")\n+ .array();\n+\n+ builder.add(addParent, 0, addParent.length, false);\n+ builder.add(addChildOK, 0, addChildOK.length, false);\n+ builder.add(addChildMissingRouting, 0, addChildMissingRouting.length, false);\n+ builder.add(addChildOK, 0, addChildOK.length, false);\n+\n+ BulkResponse bulkResponse = builder.get();\n+ assertThat(bulkResponse.getItems().length, equalTo(4));\n+ assertThat(bulkResponse.getItems()[0].isFailed(), equalTo(false));\n+ assertThat(bulkResponse.getItems()[1].isFailed(), equalTo(false));\n+ assertThat(bulkResponse.getItems()[2].isFailed(), equalTo(true));\n+ assertThat(bulkResponse.getItems()[3].isFailed(), equalTo(false));\n+ }\n+\n @Test\n public void testFailingVersionedUpdatedOnBulk() throws Exception {\n createIndex(\"test\");", "filename": "src/test/java/org/elasticsearch/document/BulkTests.java", "status": "modified" } ] }
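With the change above, an update item that lacks parent routing no longer aborts the whole bulk request (or trips an NPE); it is reported as a failed item in the response. A small sketch of how a caller could surface those per-item failures with the standard Java bulk API (method names from the 1.x client, reporting kept deliberately simple):

``` java
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkResponse;

public class BulkFailureInspection {

    // Sketch: iterate the bulk response and print per-item failures, e.g. the
    // "routing is required" / RoutingMissingException items described above.
    public static void reportFailures(BulkResponse response) {
        if (!response.hasFailures()) {
            return;
        }
        for (BulkItemResponse item : response.getItems()) {
            if (item.isFailed()) {
                System.out.println("item " + item.getItemId() + " [" + item.getOpType()
                        + "] failed: " + item.getFailureMessage());
            }
        }
    }
}
```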
{ "body": "`bin/elasticsearch.in.sh` specifies that the GC log file should be stored at `/var/log/elasticsearch/gc.log` (when `ES_USE_GC_LOGGING` is set).\n\nThis location would not exist by default, and I also think it is a rather odd place given that other logs end up in the (existing) `logs` directory.\n\nCould this be changed to \n1. by default use $ES_HOME/logs/gc.log \n2. be overrideable by another setting so one doesn't need to modify the script file\n\nThis issue has now caused about 50% of my node restarts when updating ... I always forget about it, see the error, wait for the cluster to be reasonable, and then restart again :)\n", "comments": [ { "body": "PR #8479 has a fix for this issue. \n\nNote that the windows .bat file already did the right thing, so the PR makes windows and !windows behave in the same way, and then adds the ability to configure the name of the log file from the outside.\n", "created_at": "2014-11-18T10:37:32Z" } ], "number": 8471, "title": "Non-existing location for default gc.log file" }
{ "body": "Enabling GC logging works now by setting the environment variable ES_GC_LOG_FILE\nto the full path to the GC log file. Missing directories will be created as needed.\n\nThe ES_USE_GC_LOGGING environment variable is no longer used.\n\nCloses #8471\nCloses #8479\n\nNote that the default name of the file was already proper for Windows, so the fix for #8471 effectively is \"do what windows did\".\n", "number": 8479, "review_comments": [], "title": "Allow configuration of the GC log file via an environment variable" }
{ "commits": [ { "message": "Allow configuration of the GC log file via an environment variable\n\nEnabling GC logging works now by setting the environment variable ES_GC_LOG_FILE\nto the full path to the GC log file. Missing directories will be created as needed.\n\nThe ES_USE_GC_LOGGING environment variable is no longer used.\n\nCloses #8471" } ], "files": [ { "diff": "@@ -96,7 +96,7 @@ if [ \"x$ES_INCLUDE\" = \"x\" ]; then\n /usr/local/share/elasticsearch/elasticsearch.in.sh \\\n /opt/elasticsearch/elasticsearch.in.sh \\\n ~/.elasticsearch.in.sh \\\n- $ES_HOME/bin/elasticsearch.in.sh \\\n+ \"$ES_HOME/bin/elasticsearch.in.sh\" \\\n \"`dirname \"$0\"`\"/elasticsearch.in.sh; do\n if [ -r \"$include\" ]; then\n . \"$include\"\n@@ -151,13 +151,13 @@ launch_service()\n # The es-foreground option will tell Elasticsearch not to close stdout/stderr, but it's up to us not to daemonize.\n if [ \"x$daemonized\" = \"x\" ]; then\n es_parms=\"$es_parms -Des.foreground=yes\"\n- exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS $es_parms -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" $props \\\n+ eval exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS $es_parms \"\\\"-Des.path.home=$ES_HOME\\\"\" -cp \"\\\"$ES_CLASSPATH\\\"\" $props \\\n org.elasticsearch.bootstrap.Elasticsearch\n # exec without running it in the background, makes it replace this shell, we'll never get here...\n # no need to return something\n else\n # Startup Elasticsearch, background it, and write the pid.\n- exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS $es_parms -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" $props \\\n+ eval exec \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS $es_parms \"\\\"-Des.path.home=$ES_HOME\\\"\" -cp \"\\\"$ES_CLASSPATH\\\"\" $props \\\n org.elasticsearch.bootstrap.Elasticsearch <&- &\n return $?\n fi\n@@ -207,7 +207,7 @@ eval set -- \"$args\"\n while true; do\n case $1 in\n -v)\n- \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS $es_parms -Des.path.home=\"$ES_HOME\" -cp \"$ES_CLASSPATH\" $props \\\n+ eval \"$JAVA\" $JAVA_OPTS $ES_JAVA_OPTS $es_parms \"\\\"-Des.path.home=$ES_HOME\\\"\" -cp \"\\\"$ES_CLASSPATH\\\"\" $props \\\n org.elasticsearch.Version\n exit 0\n ;;", "filename": "bin/elasticsearch", "status": "modified" }, { "diff": "@@ -59,12 +59,19 @@ set JAVA_OPTS=%JAVA_OPTS% -XX:+UseCMSInitiatingOccupancyOnly\n REM When running under Java 7\n REM JAVA_OPTS=%JAVA_OPTS% -XX:+UseCondCardMark\n \n-if NOT \"%ES_USE_GC_LOGGING%\" == \"\" set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCDetails\n-if NOT \"%ES_USE_GC_LOGGING%\" == \"\" set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCTimeStamps\n-if NOT \"%ES_USE_GC_LOGGING%\" == \"\" set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintClassHistogram\n-if NOT \"%ES_USE_GC_LOGGING%\" == \"\" set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintTenuringDistribution\n-if NOT \"%ES_USE_GC_LOGGING%\" == \"\" set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCApplicationStoppedTime\n-if NOT \"%ES_USE_GC_LOGGING%\" == \"\" set JAVA_OPTS=%JAVA_OPTS% -Xloggc:%ES_HOME%/logs/gc.log\n+if \"%ES_GC_LOG_FILE%\" == \"\" goto nogclog\n+\n+:gclog\n+set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCDetails\n+set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCTimeStamps\n+set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintClassHistogram\n+set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintTenuringDistribution\n+set JAVA_OPTS=%JAVA_OPTS% -XX:+PrintGCApplicationStoppedTime\n+set JAVA_OPTS=%JAVA_OPTS% -Xloggc:%ES_GC_LOG_FILE%\n+for %%F in (\"%ES_GC_LOG_FILE%\") do set ES_GC_LOG_FILE_DIRECTORY=%%~dpF\n+if NOT EXIST \"%ES_GC_LOG_FILE_DIRECTORY%\\.\" mkdir \"%ES_GC_LOG_FILE_DIRECTORY%\"\n+\n+:nogclog\n \n REM Causes the JVM to dump its heap on 
OutOfMemory.\n set JAVA_OPTS=%JAVA_OPTS% -XX:+HeapDumpOnOutOfMemoryError", "filename": "bin/elasticsearch.in.bat", "status": "modified" }, { "diff": "@@ -1,6 +1,6 @@\n #!/bin/sh\n \n-ES_CLASSPATH=$ES_CLASSPATH:$ES_HOME/lib/${project.build.finalName}.jar:$ES_HOME/lib/*:$ES_HOME/lib/sigar/*\n+ES_CLASSPATH=\"$ES_CLASSPATH:$ES_HOME/lib/${project.build.finalName}.jar:$ES_HOME/lib/*:$ES_HOME/lib/sigar/*\"\n \n if [ \"x$ES_MIN_MEM\" = \"x\" ]; then\n ES_MIN_MEM=256m\n@@ -45,13 +45,16 @@ JAVA_OPTS=\"$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly\"\n \n # GC logging options\n-if [ \"x$ES_USE_GC_LOGGING\" != \"x\" ]; then\n+if [ -n \"$ES_GC_LOG_FILE\" ]; then\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintGCDetails\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintGCTimeStamps\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintClassHistogram\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintTenuringDistribution\"\n JAVA_OPTS=\"$JAVA_OPTS -XX:+PrintGCApplicationStoppedTime\"\n- JAVA_OPTS=\"$JAVA_OPTS -Xloggc:/var/log/elasticsearch/gc.log\"\n+ JAVA_OPTS=\"$JAVA_OPTS \\\"-Xloggc:$ES_GC_LOG_FILE\\\"\"\n+\n+ # Ensure that the directory for the log file exists: the JVM will not create it.\n+ mkdir -p \"`dirname \\\"$ES_GC_LOG_FILE\\\"`\"\n fi\n \n # Causes the JVM to dump its heap on OutOfMemory.", "filename": "bin/elasticsearch.in.sh", "status": "modified" }, { "diff": "@@ -26,6 +26,7 @@ Each package features a configuration file, which allows you to set the followin\n `CONF_FILE`:: Path to configuration file, defaults to `/etc/elasticsearch/elasticsearch.yml`\n `ES_JAVA_OPTS`:: Any additional java options you may want to apply. This may be useful, if you need to set the `node.name` property, but do not want to change the `elasticsearch.yml` configuration file, because it is distributed via a provisioning system like puppet or chef. Example: `ES_JAVA_OPTS=\"-Des.node.name=search-01\"`\n `RESTART_ON_UPGRADE`:: Configure restart on package upgrade, defaults to `false`. This means you will have to restart your elasticsearch instance after installing a package manually. The reason for this is to ensure, that upgrades in a cluster do not result in a continuous shard reallocation resulting in high network traffic and reducing the response times of your cluster.\n+`ES_GC_LOG_FILE` :: The absolute log file path for creating a garbage collection logfile, which is done by the JVM. 
Note that this logfile can grow pretty quick and thus is disabled by default.\n \n [float]\n ==== Debian/Ubuntu", "filename": "docs/reference/setup/as-a-service.asciidoc", "status": "modified" }, { "diff": "@@ -42,3 +42,6 @@\n \n # Configure restart on package upgrade (true, every other setting will lead to not restarting)\n #RESTART_ON_UPGRADE=true\n+\n+# Path to the GC log file\n+#ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log", "filename": "src/deb/default/elasticsearch", "status": "modified" }, { "diff": "@@ -94,6 +94,9 @@ CONF_FILE=$CONF_DIR/elasticsearch.yml\n # Maximum number of VMA (Virtual Memory Areas) a process can own\n MAX_MAP_COUNT=262144\n \n+# Path to the GC log file\n+#ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log\n+\n # End of variables that can be overwritten in $DEFAULT\n \n # overwrite settings from default file\n@@ -110,6 +113,7 @@ export ES_HEAP_SIZE\n export ES_HEAP_NEWSIZE\n export ES_DIRECT_SIZE\n export ES_JAVA_OPTS\n+export ES_GC_LOG_FILE\n \n # Check DAEMON exists\n test -x $DAEMON || exit 0", "filename": "src/deb/init.d/elasticsearch", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@ export ES_HEAP_SIZE\n export ES_HEAP_NEWSIZE\n export ES_DIRECT_SIZE\n export ES_JAVA_OPTS\n+export ES_GC_LOG_FILE\n export JAVA_HOME\n \n lockfile=/var/lock/subsys/$prog\n@@ -84,6 +85,8 @@ start() {\n mkdir -p \"$WORK_DIR\"\n chown \"$ES_USER\":\"$ES_GROUP\" \"$WORK_DIR\"\n fi\n+ export ES_GC_LOG_FILE\n+\n echo -n $\"Starting $prog: \"\n # if not running, start it up here, usually something like \"daemon $exec\"\n daemon --user $ES_USER --pidfile $pidfile $exec -p $pidfile -d -Des.default.path.home=$ES_HOME -Des.default.path.logs=$LOG_DIR -Des.default.path.data=$DATA_DIR -Des.default.path.work=$WORK_DIR -Des.default.path.conf=$CONF_DIR", "filename": "src/rpm/init.d/elasticsearch", "status": "modified" }, { "diff": "@@ -44,3 +44,6 @@ ES_USER=elasticsearch\n \n # Configure restart on package upgrade (true, every other setting will lead to not restarting)\n #RESTART_ON_UPGRADE=true\n+\n+# Path to the GC log file\n+#ES_GC_LOG_FILE=/var/log/elasticsearch/gc.log", "filename": "src/rpm/sysconfig/elasticsearch", "status": "modified" } ] }
{ "body": "For a query like this one:\n\n``` json\n{\n \"query\": {\n \"geo_shape\": {\n \"location\": {\n \"shape\": {\n \"type\": \"polygon\",\n \"coordinates\": [[[\"4.901238\",\"52.36936\"]]]\n }\n }\n }\n }\n}\n```\n\nAn \"ArithmeticException: / by zero\" is returned enclosed in a SearchParseException:\n\n```\nCaused by: java.lang.ArithmeticException: / by zero\n at org.elasticsearch.common.geo.builders.ShapeBuilder$Edge.ring(ShapeBuilder.java:460)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.createEdges(BasePolygonBuilder.java:442)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.coordinates(BasePolygonBuilder.java:129)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.buildGeometry(BasePolygonBuilder.java:170)\n at org.elasticsearch.common.geo.builders.BasePolygonBuilder.build(BasePolygonBuilder.java:146)\n at org.elasticsearch.index.query.GeoShapeQueryParser.getArgs(GeoShapeQueryParser.java:173)\n at org.elasticsearch.index.query.GeoShapeQueryParser.parse(GeoShapeQueryParser.java:155)\n at org.elasticsearch.index.query.QueryParseContext.parseInnerQuery(QueryParseContext.java:252)\n at org.elasticsearch.index.query.IndexQueryParserService.innerParse(IndexQueryParserService.java:382)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:281)\n at org.elasticsearch.index.query.IndexQueryParserService.parse(IndexQueryParserService.java:276)\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:665)\n```\n\nIf I increase the number of coordinates for the polygon in the query to two, a more acceptable and meaningful exception is being thrown: \"IllegalArgumentException[Invalid number of points in LinearRing (found 2 - must be 0 or >= 4)]\". Probably, the same exception should be thrown and returned in case of just one set of coordinates.\n", "comments": [], "number": 8433, "title": "\"ArithmeticException[/ by zero]\" when parsing a \"polygon\" \"geo_shape\" query with one pair of coordinates" }
{ "body": "While this commit is primariy a fix for issue/8433 it adds more rigor to ShapeBuilder for parsing against the GeoJSON specification. Specifically, this adds LinearRing and LineString validity checks as defined in http://geojson.org/geojson-spec.html to ensure valid polygons are specified. The benefit of this fix is to provide a gate check at parse time to avoid any further processing if an invalid GeoJSON is provided. More parse checks like these will be necessary going forward to ensure full compliance with the GeoJSON specification.\n\nCloses #8433\n", "number": 8475, "review_comments": [ { "body": "This statement and the message below don't seem to correlate. Should we not also check if `coordinates.children.size() == 0` here? Also from the linked spec it looks like 0 points would not be valid anyway?\n", "created_at": "2014-11-14T08:57:57Z" }, { "body": "I think the if and else if should be swapped here as, if there are no children, it looks like we would throw an exception. Again, should a LinearRing with no points be valid?\n", "created_at": "2014-11-14T09:02:37Z" }, { "body": "Would it be better to use the Assert.fail(String) method or throw an AssertionError here? That way the test will fail correctly in the test framework\n", "created_at": "2014-11-14T09:06:13Z" }, { "body": "Correct. There's a discrepancy between JTS and GeoJSON. JTS treats 0 points as a valid linear ring/linestring where GeoJSON does not. If we're going to conform to GeoJSON (which I think is the right approach - but open for thoughts) then I should insert the check for 0 points.\n", "created_at": "2014-11-14T13:43:42Z" }, { "body": "Sorry, should _not_ insert the check for 0 points. The current check would be valid per the GeoJSON spec - I'll change the exception message. \n", "created_at": "2014-11-14T13:55:59Z" }, { "body": "Good catch. I'll keep the if/else in this order but remove the check for 0 points and fix the exception message as a LinearRing (per GeoJSON) is invalid. \n", "created_at": "2014-11-14T14:00:39Z" }, { "body": "Good call. Making change.\n", "created_at": "2014-11-14T14:05:18Z" }, { "body": "Could possibly use `assertThat(e, org.hamcrest.Matchers.isA(ElasticsearchParseException));` here instead but I don't have a strong opinion about this either way.\n", "created_at": "2014-11-14T14:35:56Z" }, { "body": "Main thing here is that the test fails rather than errors on this line\n", "created_at": "2014-11-14T14:37:31Z" }, { "body": "Should we now add a test for an invalid polygon with 0 points?\n", "created_at": "2014-11-14T14:38:57Z" }, { "body": "The following coordinate arrays are considered valid GeoJSON:\n\n``` java\n{ \n \"shape\": {\n \"type\": \"polygon\",\n \"coordinates\": [[[]]]\n }\n}\n\n{ \n \"shape\": {\n \"type\": \"polygon\",\n \"coordinates\": [[[null, null]]]\n }\n}\n```\n\nThe code, however, threw jackson parse exception and number format exceptions, respectively. The code has been updated to throw a more meaningful ParseException and IllegalArgumentException for these cases to help the end user diagnose the actual issue.\n", "created_at": "2014-11-14T16:30:47Z" } ], "title": "Fix for ArithmeticException[/ by zero] when parsing a polygon" }
{ "commits": [ { "message": "[GEO] Fix for ArithmeticException[/ by zero] when parsing a \"polygon\" with one pair of coordinates\n\nWhile this commit is primariy a fix for issue/8433 it adds more rigor to ShapeBuilder for parsing against the GeoJSON specification. Specifically, this adds LinearRing and LineString validity checks as defined in http://geojson.org/geojson-spec.html to ensure valid polygons are specified. The benefit of this fix is to provide a gate check at parse time to avoid any further processing if an invalid GeoJSON is provided. More parse checks like these will be necessary going forward to ensure full compliance with the GeoJSON specification.\n\nCloses #8433" }, { "message": "Correcting coordinate checks on LinearRing and LineString, updating test" }, { "message": "Adding parse gates for valid GeoJSON coordinates. Includes unit tests." }, { "message": "Updating to throw IllegalArgument exception for null value coordinates. Tests included." } ], "files": [ { "diff": "@@ -206,13 +206,17 @@ public String toString() {\n private static CoordinateNode parseCoordinates(XContentParser parser) throws IOException {\n XContentParser.Token token = parser.nextToken();\n \n- // Base case\n- if (token != XContentParser.Token.START_ARRAY) {\n+ // Base cases\n+ if (token != XContentParser.Token.START_ARRAY && \n+ token != XContentParser.Token.END_ARRAY && \n+ token != XContentParser.Token.VALUE_NULL) {\n double lon = parser.doubleValue();\n token = parser.nextToken();\n double lat = parser.doubleValue();\n token = parser.nextToken();\n return new CoordinateNode(new Coordinate(lon, lat));\n+ } else if (token == XContentParser.Token.VALUE_NULL) {\n+ throw new ElasticsearchIllegalArgumentException(\"coordinates cannot contain NULL values)\");\n }\n \n List<CoordinateNode> nodes = new ArrayList<>();\n@@ -625,6 +629,16 @@ protected static MultiPointBuilder parseMultiPoint(CoordinateNode coordinates) {\n }\n \n protected static LineStringBuilder parseLineString(CoordinateNode coordinates) {\n+ /**\n+ * Per GeoJSON spec (http://geojson.org/geojson-spec.html#linestring)\n+ * \"coordinates\" member must be an array of two or more positions\n+ * LineStringBuilder should throw a graceful exception if < 2 coordinates/points are provided\n+ */\n+ if (coordinates.children.size() < 2) {\n+ throw new ElasticsearchParseException(\"Invalid number of points in LineString (found \" +\n+ coordinates.children.size() + \" - must be >= 2)\");\n+ }\n+\n LineStringBuilder line = newLineString();\n for (CoordinateNode node : coordinates.children) {\n line.point(node.coordinate);\n@@ -640,11 +654,28 @@ protected static MultiLineStringBuilder parseMultiLine(CoordinateNode coordinate\n return multiline;\n }\n \n+ protected static LineStringBuilder parseLinearRing(CoordinateNode coordinates) {\n+ /**\n+ * Per GeoJSON spec (http://geojson.org/geojson-spec.html#linestring)\n+ * A LinearRing is closed LineString with 4 or more positions. The first and last positions\n+ * are equivalent (they represent equivalent points). 
Though a LinearRing is not explicitly\n+ * represented as a GeoJSON geometry type, it is referred to in the Polygon geometry type definition.\n+ */\n+ if (coordinates.children.size() < 4) {\n+ throw new ElasticsearchParseException(\"Invalid number of points in LinearRing (found \" +\n+ coordinates.children.size() + \" - must be >= 4)\");\n+ } else if (!coordinates.children.get(0).coordinate.equals(\n+ coordinates.children.get(coordinates.children.size() - 1).coordinate)) {\n+ throw new ElasticsearchParseException(\"Invalid LinearRing found (coordinates are not closed)\");\n+ }\n+ return parseLineString(coordinates);\n+ }\n+\n protected static PolygonBuilder parsePolygon(CoordinateNode coordinates) {\n- LineStringBuilder shell = parseLineString(coordinates.children.get(0));\n+ LineStringBuilder shell = parseLinearRing(coordinates.children.get(0));\n PolygonBuilder polygon = new PolygonBuilder(shell.points);\n for (int i = 1; i < coordinates.children.size(); i++) {\n- polygon.hole(parseLineString(coordinates.children.get(i)));\n+ polygon.hole(parseLinearRing(coordinates.children.get(i)));\n }\n return polygon;\n }", "filename": "src/main/java/org/elasticsearch/common/geo/builders/ShapeBuilder.java", "status": "modified" }, { "diff": "@@ -26,6 +26,8 @@\n import com.spatial4j.core.shape.jts.JtsGeometry;\n import com.spatial4j.core.shape.jts.JtsPoint;\n import com.vividsolutions.jts.geom.*;\n+import org.elasticsearch.ElasticsearchIllegalArgumentException;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.builders.ShapeBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -155,6 +157,76 @@ public void testParse_polygonNoHoles() throws IOException {\n assertGeometryEquals(jtsGeom(expected), polygonGeoJson);\n }\n \n+ @Test\n+ public void testParse_invalidPolygon() throws IOException {\n+ /**\n+ * The following 3 test cases ensure proper error handling of invalid polygons \n+ * per the GeoJSON specification\n+ */\n+ // test case 1: create an invalid polygon with only 2 points\n+ String invalidPoly1 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(-74.011).value(40.753).endArray()\n+ .startArray().value(-75.022).value(41.783).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+ XContentParser parser = JsonXContent.jsonXContent.createParser(invalidPoly1);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+\n+ // test case 2: create an invalid polygon with only 1 point\n+ String invalidPoly2 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().value(-74.011).value(40.753).endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(invalidPoly2);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+\n+ // test case 3: create an invalid polygon with 0 points\n+ String invalidPoly3 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(invalidPoly3);\n+ parser.nextToken();\n+ 
ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchParseException.class);\n+\n+ // test case 4: create an invalid polygon with null value points\n+ String invalidPoly4 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"polygon\")\n+ .startArray(\"coordinates\")\n+ .startArray()\n+ .startArray().nullValue().nullValue().endArray()\n+ .endArray()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(invalidPoly4);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchIllegalArgumentException.class);\n+\n+ // test case 5: create an invalid polygon with 1 invalid LinearRing\n+ String invalidPoly5 = XContentFactory.jsonBuilder().startObject().field(\"type\", \"polygon\")\n+ .startArray(\"coordinates\")\n+ .nullValue().nullValue()\n+ .endArray()\n+ .endObject().string();\n+\n+ parser = JsonXContent.jsonXContent.createParser(invalidPoly5);\n+ parser.nextToken();\n+ ElasticsearchGeoAssertions.assertValidException(parser, ElasticsearchIllegalArgumentException.class);\n+ }\n+\n @Test\n public void testParse_polygonWithHole() throws IOException {\n String polygonGeoJson = XContentFactory.jsonBuilder().startObject().field(\"type\", \"Polygon\")", "filename": "src/test/java/org/elasticsearch/common/geo/GeoJSONShapeParserTests.java", "status": "modified" }, { "diff": "@@ -26,9 +26,12 @@\n import com.spatial4j.core.shape.jts.JtsGeometry;\n import com.spatial4j.core.shape.jts.JtsPoint;\n import com.vividsolutions.jts.geom.*;\n+import org.elasticsearch.ElasticsearchParseException;\n import org.elasticsearch.common.geo.GeoDistance;\n import org.elasticsearch.common.geo.GeoPoint;\n+import org.elasticsearch.common.geo.builders.ShapeBuilder;\n import org.elasticsearch.common.unit.DistanceUnit;\n+import org.elasticsearch.common.xcontent.XContentParser;\n import org.hamcrest.Matcher;\n import org.junit.Assert;\n \n@@ -246,4 +249,13 @@ private static double distance(double lat1, double lon1, double lat2, double lon\n return GeoDistance.ARC.calculate(lat1, lon1, lat2, lon2, DistanceUnit.DEFAULT);\n }\n \n+ public static void assertValidException(XContentParser parser, Class expectedException) {\n+ try {\n+ ShapeBuilder.parse(parser);\n+ Assert.fail(\"process completed successfully when \" + expectedException.getName() + \" expected\");\n+ } catch (Exception e) {\n+ assert(e.getClass().equals(expectedException)):\n+ \"expected \" + expectedException.getName() + \" but found \" + e.getClass().getName();\n+ }\n+ }\n }", "filename": "src/test/java/org/elasticsearch/test/hamcrest/ElasticsearchGeoAssertions.java", "status": "modified" } ] }
{ "body": "In ES 1.4.0 it is no longer possible to update the mapping if _all.enabled has been set to false and _all isn't present in the updating request\n\ncreate index t:\n\n```\nhttp put localhost:9200/t/\n```\n\ncreate mapping:\n\n```\nhttp put localhost:9200/t/_mapping/default default:='{\"dynamic\": \"strict\", \"_all\": {\"enabled\": false}}'\n```\n\nupdate mapping:\n\n```\nhttp put localhost:9200/t/_mapping/default default:='{\"dynamic\": \"true\"}\n```\n\nFails with:\n\n```\n \"error\": \"MergeMappingException[Merge failed with failures {[mapper [_all] enabled is false now encountering true]}]\", \n```\n\nIf I include _all in the request it works as expected:\n\n```\nhttp put localhost:9200/t/_mapping/default default:='{\"dynamic\": \"true\", \"_all\": {\"enabled\": false}}'\n{\n \"acknowledged\": true\n}\n```\n\nIn 1.3.5 that was possible. \n", "comments": [ { "body": "@mfussenegger thanks for reporting. i've confirmed this.\n\nAlso, updating the `_default_` mapping will **remove** the `_all` mapping:\n\n```\nDELETE _all \n\nPUT t\n\nPUT /t/_mapping/_default_ \n{\n \"dynamic\": \"strict\",\n \"_all\": {\n \"enabled\": false\n }\n}\n```\n\nHere, the `_all` mapping is present:\n\n```\nGET /_mapping\n\nPUT /t/_mapping/_default_ \n{\n \"dynamic\": \"true\"\n}\n```\n\nAnd now it is gone:\n\n```\nGET /_mapping\n```\n", "created_at": "2014-11-10T15:21:51Z" }, { "body": "This is a bug indeed, thanks for reporting. opened pr #8426\n", "created_at": "2014-11-10T18:19:31Z" }, { "body": "There seems to be another issue with the `_all` field in combination with the `_default_` mapping:\n\nI recreated the issue here:\nhttps://gist.github.com/miccon/a4869fe04f9010015861#file-strictallmappingtest-java\n\nFailing on 21de386\n", "created_at": "2014-11-24T17:25:13Z" }, { "body": "@pkoenig10 thanks a lot for the test, this is incredibly helpful. It seems to me the issue is not in _all mapper but rather that the type is created when indexing the document even though the indexing fails. I will open a separate issue shortly.\n", "created_at": "2014-11-25T12:07:40Z" }, { "body": "Thanks for the quick reply. Another thing I also noticed is that closing and reopening the index after the failed indexing makes the test pass. So it may be that the type gets somehow created but not persisted.\n", "created_at": "2014-11-25T12:57:06Z" } ], "number": 8423, "title": "Mapping update fails if _all.enabled was set to false - MergeMappingException" }
{ "body": "_all reports a conflict since #7377. However, it was not checked if _all\nwas actually configured in the updated mapping. Therefore whenever _all\nwas disabled a mapping could not be updated unless _all was again added to the\nupdated mapping.\nAlso, add enabled setting to mapping always whenever enabled was set explicitely.\n\ncloses #8423\n", "number": 8426, "review_comments": [], "title": "Fix conflict when updating mapping with _all disabled" }
{ "commits": [ { "message": "[root mappers] fix conflict when updating mapping with _all disabled\n\n_all reports a conflict since #7377. However, it was not checked if _all\nwas actually configured in the updated mapping. Therefore whenever _all\nwas disabled a mapping could not be updated unless _all was again added to the\nupdated mapping.\nAlso, add enabled setting to mapping always whenever enabled was set explicitely.\n\ncloses #8423" } ], "files": [ { "diff": "@@ -74,7 +74,7 @@ public interface IncludeInAll extends Mapper {\n public static class Defaults extends AbstractFieldMapper.Defaults {\n public static final String NAME = AllFieldMapper.NAME;\n public static final String INDEX_NAME = AllFieldMapper.NAME;\n- public static final boolean ENABLED = true;\n+ public static final EnabledAttributeMapper ENABLED = EnabledAttributeMapper.UNSET_ENABLED;\n \n public static final FieldType FIELD_TYPE = new FieldType();\n \n@@ -87,7 +87,7 @@ public static class Defaults extends AbstractFieldMapper.Defaults {\n \n public static class Builder extends AbstractFieldMapper.Builder<Builder, AllFieldMapper> {\n \n- private boolean enabled = Defaults.ENABLED;\n+ private EnabledAttributeMapper enabled = Defaults.ENABLED;\n \n // an internal flag, automatically set if we encounter boosting\n boolean autoBoost = false;\n@@ -98,7 +98,7 @@ public Builder() {\n indexName = Defaults.INDEX_NAME;\n }\n \n- public Builder enabled(boolean enabled) {\n+ public Builder enabled(EnabledAttributeMapper enabled) {\n this.enabled = enabled;\n return this;\n }\n@@ -125,7 +125,7 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n String fieldName = Strings.toUnderscoreCase(entry.getKey());\n Object fieldNode = entry.getValue();\n if (fieldName.equals(\"enabled\")) {\n- builder.enabled(nodeBooleanValue(fieldNode));\n+ builder.enabled(nodeBooleanValue(fieldNode) ? EnabledAttributeMapper.ENABLED : EnabledAttributeMapper.DISABLED);\n } else if (fieldName.equals(\"auto_boost\")) {\n builder.autoBoost = nodeBooleanValue(fieldNode);\n }\n@@ -135,7 +135,7 @@ public Mapper.Builder parse(String name, Map<String, Object> node, ParserContext\n }\n \n \n- private boolean enabled;\n+ private EnabledAttributeMapper enabledState;\n // The autoBoost flag is automatically set based on indexed docs on the mappings\n // if a doc is indexed with a specific boost value and part of _all, it is automatically\n // set to true. 
This allows to optimize (automatically, which we like) for the common case\n@@ -148,21 +148,21 @@ public AllFieldMapper() {\n }\n \n protected AllFieldMapper(String name, FieldType fieldType, NamedAnalyzer indexAnalyzer, NamedAnalyzer searchAnalyzer,\n- boolean enabled, boolean autoBoost, PostingsFormatProvider postingsProvider,\n+ EnabledAttributeMapper enabled, boolean autoBoost, PostingsFormatProvider postingsProvider,\n DocValuesFormatProvider docValuesProvider, SimilarityProvider similarity, Loading normsLoading,\n @Nullable Settings fieldDataSettings, Settings indexSettings) {\n super(new Names(name, name, name, name), 1.0f, fieldType, null, indexAnalyzer, searchAnalyzer, postingsProvider, docValuesProvider,\n similarity, normsLoading, fieldDataSettings, indexSettings);\n if (hasDocValues()) {\n throw new MapperParsingException(\"Field [\" + names.fullName() + \"] is always tokenized and cannot have doc values\");\n }\n- this.enabled = enabled;\n+ this.enabledState = enabled;\n this.autoBoost = autoBoost;\n \n }\n \n public boolean enabled() {\n- return this.enabled;\n+ return this.enabledState.enabled;\n }\n \n @Override\n@@ -212,7 +212,7 @@ public boolean includeInObject() {\n \n @Override\n protected void parseCreateField(ParseContext context, List<Field> fields) throws IOException {\n- if (!enabled) {\n+ if (!enabledState.enabled) {\n return;\n }\n // reset the entries\n@@ -279,8 +279,8 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws\n }\n \n private void innerToXContent(XContentBuilder builder, boolean includeDefaults) throws IOException {\n- if (includeDefaults || enabled != Defaults.ENABLED) {\n- builder.field(\"enabled\", enabled);\n+ if (includeDefaults || enabledState != Defaults.ENABLED) {\n+ builder.field(\"enabled\", enabledState.enabled);\n }\n if (includeDefaults || autoBoost != false) {\n builder.field(\"auto_boost\", autoBoost);\n@@ -349,7 +349,7 @@ private void innerToXContent(XContentBuilder builder, boolean includeDefaults) t\n \n @Override\n public void merge(Mapper mergeWith, MergeContext mergeContext) throws MergeMappingException {\n- if (((AllFieldMapper)mergeWith).enabled() != this.enabled()) {\n+ if (((AllFieldMapper)mergeWith).enabled() != this.enabled() && ((AllFieldMapper)mergeWith).enabledState != Defaults.ENABLED) {\n mergeContext.addConflict(\"mapper [\" + names.fullName() + \"] enabled is \" + this.enabled() + \" now encountering \"+ ((AllFieldMapper)mergeWith).enabled());\n }\n super.merge(mergeWith, mergeContext);", "filename": "src/main/java/org/elasticsearch/index/mapper/internal/AllFieldMapper.java", "status": "modified" }, { "diff": "@@ -30,6 +30,7 @@\n import org.junit.Test;\n \n import java.io.IOException;\n+import java.util.HashMap;\n import java.util.LinkedHashMap;\n \n import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n@@ -70,6 +71,54 @@ public void test_all_conflicts() throws Exception {\n testConflict(mapping, mappingUpdate, errorMessage);\n }\n \n+\n+ @Test\n+ public void test_all_with_default() throws Exception {\n+ String defaultMapping = jsonBuilder().startObject().startObject(\"_default_\")\n+ .startObject(\"_all\")\n+ .field(\"enabled\", false)\n+ .endObject()\n+ .endObject().endObject().string();\n+ client().admin().indices().prepareCreate(\"index\").addMapping(\"_default_\", defaultMapping).get();\n+ String docMapping = jsonBuilder().startObject()\n+ .startObject(\"doc\")\n+ .endObject()\n+ .endObject().string();\n+ PutMappingResponse response = 
client().admin().indices().preparePutMapping(\"index\").setType(\"doc\").setSource(docMapping).get();\n+ assertTrue(response.isAcknowledged());\n+ String docMappingUpdate = jsonBuilder().startObject().startObject(\"doc\")\n+ .startObject(\"properties\")\n+ .startObject(\"text\")\n+ .field(\"type\", \"string\")\n+ .endObject()\n+ .endObject()\n+ .endObject()\n+ .endObject().string();\n+ response = client().admin().indices().preparePutMapping(\"index\").setType(\"doc\").setSource(docMappingUpdate).get();\n+ assertTrue(response.isAcknowledged());\n+ String docMappingAllExplicitEnabled = jsonBuilder().startObject()\n+ .startObject(\"doc_all_enabled\")\n+ .startObject(\"_all\")\n+ .field(\"enabled\", true)\n+ .endObject()\n+ .endObject()\n+ .endObject().string();\n+ response = client().admin().indices().preparePutMapping(\"index\").setType(\"doc_all_enabled\").setSource(docMappingAllExplicitEnabled).get();\n+ assertTrue(response.isAcknowledged());\n+\n+ GetMappingsResponse mapping = client().admin().indices().prepareGetMappings(\"index\").get();\n+ HashMap props = (HashMap)mapping.getMappings().get(\"index\").get(\"doc\").getSourceAsMap().get(\"_all\");\n+ assertThat((Boolean)props.get(\"enabled\"), equalTo(false));\n+ props = (HashMap)mapping.getMappings().get(\"index\").get(\"doc\").getSourceAsMap().get(\"properties\");\n+ assertNotNull(props);\n+ assertNotNull(props.get(\"text\"));\n+ props = (HashMap)mapping.getMappings().get(\"index\").get(\"doc_all_enabled\").getSourceAsMap().get(\"_all\");\n+ assertThat((Boolean)props.get(\"enabled\"), equalTo(true));\n+ props = (HashMap)mapping.getMappings().get(\"index\").get(\"_default_\").getSourceAsMap().get(\"_all\");\n+ assertThat((Boolean)props.get(\"enabled\"), equalTo(false));\n+\n+ }\n+\n @Test\n public void test_doc_valuesInvalidMapping() throws Exception {\n String mapping = jsonBuilder().startObject().startObject(\"mappings\").startObject(TYPE).startObject(\"_all\").startObject(\"fielddata\").field(\"format\", \"doc_values\").endObject().endObject().endObject().endObject().endObject().string();", "filename": "src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingOnClusterTests.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ public void test_all_disabled_after_default_enabled() throws Exception {\n public void test_all_enabled_after_enabled() throws Exception {\n XContentBuilder mapping = XContentFactory.jsonBuilder().startObject().startObject(\"_all\").field(\"enabled\", true).endObject().endObject();\n XContentBuilder mappingUpdate = XContentFactory.jsonBuilder().startObject().startObject(\"_all\").field(\"enabled\", true).endObject().startObject(\"properties\").startObject(\"text\").field(\"type\", \"string\").endObject().endObject().endObject();\n- XContentBuilder expectedMapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"properties\").startObject(\"text\").field(\"type\", \"string\").endObject().endObject().endObject().endObject();\n+ XContentBuilder expectedMapping = XContentFactory.jsonBuilder().startObject().startObject(\"type\").startObject(\"_all\").field(\"enabled\", true).endObject().startObject(\"properties\").startObject(\"text\").field(\"type\", \"string\").endObject().endObject().endObject().endObject();\n testNoConflictWhileMergingAndMappingChanged(mapping, mappingUpdate, expectedMapping);\n }\n ", "filename": "src/test/java/org/elasticsearch/index/mapper/update/UpdateMappingTests.java", "status": "modified" } ] }
{ "body": "In #6864, `score()` was removed from `AbstractSearchScript`, in favor of using `doc().score()`. However, in #7819, `score()` was removed from `DocLookup`. While native scripts can still override `setScorer(Scorer)`, we should make it easier to access the score by keeping stashing the scorer in `AbstractScoreScript` for them.\n", "comments": [ { "body": "LGTM\n", "created_at": "2014-11-10T08:43:10Z" } ], "number": 8416, "title": "Add `score()` back to `AbstractSearchScript`" }
{ "body": "See #8377\ncloses #8416\n", "number": 8417, "review_comments": [], "title": "Add score() back to AbstractSearchScript" }
{ "commits": [ { "message": "Scripting: Add score() back to AbstractSearchScript\n\nSee #8377\ncloses #8416" } ], "files": [ { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.index.fielddata.ScriptDocValues;\n import org.elasticsearch.search.lookup.*;\n \n+import java.io.IOException;\n import java.util.Map;\n \n /**\n@@ -38,6 +39,7 @@\n public abstract class AbstractSearchScript extends AbstractExecutableScript implements SearchScript {\n \n private SearchLookup lookup;\n+ private Scorer scorer;\n \n /**\n * Returns the doc lookup allowing to access field data (cached) values as well as the current document score\n@@ -47,6 +49,13 @@ protected final DocLookup doc() {\n return lookup.doc();\n }\n \n+ /**\n+ * Returns the current score and only applicable when used as a scoring script in a custom score query!.\n+ */\n+ protected final float score() throws IOException {\n+ return scorer.score();\n+ }\n+\n /**\n * Returns field data strings access for the provided field.\n */\n@@ -95,7 +104,7 @@ void setLookup(SearchLookup lookup) {\n \n @Override\n public void setScorer(Scorer scorer) {\n- throw new UnsupportedOperationException();\n+ this.scorer = scorer;\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/script/AbstractSearchScript.java", "status": "modified" } ] }
{ "body": "Hi y'all, \n\nAfter upgrading to 1.4.0, I failed to compile my native(Java) score functions which worked well with 1.3.4.\nThe first issue I ran into is that my function extended `AbstractDoubleSearchScript` and call `score()` to get document score from Lucene, apparently it won't work now with 1.4.0.\n\nI wonder if any of you could tell me which method should be called to get the original score of one document. An example of my function looks like:\n\n```\npublic class BasicScorer extends AbstractDoubleSearchScript {\n\n public static final String NAME = \"basic-scorer\";\n\n public static class Factory implements NativeScriptFactory {\n @Override\n public ExecutableScript newScript(@Nullable Map<String, Object> params) {\n return new BasicScorer();\n }\n\n private BasicScorer() {\n }\n\n @Override\n public double runAsDouble() {\n return score() * ((ScriptDocValues.Doubles) doc().get(\"rank\")).getValue();\n }\n}\n```\n\nThe second issue is when I just remove the 'score()' method and query ElasticSearch with following parameter:\n\n```\n...\n \"script_score\": {\n \"script\": \"basic-scorer\",\n \"lang\": \"native\"\n }\n...\n```\n\nfollowing error was logged:\n\n```\n at org.elasticsearch.search.query.QueryParseElement.parse(QueryParseElement.java:33)\n at org.elasticsearch.search.SearchService.parseSource(SearchService.java:665)\n ... 9 more\nCaused by: java.lang.UnsupportedOperationException\n at org.elasticsearch.script.AbstractSearchScript.setScorer(AbstractSearchScript.java:104)\n at org.elasticsearch.common.lucene.search.function.ScriptScoreFunction.<init>(ScriptScoreFunction.java:86)\n\n```\n\nI noticed there were some groovy script function changes, but I don't think that should affect the interface of native score function.\n\nRegards,\nAndy\n", "comments": [ { "body": "Same problem here, and I can not find a way (efficient way) to get the score lucene provided within a native score script.\n", "created_at": "2014-11-07T12:17:13Z" }, { "body": "It looks like it is to do with this change: https://github.com/elasticsearch/elasticsearch/pull/7819\n\n@brwe can you shed more light here please?\n", "created_at": "2014-11-07T13:26:50Z" }, { "body": "I think I have the same issue, I wrote my plugin with the instruction in this repo:\nhttps://github.com/imotov/elasticsearch-native-script-example\nand also got `UnsupportedOperationException` in the log.\n", "created_at": "2014-11-08T08:33:13Z" }, { "body": "I too suffer from the issue\n", "created_at": "2014-11-09T15:37:00Z" }, { "body": "In 1.3 and before, there was a weird dual API for setting the score (see #6864). The `score()` function was removed there.\n\nTo access the score from a native script, you should `@Override setScorer(Scorer)` and call `scorer.score()` on when you want the score. \n", "created_at": "2014-11-10T01:23:03Z" }, { "body": "After looking at this a little more, the original intention was the score could be accessed with `doc().score()`, but that was removed with #7819. 
I've opened #8416 to add back `score()` for `AbstractSearchScript`.\n", "created_at": "2014-11-10T01:45:01Z" }, { "body": "@kilnyy, @noamt elasticsearch-native-script-example is updated with [a temporary work-around](https://github.com/imotov/elasticsearch-native-script-example/commit/1254c4837de635a42e43b2bdbbd5ef70b621b010) until 1.4.1 is released.\n", "created_at": "2014-11-10T06:59:26Z" }, { "body": "Cheers!\n", "created_at": "2014-11-10T07:03:41Z" } ], "number": 8377, "title": "1.4.0 failed to compile and run native script score function" }
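Until the helper is restored by the PR in the next record, the workaround described in the comments above is to override `setScorer(Scorer)` in the native script and call `scorer.score()` directly. A sketch of the BasicScorer example adjusted that way (the `rank` field and the scaling logic come from the example above; the exception handling is a guess at what a production script would do):

``` java
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.search.Scorer;
import org.elasticsearch.common.Nullable;
import org.elasticsearch.index.fielddata.ScriptDocValues;
import org.elasticsearch.script.AbstractDoubleSearchScript;
import org.elasticsearch.script.ExecutableScript;
import org.elasticsearch.script.NativeScriptFactory;

public class BasicScorerWorkaround extends AbstractDoubleSearchScript {

    public static class Factory implements NativeScriptFactory {
        @Override
        public ExecutableScript newScript(@Nullable Map<String, Object> params) {
            return new BasicScorerWorkaround();
        }
    }

    private Scorer scorer;

    @Override
    public void setScorer(Scorer scorer) {
        // Keep our own reference instead of relying on the removed score() helper.
        this.scorer = scorer;
    }

    @Override
    public double runAsDouble() {
        try {
            double rank = ((ScriptDocValues.Doubles) doc().get("rank")).getValue();
            return scorer.score() * rank;
        } catch (IOException e) {
            throw new RuntimeException("could not compute score", e);
        }
    }
}
```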
{ "body": "See #8377\ncloses #8416\n", "number": 8417, "review_comments": [], "title": "Add score() back to AbstractSearchScript" }
{ "commits": [ { "message": "Scripting: Add score() back to AbstractSearchScript\n\nSee #8377\ncloses #8416" } ], "files": [ { "diff": "@@ -24,6 +24,7 @@\n import org.elasticsearch.index.fielddata.ScriptDocValues;\n import org.elasticsearch.search.lookup.*;\n \n+import java.io.IOException;\n import java.util.Map;\n \n /**\n@@ -38,6 +39,7 @@\n public abstract class AbstractSearchScript extends AbstractExecutableScript implements SearchScript {\n \n private SearchLookup lookup;\n+ private Scorer scorer;\n \n /**\n * Returns the doc lookup allowing to access field data (cached) values as well as the current document score\n@@ -47,6 +49,13 @@ protected final DocLookup doc() {\n return lookup.doc();\n }\n \n+ /**\n+ * Returns the current score and only applicable when used as a scoring script in a custom score query!.\n+ */\n+ protected final float score() throws IOException {\n+ return scorer.score();\n+ }\n+\n /**\n * Returns field data strings access for the provided field.\n */\n@@ -95,7 +104,7 @@ void setLookup(SearchLookup lookup) {\n \n @Override\n public void setScorer(Scorer scorer) {\n- throw new UnsupportedOperationException();\n+ this.scorer = scorer;\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/script/AbstractSearchScript.java", "status": "modified" } ] }
{ "body": "See CorruptedCompressorTests for details on how this bug can be hit.\n", "comments": [ { "body": "Please disable unsafe encode/decode complete.\n- This may crash machines that don't allow unaligned reads: https://github.com/ning/compress/issues/18\n- if (SUNOS) does not imply its safe to do such unaligned reads.\n- This may corrupt data on bigendian systems: https://github.com/ning/compress/issues/37\n- We do not test such situations. \n", "created_at": "2014-08-08T19:38:02Z" }, { "body": "Ok, I think I addressed all the comments. The only unchanged thing is the license file, because I don't know which license to put in there (the original file had no license header).\n", "created_at": "2014-08-08T20:12:44Z" }, { "body": "The PR to the compress-lzf project was merged, and a 1.0.2 release was made. I removed the X encoder and made the upgrade to 1.0.2.\n", "created_at": "2014-08-09T19:03:45Z" }, { "body": "looks good, thanks Ryan.\n", "created_at": "2014-08-11T12:57:29Z" }, { "body": "+1 as well\n", "created_at": "2014-08-11T12:58:29Z" }, { "body": "Thanks. Pushed.\n", "created_at": "2014-08-11T14:29:18Z" }, { "body": "Upgrading from 1.1.1 to 1.6.0 and noticing this output from our cluster\n\n``````\ninsertOrder timeInQueue priority source\n 37659 27ms HIGH shard-failed ([callers][2], node[Ko3b9KsESN68lTkPtVrHKw], relocating [4mcZCKvBRoKQJS_StGNPng], [P], s[INITIALIZING]), reason [shard failure [failed recovery][RecoveryFailedException[[callers][2]: Recovery failed from [aws_el1][4mcZCKvBRoKQJS_StGNPng][ip-10-55-11-210][inet[/10.55.11.210:9300]]{rack=useast1, master=true, zone=zonea} into [aws_el1a][Ko3b9KsESN68lTkPtVrHKw][ip-10-55-11-211][inet[/10.55.11.211:9300]]{rack=useast1, zone=zonea, master=true} (unexpected error)]; nested: ElasticsearchIllegalStateException[Can't recovery from node [aws_el1][4mcZCKvBRoKQJS_StGNPng][ip-10-55-11-210][inet[/10.55.11.210:9300]]{rack=useast1, master=true, zone=zonea} with [indices.recovery.compress : true] due to compression bugs - see issue #7210 for details]; ]]```\n\nwhat do we do?\n``````\n", "created_at": "2015-06-18T20:00:30Z" }, { "body": "@taf2 Turn off compression before upgrading.\n", "created_at": "2015-06-18T20:02:47Z" }, { "body": "@rjernst thanks! which kind of compression do we disable...\n\nis it this option in\n\n/etc/elasticsearch/elasticsearch.yml\n#transport.tcp.compress: true\n\n?\n\nor another option?\n", "created_at": "2015-06-18T20:04:57Z" }, { "body": "okay sorry it looks like we need to disable indices.recovery.compress - but is this something that needs to be disabled on all nodes in the cluster or just the new 1.6.0 node we're starting up now?\n", "created_at": "2015-06-18T20:06:01Z" }, { "body": "All nodes in the cluster, before starting the upgrade. The problem is old nodes with this setting enabled would use the old buggy code, which can then cause data copied between and old and new node to become corrupted.\n", "created_at": "2015-06-18T20:07:08Z" }, { "body": "excellent thank you - we have run the following on the existing cluster:\n\n```\ncurl -XPUT localhost:9200/_cluster/settings -d '{\"transient\" : {\"indices.recovery.compress\" : false }}'\n```\n", "created_at": "2015-06-18T20:10:20Z" }, { "body": "Thank you that did the trick!\n", "created_at": "2015-06-18T20:12:39Z" } ], "number": 7210, "title": "Fix a very rare case of corruption in compression used for internal cluster communication." }
{ "body": "The compression bug fixed in #7210 can still strike us since we are\nrunning BWC test against these version. This commit disables compression\nforcefully if the compatibility version is < 1.3.2 to prevent debugging\nalready known issues.\n", "number": 8412, "review_comments": [], "title": "[TEST] Disable compression in BWC test for version < 1.3.2" }
{ "commits": [ { "message": "[TEST] Disable compression in BWC test for version < 1.3.2\n\nThe compression bug fixed in #7210 can still strike us since we are\nrunning BWC test against these version. This commit disables compression\nforcefully if the compatibility version is < 1.3.2 to prevent debugging\nalready known issues." } ], "files": [ { "diff": "@@ -27,6 +27,8 @@\n import org.elasticsearch.common.regex.Regex;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.indices.recovery.RecoverySettings;\n+import org.elasticsearch.transport.Transport;\n import org.elasticsearch.transport.TransportModule;\n import org.elasticsearch.transport.TransportService;\n import org.elasticsearch.transport.netty.NettyTransport;\n@@ -146,9 +148,16 @@ protected Settings requiredSettings() {\n }\n \n protected Settings nodeSettings(int nodeOrdinal) {\n- return ImmutableSettings.builder().put(requiredSettings())\n+ ImmutableSettings.Builder builder = ImmutableSettings.builder().put(requiredSettings())\n .put(TransportModule.TRANSPORT_TYPE_KEY, NettyTransport.class.getName()) // run same transport / disco as external\n- .put(TransportModule.TRANSPORT_SERVICE_TYPE_KEY, TransportService.class.getName()).build();\n+ .put(TransportModule.TRANSPORT_SERVICE_TYPE_KEY, TransportService.class.getName());\n+ if (compatibilityVersion().before(Version.V_1_3_2)) {\n+ // if we test against nodes before 1.3.2 we disable all the compression due to a known bug\n+ // see #7210\n+ builder.put(Transport.TransportSettings.TRANSPORT_TCP_COMPRESS, false)\n+ .put(RecoverySettings.INDICES_RECOVERY_COMPRESS, false);\n+ }\n+ return builder.build();\n }\n \n public void assertAllShardsOnNodes(String index, String pattern) {", "filename": "src/test/java/org/elasticsearch/test/ElasticsearchBackwardsCompatIntegrationTest.java", "status": "modified" }, { "diff": "@@ -442,13 +442,19 @@ private static ImmutableSettings.Builder setRandomSettings(Random random, Immuta\n builder.put(StoreModule.DISTIBUTOR_KEY, random.nextBoolean() ? 
StoreModule.LEAST_USED_DISTRIBUTOR : StoreModule.RANDOM_WEIGHT_DISTRIBUTOR);\n }\n \n+\n if (random.nextBoolean()) {\n if (random.nextInt(10) == 0) { // do something crazy slow here\n builder.put(RecoverySettings.INDICES_RECOVERY_MAX_BYTES_PER_SEC, new ByteSizeValue(RandomInts.randomIntBetween(random, 1, 10), ByteSizeUnit.MB));\n } else {\n builder.put(RecoverySettings.INDICES_RECOVERY_MAX_BYTES_PER_SEC, new ByteSizeValue(RandomInts.randomIntBetween(random, 10, 200), ByteSizeUnit.MB));\n }\n }\n+\n+ if (random.nextBoolean()) {\n+ builder.put(RecoverySettings.INDICES_RECOVERY_COMPRESS, randomBoolean());\n+ }\n+\n if (random.nextBoolean()) {\n builder.put(FsTranslog.INDEX_TRANSLOG_FS_TYPE, RandomPicks.randomFrom(random, FsTranslogFile.Type.values()).name());\n }\n@@ -465,7 +471,11 @@ private static ImmutableSettings.Builder setRandomSettings(Random random, Immuta\n builder.put(IndicesFieldDataCache.FIELDDATA_CACHE_CONCURRENCY_LEVEL, randomIntBetween(1, 32));\n builder.put(IndicesFilterCache.INDICES_CACHE_FILTER_CONCURRENCY_LEVEL, randomIntBetween(1, 32));\n }\n- \n+ if (globalCompatibilityVersion().before(Version.V_1_3_2)) {\n+ // if we test against nodes before 1.3.2 we disable all the compression due to a known bug\n+ // see #7210\n+ builder.put(RecoverySettings.INDICES_RECOVERY_COMPRESS, false);\n+ }\n return builder;\n }\n ", "filename": "src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java", "status": "modified" } ] }
{ "body": "Is the example of filtered query template [1] working? It seems to contain extra `</6>` string which seems as an issue to me. I am unable to test this example even after I remove this extra string.\n\n[1] http://www.elasticsearch.org/guide/en/elasticsearch/reference/1.3/search-template.html#_conditional_clauses\n", "comments": [ { "body": "I was testing it with ES 1.3.0\n", "created_at": "2014-10-31T13:04:29Z" }, { "body": "@clintongormley thanks for clarifying this. Out of curiosity, does the template example in question work for you?\n", "created_at": "2014-10-31T14:15:28Z" }, { "body": "@lukas-vlcek did you wrap it in a `query` element and pass it as an escaped string? \n\nI've just pushed another improvement to the docs to include both of the above elements\n", "created_at": "2014-10-31T14:34:29Z" }, { "body": "@clintongormley thanks for this, but still I am unable to get search template with section work. Here is trivial recreation https://gist.github.com/lukas-vlcek/56540a7e8206d122d55c\nIs there anything I do wrong? Am I missing some escaping?\n", "created_at": "2014-10-31T18:06:38Z" }, { "body": "Just in case here is excerpt from server log:\n\n```\n[2014-10-31 19:03:29,575][DEBUG][action.search.type ] [Jester] [twitter][4], node[ccMkzywSSfi-4ttibpHh9w], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@7de37772] lastShard [true]\norg.elasticsearch.ElasticsearchParseException: Failed to parse template\n at org.elasticsearch.search.SearchService.parseTemplate(SearchService.java:612)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:514)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:487)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:256)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.elasticsearch.common.jackson.core.JsonParseException: Unexpected character ('{' (code 123)): was expecting either valid name character (for unquoted name) or double-quote (for quoted) to start field name\n at [Source: [B@438d7c6b; line: 1, column: 4]\n at org.elasticsearch.common.jackson.core.JsonParser._constructError(JsonParser.java:1419)\n at org.elasticsearch.common.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:508)\n at org.elasticsearch.common.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:437)\n at org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser._handleOddName(UTF8StreamJsonParser.java:1808)\n at org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser._parseName(UTF8StreamJsonParser.java:1496)\n at org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:693)\n at org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:50)\n at org.elasticsearch.index.query.TemplateQueryParser.parse(TemplateQueryParser.java:113)\n at 
org.elasticsearch.index.query.TemplateQueryParser.parse(TemplateQueryParser.java:103)\n at org.elasticsearch.search.SearchService.parseTemplate(SearchService.java:605)\n ... 9 more\n```\n", "created_at": "2014-10-31T18:12:00Z" }, { "body": "@lukas-vlcek I can't get this working either. No idea what is going on here :(\n", "created_at": "2014-11-01T14:45:00Z" }, { "body": "@clintongormley thanks for look at this. Shall I change the title of this issue and remove the [DOC] part? It is probably not a DOC issue anymore, is it?\n", "created_at": "2014-11-02T13:22:30Z" }, { "body": "After looking deep inside the code, I am wondering has this (i.e. the conditional clause thing) been ever working? The root clause is, inside the JsonXContentParser.java:nextToken() method, the parser.nextToken() throws an exception straight out of the box when it encounters the clause like {{#use_size}}. I guess such clause has not been implemented to be recognized yet. Notice that the parser is of type org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser . \n\nIt looks like we have to override the UTF8StreamJsonParser's nextToken() method and do some look ahead for the conditional clauses. \n", "created_at": "2014-11-06T01:09:38Z" }, { "body": "@wwken As I understand it, the JSON parser should never see this the `{{#use_size}}`. The template is passed as a string, not as JSON, so the Mustache should render the template (thus removing the conditionals) to generate JSON which can _then_ be parsed.\n\nThis would be much easier to debug if we had #6821\n", "created_at": "2014-11-06T10:05:58Z" }, { "body": " I fixed it. Please refer to this pull request: https://github.com/elasticsearch/elasticsearch/pull/8376/files\n(P.S. plz ignore the above few links as i messed up checking in some incorrect commit files before by mistake)\n", "created_at": "2014-11-07T03:40:07Z" }, { "body": "thanks @wwken - we'll take a look and get back to you\n", "created_at": "2014-11-07T13:18:52Z" }, { "body": "@clintongormley #8393 is my try to fix this issue. If you find this relevant and to the point then it might make sense to consider renaming this issue again to more general description (I think the main issue here is broken parser logic of TemplateQueryParser in case the template value is a single string token. In other words I think there is more general issue not directly related to conditional clauses only).\n", "created_at": "2014-11-07T17:21:43Z" }, { "body": "@lukas-vlcek I think for sure your solution is better! I have two suggestions on it though (minor, not any important):\n\n1) In this file, https://github.com/lukas-vlcek/elasticsearch/blob/8308/src/main/java/org/elasticsearch/search/SearchService.java, you can omit the lines from 617-635 since they are not needed\n\n2) in this file, https://github.com/lukas-vlcek/elasticsearch/blob/8308/src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java, you can separate the single if in line 113 to two if statements to make it more elegant :)\n\nthanks \n", "created_at": "2014-11-07T19:25:23Z" }, { "body": "@wwken thanks for looking at this. I am going to wait if ES devs give any feedback. May be they will want to fix it in very different approach. Who knows...\n", "created_at": "2014-11-09T14:28:08Z" }, { "body": "Bump.\n\nIs there any plan to have this fixed in 1.4? Or even backporting it to 1.3? We would like to make use of Search Templates but this issue is somehow limiting us. 
How can I help with this?\n", "created_at": "2014-11-12T09:58:31Z" }, { "body": "Closed by #11512\n", "created_at": "2015-06-23T17:54:59Z" }, { "body": "@clintongormley correct, this issue was created before I opened more general PR. Closing...\n", "created_at": "2015-06-24T06:32:40Z" } ], "number": 8308, "title": "Search Template - conditional clauses not rendering correctly" }
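For reference, once the parser accepts a template passed as a single escaped string (the fix in the PR that follows), a conditional clause can be expressed as in the sketch below, which mirrors the integration tests added in that PR; the `Client` instance and class name are assumed for illustration. The key point is that the whole template body is one escaped JSON string, so Mustache strips the `{{#use_size}}...{{/use_size}}` section before the rendered result is parsed as JSON.

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.bytes.BytesArray;

public class ConditionalTemplateExample {

    public static SearchResponse search(Client client) throws Exception {
        // the template is one escaped JSON string containing a Mustache conditional
        String templateString = "{" +
                "  \"template\" : \"{ {{#use_size}} \\\"size\\\": \\\"{{size}}\\\", {{/use_size}} \\\"query\\\":{\\\"match_all\\\":{}}}\"," +
                "  \"params\":{" +
                "    \"size\": 1," +
                "    \"use_size\": true" +
                "  }" +
                "}";
        SearchRequest searchRequest = new SearchRequest();
        searchRequest.indices("_all");
        searchRequest.templateSource(new BytesArray(templateString), false);
        return client.search(searchRequest).get();
    }
}
```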
{ "body": "I was trying to implement fix for #8308\n\nI think I nailed it down. It seems it is combination of two related issues:\n\n1) The `TemplateQueryParser.java` was not correctly parsing the request when the `template` contained a single `VALUE_STRING` token. I think this could not have been working before (obviously there were no tests for this use case). I improved the main `parse` method to detect string token. **Please review** namely the complex `if` conditions - I am sure there is a way how to express them in more elegant way.\n\n2) The second issue is with `SearchService.java` in `parseTemplate` method where it tries to parse the template for second time. When I commented this part out then all the tests that I added to `TemplateQueryTest.java` started to pass. **Please review** validity of commenting this out. I did not notice any tests that would be broken due to this but the chance is that there were no tests covering the logic behind second parsing pass.\n\nLooking for the feedback.\n", "number": 8393, "review_comments": [ { "body": "I'd separate this out into it's own test and add the exception as expected to the test annotation.\n", "created_at": "2014-11-13T13:51:16Z" }, { "body": "Would it make sense to separate these into their own test methods so they fail separately if they fail?\n", "created_at": "2014-11-13T14:08:21Z" }, { "body": "Makes sense given the current query template tests. Maybe add some documentation which clause covers which case?\n", "created_at": "2014-11-13T14:12:09Z" }, { "body": "When trying to figure out what exactly goes wrong when the above query is executed\\* I noticed that the template string above doesn't look quite right. When changing it to\n\n```\nString templateString = \"{\" +\n \" \\\"template\\\" : {\" +\n \" \\\"query\\\":{\\\"match_{{#use_it}}{{template}}{{/use_it}}\\\":{} },\" +\n \" \\\"params\\\":{\" +\n \" \\\"template\\\":\\\"all\\\",\" +\n \" \\\"use_it\\\": true\" +\n \" }\" +\n \"}\";\n```\n\nyour test is green on my master - am I missing something here or was that just a misunderstanding on how to formulate the template? Caveat: I didn't check re-formulating the other two tests yet.\n- And finally printing the non-quoted version after staring at the quoted one.\n", "created_at": "2015-04-08T18:48:12Z" }, { "body": "@MaineC it seems I misunderstood how to formulate the requests for Template Query.\n\nThe purpose of this PR was to fix the issue with **Search Templates** in the first place but IIRC both the Template Query and Search Templates were using the same code for parsing the templates thus I added also tests for Template Query. The template parsing logic was a bit dirty IMO (my changes made it even more messy) and it seemed to me that later modifications to this logic could easily break parsing logic in Template Query without noticing it in tests. Thus I decided to add specific tests. I tried to explain why I thought these tests were useful in comments.\n\nLooking at required request format of Template Query the string value should be provided under \"query\" key. So it might be useful to check why it was possible that my requests passed for me. You mentioned that you were trying this against current master. The chance is that parsing logic got changed in the mean time and does not support this exotic request format (which is probably good).\n\nDo you want me to look at this PR again? 
(I can not promise I can look at this quickly.)\n", "created_at": "2015-04-10T11:34:08Z" }, { "body": "> The purpose of this PR was to fix the issue with Search Templates in the first place but IIRC both the Template Query and Search Templates were using the same code for parsing the templates thus I added also tests for Template Query. \n\nMakes sense. I only saw those and missed your point about the search templates. Should have looked closer. Thanks for your clarification - re-reading your changes now. \n", "created_at": "2015-04-14T07:31:28Z" } ], "title": "Search Template - parse template if it is provided as a single escaped string." }
{ "commits": [ { "message": "Search Template - parse template if it is provided as a single escaped string. Closes #8308\n\n- Search template is correctly parsed if it is provided as single escaped string\n- Ignore exceptions in case of second parsing of template content (assuming it is escaped template string content)\n- New tests for relevant use cases" } ], "files": [ { "diff": "@@ -19,12 +19,9 @@\n package org.elasticsearch.index.query;\n \n import org.apache.lucene.search.Query;\n-import org.elasticsearch.ElasticsearchException;\n import org.elasticsearch.common.Nullable;\n import org.elasticsearch.common.bytes.BytesReference;\n import org.elasticsearch.common.inject.Inject;\n-import org.elasticsearch.common.logging.ESLogger;\n-import org.elasticsearch.common.logging.Loggers;\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentFactory;\n import org.elasticsearch.common.xcontent.XContentParser;\n@@ -113,17 +110,25 @@ public static TemplateContext parse(XContentParser parser, String paramsFieldnam\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {\n if (token == XContentParser.Token.FIELD_NAME) {\n currentFieldName = parser.currentName();\n- } else if (parameterMap.containsKey(currentFieldName)) {\n- type = parameterMap.get(currentFieldName);\n- if (token == XContentParser.Token.START_OBJECT) {\n- XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent());\n- builder.copyCurrentStructure(parser);\n- templateNameOrTemplateContent = builder.string();\n- } else {\n- templateNameOrTemplateContent = parser.text();\n+ } else {\n+ // template can be expresses as a single string value (see #8393)\n+ boolean isTemplateString = (\"template\".equals(currentFieldName) && token == XContentParser.Token.VALUE_STRING);\n+ if (parameterMap.containsKey(currentFieldName) || isTemplateString) {\n+ if (isTemplateString) {\n+ type = ScriptService.ScriptType.INLINE;\n+ } else {\n+ type = parameterMap.get(currentFieldName);\n+ }\n+ if (token == XContentParser.Token.START_OBJECT) {\n+ XContentBuilder builder = XContentBuilder.builder(parser.contentType().xContent());\n+ builder.copyCurrentStructure(parser);\n+ templateNameOrTemplateContent = builder.string();\n+ } else {\n+ templateNameOrTemplateContent = parser.text();\n+ }\n+ } else if (paramsFieldname.equals(currentFieldName)) {\n+ params = parser.map();\n }\n- } else if (paramsFieldname.equals(currentFieldName)) {\n- params = parser.map();\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java", "status": "modified" }, { "diff": "@@ -22,6 +22,7 @@\n import com.carrotsearch.hppc.ObjectOpenHashSet;\n import com.carrotsearch.hppc.ObjectSet;\n import com.carrotsearch.hppc.cursors.ObjectCursor;\n+import com.fasterxml.jackson.core.JsonParseException;\n import com.google.common.base.Charsets;\n import com.google.common.collect.ImmutableMap;\n \n@@ -647,10 +648,17 @@ private void parseTemplate(ShardSearchRequest request) {\n templateContext = new TemplateQueryParser.TemplateContext(ScriptService.ScriptType.FILE, templateContext.template(), templateContext.params());\n }\n if (parser != null) {\n- TemplateQueryParser.TemplateContext innerContext = TemplateQueryParser.parse(parser, \"params\");\n- if (hasLength(innerContext.template()) && !innerContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n- //An inner template referring to a filename or id\n- templateContext = new 
TemplateQueryParser.TemplateContext(innerContext.scriptType(), innerContext.template(), templateContext.params());\n+ try {\n+ TemplateQueryParser.TemplateContext innerContext = TemplateQueryParser.parse(parser, \"params\");\n+ if (hasLength(innerContext.template()) && !innerContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n+ //An inner template referring to a filename or id\n+ templateContext = new TemplateQueryParser.TemplateContext(innerContext.scriptType(), innerContext.template(), templateContext.params());\n+ }\n+ } catch (JsonParseException e) { // TODO: consider throwing better exception, dependency on Jackson exception is not ideal\n+ // If template is provided as a single string then it can contain conditional clauses.\n+ // In such case the parsing is expected to fail, it is ok to ignore the exception here because\n+ // the string is going to be processes by scripting engine and if it is invalid then\n+ // relevant exception will be fired by scripting engine later.\n }\n }\n }", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -122,6 +122,74 @@ public void testParser() throws IOException {\n assertTrue(\"Parsing template query failed.\", query instanceof ConstantScoreQuery);\n }\n \n+ /**\n+ * Test that the template query parser can parse and evaluate template expressed as a single string\n+ * with conditional clause.\n+ */\n+ @Test\n+ public void testParseTemplateAsSingleStringWithConditionalClause() throws IOException {\n+ String templateString = \"{\" +\n+ \" \\\"template\\\" : \\\"{ \\\\\\\"match_{{#use_it}}{{template}}{{/use_it}}\\\\\\\":{} }\\\",\" +\n+ \" \\\"params\\\":{\" +\n+ \" \\\"template\\\":\\\"all\\\",\" +\n+ \" \\\"use_it\\\": true\" +\n+ \" }\" +\n+ \"}\";\n+\n+ XContentParser templateSourceParser = XContentFactory.xContent(templateString).createParser(templateString);\n+ context.reset(templateSourceParser);\n+\n+ TemplateQueryParser parser = injector.getInstance(TemplateQueryParser.class);\n+ Query query = parser.parse(context);\n+ assertTrue(\"Parsing template query failed.\", query instanceof ConstantScoreQuery);\n+ }\n+\n+ /**\n+ * Test that the template query parser can parse and evaluate template expressed as a single string\n+ * but still it expects only the query specification (thus this test should fail with specific exception).\n+ */\n+ @Test(expected = QueryParsingException.class)\n+ public void testParseTemplateFailsToParseCompleteQueryAsSingleString() throws IOException {\n+ String templateString = \"{\" +\n+ \" \\\"template\\\" : \\\"{ \\\\\\\"size\\\\\\\": \\\\\\\"{{size}}\\\\\\\", \\\\\\\"query\\\\\\\":{\\\\\\\"match_all\\\\\\\":{}}}\\\",\" +\n+ \" \\\"params\\\":{\" +\n+ \" \\\"size\\\":2\" +\n+ \" }\\n\" +\n+ \"}\";\n+\n+ XContentParser templateSourceParser = XContentFactory.xContent(templateString).createParser(templateString);\n+ context.reset(templateSourceParser);\n+\n+ TemplateQueryParser parser = injector.getInstance(TemplateQueryParser.class);\n+ parser.parse(context);\n+ }\n+\n+ /**\n+ * Test that the template query parser can parse and evaluate template expressed as a single string\n+ * but still it expects only the query specification (thus this test should fail with specific exception).\n+ *\n+ * Compared to {@link #testParseTemplateFailsToParseCompleteQueryAsSingleString} this test verify\n+ * that it will fail correctly even if the query specification is wrapped by the \"query: { ... 
}\".\n+ * The point is that if it were to accept such input then it could silently ignore additional content\n+ * such as \"size: X, from: Y\" following the \"query\" without letting user know leading to hard-to-spot issues.\n+ * In fact this logic can be also sensitive to internal implementation of parser.\n+ */\n+ @Test(expected = QueryParsingException.class)\n+ public void testParseTemplateFailsToParseCompleteQueryAsSingleString2() throws IOException {\n+ String templateString = \"{\" +\n+ \" \\\"template\\\" : \\\"{ \\\\\\\"query\\\\\\\":{\\\\\\\"match_{{template}}\\\\\\\":{}}}\\\",\" +\n+ \" \\\"params\\\":{\" +\n+ \" \\\"template\\\":\\\"all\\\"\" +\n+ \" }\" +\n+ \"}\";\n+\n+ XContentParser templateSourceParser = XContentFactory.xContent(templateString).createParser(templateString);\n+ context.reset(templateSourceParser);\n+\n+ TemplateQueryParser parser = injector.getInstance(TemplateQueryParser.class);\n+ parser.parse(context);\n+ }\n+\n @Test\n public void testParserCanExtractTemplateNames() throws Exception {\n String templateString = \"{ \\\"template\\\": { \\\"file\\\": \\\"storedTemplate\\\" ,\\\"params\\\":{\\\"template\\\":\\\"all\\\" } } } \";", "filename": "src/test/java/org/elasticsearch/index/query/TemplateQueryParserTest.java", "status": "modified" }, { "diff": "@@ -196,6 +196,84 @@ public void testSearchRequestFail() throws Exception {\n assertThat(searchResponse.getHits().hits().length, equalTo(1));\n }\n \n+\n+ @Test\n+ public void testSearchTemplateQueryFromFile() throws Exception {\n+ SearchRequest searchRequest = new SearchRequest();\n+ searchRequest.indices(\"_all\");\n+ String templateString = \"{\" +\n+ \" \\\"template\\\" : { \\\"file\\\": \\\"full-query-template\\\" },\" +\n+ \" \\\"params\\\":{\" +\n+ \" \\\"mySize\\\": 2,\" +\n+ \" \\\"myField\\\": \\\"text\\\",\" +\n+ \" \\\"myValue\\\": \\\"value1\\\"\" +\n+ \" }\" +\n+ \"}\";\n+ BytesReference bytesRef = new BytesArray(templateString);\n+ searchRequest.templateSource(bytesRef, false);\n+ SearchResponse searchResponse = client().search(searchRequest).get();\n+ assertThat(searchResponse.getHits().hits().length, equalTo(1));\n+ }\n+\n+ /**\n+ * Test that template can be expressed as a single escaped string.\n+ */\n+ @Test\n+ public void testTemplateQueryAsEscapedString() throws Exception {\n+ SearchRequest searchRequest = new SearchRequest();\n+ searchRequest.indices(\"_all\");\n+ String templateString = \"{\" +\n+ \" \\\"template\\\" : \\\"{ \\\\\\\"size\\\\\\\": \\\\\\\"{{size}}\\\\\\\", \\\\\\\"query\\\\\\\":{\\\\\\\"match_all\\\\\\\":{}}}\\\",\" +\n+ \" \\\"params\\\":{\" +\n+ \" \\\"size\\\": 1\" +\n+ \" }\" +\n+ \"}\";\n+ BytesReference bytesRef = new BytesArray(templateString);\n+ searchRequest.templateSource(bytesRef, false);\n+ SearchResponse searchResponse = client().search(searchRequest).get();\n+ assertThat(searchResponse.getHits().hits().length, equalTo(1));\n+ }\n+\n+ /**\n+ * Test that template can contain conditional clause. 
In this case it is at the beginning of the string.\n+ */\n+ @Test\n+ public void testTemplateQueryAsEscapedStringStartingWithConditionalClause() throws Exception {\n+ SearchRequest searchRequest = new SearchRequest();\n+ searchRequest.indices(\"_all\");\n+ String templateString = \"{\" +\n+ \" \\\"template\\\" : \\\"{ {{#use_size}} \\\\\\\"size\\\\\\\": \\\\\\\"{{size}}\\\\\\\", {{/use_size}} \\\\\\\"query\\\\\\\":{\\\\\\\"match_all\\\\\\\":{}}}\\\",\" +\n+ \" \\\"params\\\":{\" +\n+ \" \\\"size\\\": 1,\" +\n+ \" \\\"use_size\\\": true\" +\n+ \" }\" +\n+ \"}\";\n+ BytesReference bytesRef = new BytesArray(templateString);\n+ searchRequest.templateSource(bytesRef, false);\n+ SearchResponse searchResponse = client().search(searchRequest).get();\n+ assertThat(searchResponse.getHits().hits().length, equalTo(1));\n+ }\n+\n+ /**\n+ * Test that template can contain conditional clause. In this case it is at the end of the string.\n+ */\n+ @Test\n+ public void testTemplateQueryAsEscapedStringWithConditionalClauseAtEnd() throws Exception {\n+ SearchRequest searchRequest = new SearchRequest();\n+ searchRequest.indices(\"_all\");\n+ String templateString = \"{\" +\n+ \" \\\"template\\\" : \\\"{ \\\\\\\"query\\\\\\\":{\\\\\\\"match_all\\\\\\\":{}} {{#use_size}}, \\\\\\\"size\\\\\\\": \\\\\\\"{{size}}\\\\\\\" {{/use_size}} }\\\",\" +\n+ \" \\\"params\\\":{\" +\n+ \" \\\"size\\\": 1,\" +\n+ \" \\\"use_size\\\": true\" +\n+ \" }\" +\n+ \"}\";\n+ BytesReference bytesRef = new BytesArray(templateString);\n+ searchRequest.templateSource(bytesRef, false);\n+ SearchResponse searchResponse = client().search(searchRequest).get();\n+ assertThat(searchResponse.getHits().hits().length, equalTo(1));\n+ }\n+\n @Test\n public void testThatParametersCanBeSet() throws Exception {\n index(\"test\", \"type\", \"1\", jsonBuilder().startObject().field(\"theField\", \"foo\").endObject());", "filename": "src/test/java/org/elasticsearch/index/query/TemplateQueryTest.java", "status": "modified" } ] }
{ "body": "Currently doing a rolling update of my 3 node cluster from 1.3.4 to 1.4.0, but after upgrading the first machine (es-node-1), I noticed Kibana stopped working, digging through the logs, I can reproduce the error:\n\n``` bash\ncurl es-node-1:9200/main-2014.39/_aliases\n```\n\nThis returns:\n\n``` json\n{\"error\":\"RemoteTransportException[[es-node-3][inet[/xxx.xxx.xxx.xxx:9300]][indices:admin/get]]; nested: ActionNotFoundTransportException[No handler for action [indices:admin/get]]; \",\"status\":500}\n```\n\nand in the log of es-node-3:\n\n```\n[es-node-3] Message not fully read (request) for [128466] and action [indices:admin/get], resetting\n```\n\nHowever the same query on the other two nodes works as expected (tried for all of my indicies), eg:\n\n``` bash\ncurl es-node-2:9200/main-2014.39/_aliases\n```\n\nIndexing seems to be happening as normal, and the _search endpoint also seems to work as expected.\n\nI've now shut down es-node-3 and everything seems to work as expected, continuing with update...\n", "comments": [ { "body": "Also just noticed in the logs got some further related warnings, which I think was as node-3 came up with 1.4.0 (my rolling update was es-node-1, es-node-3, es-node-2):\n\n```\n[es-node-2] Message not fully read (request) for [0] and action [internal:discovery/zen/unicast_gte_1_4], resetting\n```\n", "created_at": "2014-11-06T12:42:00Z" }, { "body": "Thanks for reporting, we've found the issue. It occurs when the master node has not yet been upgraded and receives a request for GET _aliases from an upgraded node. You can continue with your upgrade and the problem will disappear when you finish the upgrade. We'll work on a fix for it\n", "created_at": "2014-11-06T14:10:53Z" }, { "body": "Also, the messages in your second comment (regarding `internal:discovery/zen/unicast_gte_1_4`) are expected. The new zen discovery attempts to communicate with the node with the new protocol and then falls back to the old protocol if it is not successful. These messages should also disappear when the upgrade is complete\n", "created_at": "2014-11-06T14:14:53Z" }, { "body": "Thanks. Now everything is upgraded no problems.\n", "created_at": "2014-11-06T15:31:39Z" } ], "number": 8364, "title": "Issues during rolling update from 1.3.4 to 1.4.0" }
{ "body": "If a 1.4 node needs to forward a GET index API call to a <1.4 master it cannot use the GET index API endpoints as the master has no knowledge of it. This change detects that the master does not understand the initial request and instead tries it again using the old APIs. If these calls also do not work, an error is returned\n\nCloses #8364\n", "number": 8387, "review_comments": [ { "body": "Maybe use `ExceptionsHelper#unwrapCause`?\n", "created_at": "2014-11-07T13:14:10Z" }, { "body": "maybe we can add tests for get mappings, settings and warmers as well?\n", "created_at": "2014-11-07T13:15:35Z" }, { "body": "I vaguely remember some differences around the indices options used depending on what we retrieve. Are those still 100% bw compatible when downgrading the request? I guess so since if I remember correctly the get index behaviour is already bw compatible but wanted to double check ;)\n", "created_at": "2014-11-07T13:17:04Z" }, { "body": "To clarify, I meant the default indices options used if the user doesn't set them.\n", "created_at": "2014-11-07T13:21:01Z" }, { "body": "FYI if you want (not that it is a problem as it works now) you can leave the randomized number of shards when creating the index and retrieve it through `getNumShards(\"test\").numShards` which you can use for the comparison.\n", "created_at": "2014-11-07T13:53:26Z" }, { "body": "`createIndex(\"test\")` ? then you can remove the following `assertAcked`\n", "created_at": "2014-11-07T13:54:22Z" }, { "body": "not 100% sure we wanna return e1 here as failure, maybe the original one would be better given that the fallback call failed? I guess it depends on how you look at it... since we don't check the version of the node we are talking to. Maybe it's good as is, not sure really\n", "created_at": "2014-11-07T13:58:58Z" }, { "body": "can't the `aliasesResponse` be null here?\n", "created_at": "2014-11-07T13:59:56Z" }, { "body": "same as above, are these parameters always non null?\n", "created_at": "2014-11-07T14:00:50Z" }, { "body": "My take on this was, we know why the first exception (e) occurs since we have confirmed that it was ActionNotFound, so we can be pretty sure we are talking to a node on a previous version. Given that, we don't want to hide a 'real' exception such as IndexMissingException so is better to return the new exception (e1). WDYT?\n", "created_at": "2014-11-07T14:02:09Z" }, { "body": "This should be ok since the backward compatibility stuff will already have been applied to the indicesOptions by this point\n", "created_at": "2014-11-07T14:03:31Z" }, { "body": "right, sold!\n", "created_at": "2014-11-07T14:08:23Z" }, { "body": "all good to pass in null here as parameter I guess?\n", "created_at": "2014-11-07T14:27:30Z" }, { "body": "can we have this test on master as well please?\n", "created_at": "2014-11-07T17:41:22Z" }, { "body": "+1\n", "created_at": "2014-11-07T17:55:51Z" }, { "body": "added to master in https://github.com/elasticsearch/elasticsearch/commit/d0da605a3916550030f4a987f2f3be684294c7f3\n", "created_at": "2014-11-10T09:15:23Z" } ], "title": "Indices API: Fixed backward compatibility issue" }
{ "commits": [ { "message": "Indices API: Fixed backward compatibility issue\n\nIf a 1.4 node needs to forward a GET index API call to a <1.4 master it cannot use the GET index API endpoints as the master has no knowledge of it. This change detects that the master does not understand the initial request and instead tries it again using the old APIs. If these calls also do not work, an error is returned\n\nCloses #8364" } ], "files": [ { "diff": "@@ -22,16 +22,25 @@\n import com.carrotsearch.hppc.cursors.ObjectObjectCursor;\n import com.google.common.collect.ImmutableList;\n import org.elasticsearch.action.ActionResponse;\n+import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;\n+import org.elasticsearch.action.admin.indices.mapping.get.GetMappingsResponse;\n+import org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse;\n+import org.elasticsearch.action.admin.indices.warmer.get.GetWarmersResponse;\n import org.elasticsearch.cluster.metadata.AliasMetaData;\n import org.elasticsearch.cluster.metadata.MappingMetaData;\n import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.collect.ImmutableOpenMap.Builder;\n import org.elasticsearch.common.io.stream.StreamInput;\n import org.elasticsearch.common.io.stream.StreamOutput;\n import org.elasticsearch.common.settings.ImmutableSettings;\n import org.elasticsearch.common.settings.Settings;\n import org.elasticsearch.search.warmer.IndexWarmersMetaData;\n+import org.elasticsearch.search.warmer.IndexWarmersMetaData.Entry;\n \n import java.io.IOException;\n+import java.util.HashSet;\n+import java.util.List;\n+import java.util.Set;\n \n /**\n * A response for a delete index action.\n@@ -189,4 +198,52 @@ public void writeTo(StreamOutput out) throws IOException {\n ImmutableSettings.writeSettingsToStream(indexEntry.value, out);\n }\n }\n+\n+ public static GetIndexResponse convertResponses(GetAliasesResponse aliasesResponse, GetMappingsResponse mappingsResponse,\n+ GetSettingsResponse settingsResponse, GetWarmersResponse warmersResponse) {\n+ Set<String> indices = new HashSet<String>();\n+ Builder<String, ImmutableList<AliasMetaData>> aliasesBuilder = ImmutableOpenMap.builder();\n+ if (aliasesResponse != null) {\n+ ImmutableOpenMap<String, List<AliasMetaData>> returnedAliasesMap = aliasesResponse.getAliases();\n+ if (returnedAliasesMap != null) {\n+ for (ObjectObjectCursor<String, List<AliasMetaData>> entry : returnedAliasesMap) {\n+ ImmutableList.Builder<AliasMetaData> listBuilder = ImmutableList.builder();\n+ listBuilder.addAll(entry.value);\n+ aliasesBuilder.put(entry.key, listBuilder.build());\n+ indices.add(entry.key);\n+ }\n+ }\n+ }\n+ ImmutableOpenMap<String, ImmutableList<AliasMetaData>> aliases = aliasesBuilder.build();\n+ ImmutableOpenMap<String, ImmutableList<Entry>> warmers = null;\n+ if (warmersResponse != null) {\n+ warmers = warmersResponse.getWarmers();\n+ if (warmers != null) {\n+ for (ObjectObjectCursor<String, ImmutableList<Entry>> warmer : warmers) {\n+ indices.add(warmer.key);\n+ }\n+ }\n+ }\n+ ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> mappings = null;\n+ if (mappingsResponse != null) {\n+ mappings = mappingsResponse.getMappings();\n+ if (mappings != null) {\n+ for (ObjectObjectCursor<String, ImmutableOpenMap<String, MappingMetaData>> mapping : mappings) {\n+ indices.add(mapping.key);\n+ }\n+ }\n+ }\n+ ImmutableOpenMap<String, Settings> indexToSettings = null;\n+ if (settingsResponse != null) {\n+ indexToSettings = 
settingsResponse.getIndexToSettings();\n+ if (indexToSettings != null) {\n+ for (ObjectObjectCursor<String, Settings> settings : indexToSettings) {\n+ indices.add(settings.key);\n+ }\n+ }\n+ }\n+ GetIndexResponse response = new GetIndexResponse(indices.toArray(new String[indices.size()]), warmers, mappings,\n+ aliases, indexToSettings);\n+ return response;\n+ }\n }", "filename": "src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java", "status": "modified" }, { "diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.client.support;\n \n+import org.elasticsearch.ElasticsearchIllegalStateException;\n+import org.elasticsearch.ExceptionsHelper;\n import org.elasticsearch.action.*;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesAction;\n import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;\n@@ -142,6 +144,7 @@\n import org.elasticsearch.action.admin.indices.warmer.put.PutWarmerResponse;\n import org.elasticsearch.client.IndicesAdminClient;\n import org.elasticsearch.common.Nullable;\n+import org.elasticsearch.transport.ActionNotFoundTransportException;\n \n /**\n *\n@@ -239,8 +242,57 @@ public ActionFuture<GetIndexResponse> getIndex(GetIndexRequest request) {\n }\n \n @Override\n- public void getIndex(GetIndexRequest request, ActionListener<GetIndexResponse> listener) {\n- execute(GetIndexAction.INSTANCE, request, listener);\n+ public void getIndex(final GetIndexRequest request, final ActionListener<GetIndexResponse> listener) {\n+ execute(GetIndexAction.INSTANCE, request, new ActionListener<GetIndexResponse>() {\n+\n+ @Override\n+ public void onResponse(GetIndexResponse response) {\n+ listener.onResponse(response);\n+ }\n+\n+ @Override\n+ public void onFailure(Throwable e) {\n+ Throwable rootCause = ExceptionsHelper.unwrapCause(e);\n+ if (rootCause instanceof ActionNotFoundTransportException) {\n+ String[] features = request.features();\n+ GetAliasesResponse aliasResponse = null;\n+ GetMappingsResponse mappingResponse = null;\n+ GetSettingsResponse settingsResponse = null;\n+ GetWarmersResponse warmerResponse = null;\n+ try {\n+ for (String feature : features) {\n+ switch (feature) {\n+ case \"_alias\":\n+ case \"_aliases\":\n+ aliasResponse = prepareGetAliases(new String[0]).addIndices(request.indices())\n+ .setIndicesOptions(request.indicesOptions()).get();\n+ break;\n+ case \"_mapping\":\n+ case \"_mappings\":\n+ mappingResponse = prepareGetMappings(request.indices()).setIndicesOptions(request.indicesOptions()).get();\n+ break;\n+ case \"_settings\":\n+ settingsResponse = prepareGetSettings(request.indices()).setIndicesOptions(request.indicesOptions()).get();\n+ break;\n+ case \"_warmer\":\n+ case \"_warmers\":\n+ warmerResponse = prepareGetWarmers(request.indices()).setIndicesOptions(request.indicesOptions()).get();\n+ break;\n+ default:\n+ throw new ElasticsearchIllegalStateException(\"feature [\" + feature + \"] is not valid\");\n+ }\n+ }\n+ GetIndexResponse getIndexResponse = GetIndexResponse.convertResponses(aliasResponse, mappingResponse,\n+ settingsResponse, warmerResponse);\n+ onResponse(getIndexResponse);\n+ } catch (Throwable e1) {\n+ listener.onFailure(e1);\n+ }\n+ } else {\n+ listener.onFailure(e);\n+ }\n+ }\n+ });\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/client/support/AbstractIndicesAdminClient.java", "status": "modified" }, { "diff": "@@ -0,0 +1,110 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. 
See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.bwcompat;\n+\n+import com.google.common.collect.ImmutableList;\n+import org.elasticsearch.action.admin.indices.alias.Alias;\n+import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;\n+import org.elasticsearch.action.admin.indices.get.GetIndexResponse;\n+import org.elasticsearch.cluster.metadata.AliasMetaData;\n+import org.elasticsearch.cluster.metadata.MappingMetaData;\n+import org.elasticsearch.common.collect.ImmutableOpenMap;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.search.warmer.IndexWarmersMetaData.Entry;\n+import org.elasticsearch.test.ElasticsearchBackwardsCompatIntegrationTest;\n+import org.junit.Test;\n+\n+import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertAcked;\n+import static org.hamcrest.Matchers.*;\n+\n+public class GetIndexBackwardsCompatibilityTests extends ElasticsearchBackwardsCompatIntegrationTest {\n+\n+ @Test\n+ public void testGetAliases() throws Exception {\n+ CreateIndexResponse createIndexResponse = prepareCreate(\"test\").addAlias(new Alias(\"testAlias\")).execute().actionGet();\n+ assertAcked(createIndexResponse);\n+ GetIndexResponse getIndexResponse = client().admin().indices().prepareGetIndex().addIndices(\"test\").addFeatures(\"_aliases\")\n+ .execute().actionGet();\n+ ImmutableOpenMap<String, ImmutableList<AliasMetaData>> aliasesMap = getIndexResponse.aliases();\n+ assertThat(aliasesMap, notNullValue());\n+ assertThat(aliasesMap.size(), equalTo(1));\n+ ImmutableList<AliasMetaData> aliasesList = aliasesMap.get(\"test\");\n+ assertThat(aliasesList, notNullValue());\n+ assertThat(aliasesList.size(), equalTo(1));\n+ AliasMetaData alias = aliasesList.get(0);\n+ assertThat(alias, notNullValue());\n+ assertThat(alias.alias(), equalTo(\"testAlias\"));\n+ }\n+\n+ @Test\n+ public void testGetMappings() throws Exception {\n+ CreateIndexResponse createIndexResponse = prepareCreate(\"test\").addMapping(\"type1\", \"{\\\"type1\\\":{}}\").execute().actionGet();\n+ assertAcked(createIndexResponse);\n+ GetIndexResponse getIndexResponse = client().admin().indices().prepareGetIndex().addIndices(\"test\").addFeatures(\"_mappings\")\n+ .execute().actionGet();\n+ ImmutableOpenMap<String, ImmutableOpenMap<String, MappingMetaData>> mappings = getIndexResponse.mappings();\n+ assertThat(mappings, notNullValue());\n+ assertThat(mappings.size(), equalTo(1));\n+ ImmutableOpenMap<String, MappingMetaData> indexMappings = mappings.get(\"test\");\n+ assertThat(indexMappings, notNullValue());\n+ assertThat(indexMappings.size(), anyOf(equalTo(1), equalTo(2)));\n+ if (indexMappings.size() == 2) {\n+ MappingMetaData mapping = indexMappings.get(\"_default_\");\n+ assertThat(mapping, notNullValue());\n+ 
}\n+ MappingMetaData mapping = indexMappings.get(\"type1\");\n+ assertThat(mapping, notNullValue());\n+ assertThat(mapping.type(), equalTo(\"type1\"));\n+ }\n+\n+ @Test\n+ public void testGetSettings() throws Exception {\n+ CreateIndexResponse createIndexResponse = prepareCreate(\"test\").setSettings(ImmutableSettings.builder().put(\"number_of_shards\", 1)).execute().actionGet();\n+ assertAcked(createIndexResponse);\n+ GetIndexResponse getIndexResponse = client().admin().indices().prepareGetIndex().addIndices(\"test\").addFeatures(\"_settings\")\n+ .execute().actionGet();\n+ ImmutableOpenMap<String, Settings> settingsMap = getIndexResponse.settings();\n+ assertThat(settingsMap, notNullValue());\n+ assertThat(settingsMap.size(), equalTo(1));\n+ Settings settings = settingsMap.get(\"test\");\n+ assertThat(settings, notNullValue());\n+ assertThat(settings.get(\"index.number_of_shards\"), equalTo(\"1\"));\n+ }\n+\n+ @Test\n+ public void testGetWarmers() throws Exception {\n+ createIndex(\"test\");\n+ ensureSearchable(\"test\");\n+ assertAcked(client().admin().indices().preparePutWarmer(\"warmer1\").setSearchRequest(client().prepareSearch(\"test\")).get());\n+ ensureSearchable(\"test\");\n+ GetIndexResponse getIndexResponse = client().admin().indices().prepareGetIndex().addIndices(\"test\").addFeatures(\"_warmers\")\n+ .execute().actionGet();\n+ ImmutableOpenMap<String, ImmutableList<Entry>> warmersMap = getIndexResponse.warmers();\n+ assertThat(warmersMap, notNullValue());\n+ assertThat(warmersMap.size(), equalTo(1));\n+ ImmutableList<Entry> warmersList = warmersMap.get(\"test\");\n+ assertThat(warmersList, notNullValue());\n+ assertThat(warmersList.size(), equalTo(1));\n+ Entry warmer = warmersList.get(0);\n+ assertThat(warmer, notNullValue());\n+ assertThat(warmer.name(), equalTo(\"warmer1\"));\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/bwcompat/GetIndexBackwardsCompatibilityTests.java", "status": "added" } ] }
{ "body": "**Version 1.2.2**\n\nAn elastic search instance is running with the following arguments:\n\n```\n/usr/lib/jvm/java-7-oracle/bin/java -Xms15g -Xmx15g -Xss256k -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.pidfile=/var/run/elasticsearch.pid -Des.path.home=/usr/share/elasticsearch -cp :/usr/share/elasticsearch/lib/elasticsearch-1.2.2.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/* -Des.default.config=/etc/elasticsearch/elasticsearch.yml -Des.default.path.home=/usr/share/elasticsearch -Des.default.path.logs=/var/log/elasticsearch -Des.default.path.data=/var/lib/elasticsearch -Des.default.path.work=/tmp/elasticsearch -Des.default.path.conf=/etc/elasticsearch org.elasticsearch.bootstrap.Elasticsearch\n```\n\n`es.default.path.data` is coming from the deb package's init.d script.\n\nInside the `elasticsearch.yml`, if we declare `path.data` as an array, we end up with 3 data directories according to ES.\n\n``` yaml\npath:\n data:\n - /var/lib/elasticsearch/data1\n - /var/lib/elasticsearch/data2\n```\n\nWhen ES runs, it now thinks there are 3 data directories. The one being set as default through the CLI arguments, plus the 2 new ones that were added in the config.\n\nAccording to `/_nodes/stats?all=true`, we have this (snipped to relevant parts):\n\n``` json\n{\n \"nodes\": {\n \"FOSlksI8ToK6G4YdIylYvQ\": {\n \"fs\": {\n \"data\": [{\n \"path\": \"/var/lib/elasticsearch/home/nodes/0\",\n \"mount\": \"/\",\n \"dev\": \"/dev/sda6\",\n \"total_in_bytes\": 982560202752,\n \"free_in_bytes\": 860723675136,\n \"available_in_bytes\": 810788864000,\n \"disk_reads\": 12402994,\n \"disk_writes\": 14437097,\n \"disk_io_op\": 26840091,\n \"disk_read_size_in_bytes\": 364877214720,\n \"disk_write_size_in_bytes\": 1342705430528,\n \"disk_io_size_in_bytes\": 1707582645248,\n \"disk_queue\": \"2.4\",\n \"disk_service_time\": \"2.1\"\n }, {\n \"path\": \"/var/lib/elasticsearch/data1/home/nodes/0\",\n \"mount\": \"/var/lib/elasticsearch/data1\",\n \"dev\": \"/dev/sdc1\",\n \"total_in_bytes\": 787239469056,\n \"free_in_bytes\": 787167297536,\n \"available_in_bytes\": 747154268160,\n \"disk_reads\": 691,\n \"disk_writes\": 49627,\n \"disk_io_op\": 50318,\n \"disk_read_size_in_bytes\": 2851840,\n \"disk_write_size_in_bytes\": 12656103424,\n \"disk_io_size_in_bytes\": 12658955264,\n \"disk_queue\": \"2.4\",\n \"disk_service_time\": \"2.1\"\n }, {\n \"path\": \"/var/lib/elasticsearch/data2/home/nodes/0\",\n \"mount\": \"/var/lib/elasticsearch/data2\",\n \"dev\": \"/dev/sdd1\",\n \"total_in_bytes\": 787239469056,\n \"free_in_bytes\": 787142717440,\n \"available_in_bytes\": 747129688064,\n \"disk_reads\": 9461,\n \"disk_writes\": 883718,\n \"disk_io_op\": 893179,\n \"disk_read_size_in_bytes\": 39285760,\n \"disk_write_size_in_bytes\": 174312841216,\n \"disk_io_size_in_bytes\": 174352126976,\n \"disk_queue\": \"2.4\",\n \"disk_service_time\": \"2.1\"\n }]\n },\n }\n }\n}\n```\n\nAnd if we look at `/_nodes` we see what the raw settings are:\n\n``` json\n{\n \"nodes\": {\n \"FOSlksI8ToK6G4YdIylYvQ\": {\n \"settings\": {\n \"path\": {\n \"data\": \"/var/lib/elasticsearch\",\n \"work\": \"/tmp/elasticsearch\",\n \"home\": \"/usr/share/elasticsearch\",\n \"conf\": \"/etc/elasticsearch\",\n \"logs\": \"/var/log/elasticsearch\",\n \"data.0\": \"/var/lib/elasticsearch/data1\",\n \"data.1\": \"/var/lib/elasticsearch/data2\"\n }\n }\n }\n }\n}\n```\n\nSo in 
this case, we see that there are 3 different data keys according to settings, and 3 different data directories. A **data**, **data.0** and **data.1**.\n\nNow, if we declare in our yaml, **data** as just a comma separated as a string, this does the correct behavior and overrides what the default was.\n\n``` yaml\npath:\n data: /var/lib/elasticsearch/data1,/var/lib/elasticsearch/data2\n```\n\n``` json\n \"data\": [{\n \"path\": \"/var/lib/elasticsearch/data1/home/nodes/0\",\n \"mount\": \"/var/lib/elasticsearch/data1\",\n \"dev\": \"/dev/sdc1\",\n \"total_in_bytes\": 787239469056,\n \"free_in_bytes\": 787167391744,\n \"available_in_bytes\": 747154362368,\n \"disk_reads\": 687,\n \"disk_writes\": 49803,\n \"disk_io_op\": 50490,\n \"disk_read_size_in_bytes\": 2835456,\n \"disk_write_size_in_bytes\": 12657053696,\n \"disk_io_size_in_bytes\": 12659889152,\n \"disk_queue\": \"0\",\n \"disk_service_time\": \"0\"\n }, {\n \"path\": \"/var/lib/elasticsearch/data2/home/nodes/0\",\n \"mount\": \"/var/lib/elasticsearch/data2\",\n \"dev\": \"/dev/sdd1\",\n \"total_in_bytes\": 787239469056,\n \"free_in_bytes\": 787167391744,\n \"available_in_bytes\": 747154362368,\n \"disk_reads\": 3436,\n \"disk_writes\": 684613,\n \"disk_io_op\": 688049,\n \"disk_read_size_in_bytes\": 14308352,\n \"disk_write_size_in_bytes\": 132454891520,\n \"disk_io_size_in_bytes\": 132469199872,\n \"disk_queue\": \"0\",\n \"disk_service_time\": \"0\"\n }]\n```\n\n``` json\n \"path\": {\n \"data\": \"/var/lib/elasticsearch/data1,/var/lib/elasticsearch/data2\",\n \"work\": \"/tmp/elasticsearch\",\n \"home\": \"/usr/share/elasticsearch\",\n \"conf\": \"/etc/elasticsearch\",\n \"logs\": \"/var/log/elasticsearch\"\n },\n```\n\nSo the behavior that we're seeing is that the data array is being flattened into a list, and them being appended to the default. Whereas I'd expect this new list to override the default.\n\nI feel that the problem here is that data can be declared both as a string and as a list. It may be more _correct_ if data were always a list internally, and when declaring as a string, it's coerced into a list, even if that list had one item. But I'm not proposing the solution, just a thought.\n\nI understand that this may not necessarily be a bug, but it is extremely unexpected behavior and took quite a bit of debugging to track down exactly what was going on.\n", "comments": [ { "body": "I agree about unexpected! Thanks for reporting this.\n", "created_at": "2014-07-16T08:45:03Z" }, { "body": "@clintongormley Let me know if there's anything else I can do to help, but I assume this is easily reproduced. :)\n", "created_at": "2014-07-16T09:23:54Z" } ], "number": 6887, "title": "es.default.path.data conflicts with path.data when path.data is a list" }
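The behavior is easy to reproduce directly against the settings builder, which is also how the fix in the PR below is tested. A small self-contained sketch, with the paths taken from the report above: before the fix the merged settings report all three paths, while with the override behavior only the two list entries remain.

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

public class PathDataOverrideExample {

    public static void main(String[] args) {
        // what the init script passes as the default: a single path.data value
        Settings defaults = ImmutableSettings.settingsBuilder()
                .put("path.data", "/var/lib/elasticsearch")
                .build();

        // what a YAML list in elasticsearch.yml flattens to: path.data.0 and path.data.1
        Settings fromYaml = ImmutableSettings.settingsBuilder()
                .putArray("path.data", "/var/lib/elasticsearch/data1", "/var/lib/elasticsearch/data2")
                .build();

        Settings merged = ImmutableSettings.settingsBuilder()
                .put(defaults)
                .put(fromYaml)
                .build();

        // before the fix this printed three entries (the default plus the two list items);
        // with the override behavior only the two list items are printed
        for (String path : merged.getAsArray("path.data")) {
            System.out.println(path);
        }
    }
}
```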
{ "body": "In the case you try to put two settings, one being an array and one being\na field, together, the settings were merged instead of being overridden.\n\nFirst config file\nmy.value: 1\n\nSecond config file\nmy.value: [ 2, 3 ]\n\nIf you execute\n\nsettingsBuilder().put(settings1).put(settings2).build()\n\nnow only values 2,3 will be in the final settings\n\nCloses #6887\n", "number": 8381, "review_comments": [ { "body": "Maybe we should clone the map first in order not to modify the parameters of this method?\n", "created_at": "2014-11-09T21:57:09Z" }, { "body": "Should we also add tests for the following cases:\n- overriding an array with a single value\n- overriding an array with a shorter array\n", "created_at": "2014-11-09T22:09:36Z" }, { "body": "sorry I now realize my previous comment was silly, I thought this method modified the incoming map (\"settings\") while it actually modifies the wrapped map (\"map\"). I don't think copying into an immutable map helps\n", "created_at": "2014-11-10T11:53:29Z" }, { "body": "Given that the changes that you made also apply to objects (the `else` branch), I'd also like to have tests that check what happens when replacing a single value with an object, an object with an array, etc.\n", "created_at": "2014-11-10T12:17:01Z" }, { "body": "+1 this also looks useful for objects\n", "created_at": "2014-11-10T12:17:18Z" }, { "body": "can you add a space after the `if`?\n", "created_at": "2015-01-05T15:13:03Z" }, { "body": "minor indentation issue?\n", "created_at": "2015-01-05T15:13:50Z" } ], "title": "Ensure fields are overriden and not merged when using arrays" }
{ "commits": [ { "message": "Settings: Ensure fields are overriden and not merged when using arrays\n\nIn the case you try to merge two settings, one being an array and one being\na field, together, the settings were merged instead of being overridden.\n\nFirst config\nmy.value: 1\n\nSecond config\nmy.value: [ 2, 3 ]\n\nIf you execute\n\nsettingsBuilder().put(settings1).put(settings2).build()\n\nnow only values 2,3 will be in the final settings\n\nCloses #8381" } ], "files": [ { "diff": "@@ -20,7 +20,9 @@\n package org.elasticsearch.common.settings;\n \n import com.google.common.base.Charsets;\n+import com.google.common.base.Predicate;\n import com.google.common.collect.ImmutableMap;\n+import com.google.common.collect.Iterables;\n import com.google.common.collect.Lists;\n import com.google.common.collect.Maps;\n import org.elasticsearch.ElasticsearchIllegalArgumentException;\n@@ -45,6 +47,8 @@\n import java.nio.file.Path;\n import java.util.*;\n import java.util.concurrent.TimeUnit;\n+import java.util.regex.Matcher;\n+import java.util.regex.Pattern;\n \n import static org.elasticsearch.common.Strings.toCamelCase;\n import static org.elasticsearch.common.unit.ByteSizeValue.parseBytesSizeValue;\n@@ -57,6 +61,7 @@\n public class ImmutableSettings implements Settings {\n \n public static final Settings EMPTY = new Builder().build();\n+ private final static Pattern ARRAY_PATTERN = Pattern.compile(\"(.*)\\\\.\\\\d+$\");\n \n private ImmutableMap<String, String> settings;\n private final ImmutableMap<String, String> forcedUnderscoreSettings;\n@@ -873,6 +878,7 @@ public Builder put(String settingPrefix, String groupName, String[] settings, St\n * Sets all the provided settings.\n */\n public Builder put(Settings settings) {\n+ removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(settings.getAsMap());\n map.putAll(settings.getAsMap());\n classLoader = settings.getClassLoaderIfSet();\n return this;\n@@ -882,10 +888,42 @@ public Builder put(Settings settings) {\n * Sets all the provided settings.\n */\n public Builder put(Map<String, String> settings) {\n+ removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(settings);\n map.putAll(settings);\n return this;\n }\n \n+ /**\n+ * Removes non array values from the existing map, if settings contains an array value instead\n+ *\n+ * Example:\n+ * Existing map contains: {key:value}\n+ * New map contains: {key:[value1,value2]} (which has been flattened to {}key.0:value1,key.1:value2})\n+ *\n+ * This ensure that that the 'key' field gets removed from the map in order to override all the\n+ * data instead of merging\n+ */\n+ private void removeNonArraysFieldsIfNewSettingsContainsFieldAsArray(Map<String, String> settings) {\n+ List<String> prefixesToRemove = new ArrayList<>();\n+ for (final Map.Entry<String, String> entry : settings.entrySet()) {\n+ final Matcher matcher = ARRAY_PATTERN.matcher(entry.getKey());\n+ if (matcher.matches()) {\n+ prefixesToRemove.add(matcher.group(1));\n+ } else if (Iterables.any(map.keySet(), startsWith(entry.getKey() + \".\"))) {\n+ prefixesToRemove.add(entry.getKey());\n+ }\n+ }\n+ for (String prefix : prefixesToRemove) {\n+ Iterator<Map.Entry<String, String>> iterator = map.entrySet().iterator();\n+ while (iterator.hasNext()) {\n+ Map.Entry<String, String> entry = iterator.next();\n+ if (entry.getKey().startsWith(prefix + \".\") || entry.getKey().equals(prefix)) {\n+ iterator.remove();\n+ }\n+ }\n+ }\n+ }\n+\n /**\n * Sets all the provided settings.\n */\n@@ -1090,4 +1128,22 @@ public Settings build() {\n return new 
ImmutableSettings(Collections.unmodifiableMap(map), classLoader);\n }\n }\n+\n+ private static StartsWithPredicate startsWith(String prefix) {\n+ return new StartsWithPredicate(prefix);\n+ }\n+\n+ private static final class StartsWithPredicate implements Predicate<String> {\n+\n+ private String prefix;\n+\n+ public StartsWithPredicate(String prefix) {\n+ this.prefix = prefix;\n+ }\n+\n+ @Override\n+ public boolean apply(String input) {\n+ return input.startsWith(prefix);\n+ }\n+ }\n }", "filename": "src/main/java/org/elasticsearch/common/settings/ImmutableSettings.java", "status": "modified" }, { "diff": "@@ -21,10 +21,12 @@\n \n import org.elasticsearch.common.settings.bar.BarTestClass;\n import org.elasticsearch.common.settings.foo.FooTestClass;\n+import org.elasticsearch.common.settings.loader.YamlSettingsLoader;\n import org.elasticsearch.test.ElasticsearchTestCase;\n import org.hamcrest.Matchers;\n import org.junit.Test;\n \n+import java.io.IOException;\n import java.util.List;\n import java.util.Map;\n import java.util.Set;\n@@ -216,6 +218,110 @@ public void testNames() {\n assertThat(names.size(), equalTo(2));\n assertTrue(names.contains(\"bar\"));\n assertTrue(names.contains(\"baz\"));\n+ }\n+\n+ @Test\n+ public void testThatArraysAreOverriddenCorrectly() throws IOException {\n+ // overriding a single value with an array\n+ Settings settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value\", \"1\").build())\n+ .put(settingsBuilder().putArray(\"value\", \"2\", \"3\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), arrayContaining(\"2\", \"3\"));\n+\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().put(\"value\", \"1\").build())\n+ .put(settingsBuilder().putArray(\"value\", \"2\", \"3\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), arrayContaining(\"2\", \"3\"));\n+\n+ settings = settingsBuilder()\n+ .put(new YamlSettingsLoader().load(\"value: 1\"))\n+ .put(new YamlSettingsLoader().load(\"value: [ 2, 3 ]\"))\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), arrayContaining(\"2\", \"3\"));\n+\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().put(\"value.with.deep.key\", \"1\").build())\n+ .put(settingsBuilder().putArray(\"value.with.deep.key\", \"2\", \"3\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value.with.deep.key\"), arrayContaining(\"2\", \"3\"));\n+\n+ // overriding an array with a shorter array\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value\", \"1\", \"2\").build())\n+ .put(settingsBuilder().putArray(\"value\", \"3\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), arrayContaining(\"3\"));\n+\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value\", \"1\", \"2\", \"3\").build())\n+ .put(settingsBuilder().putArray(\"value\", \"4\", \"5\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), arrayContaining(\"4\", \"5\"));\n+\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value.deep.key\", \"1\", \"2\", \"3\").build())\n+ .put(settingsBuilder().putArray(\"value.deep.key\", \"4\", \"5\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value.deep.key\"), arrayContaining(\"4\", \"5\"));\n \n+ // overriding an array with a longer array\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value\", \"1\", \"2\").build())\n+ .put(settingsBuilder().putArray(\"value\", \"3\", \"4\", \"5\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), 
arrayContaining(\"3\", \"4\", \"5\"));\n+\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value.deep.key\", \"1\", \"2\", \"3\").build())\n+ .put(settingsBuilder().putArray(\"value.deep.key\", \"4\", \"5\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value.deep.key\"), arrayContaining(\"4\", \"5\"));\n+\n+ // overriding an array with a single value\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value\", \"1\", \"2\").build())\n+ .put(settingsBuilder().put(\"value\", \"3\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), arrayContaining(\"3\"));\n+\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value.deep.key\", \"1\", \"2\").build())\n+ .put(settingsBuilder().put(\"value.deep.key\", \"3\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value.deep.key\"), arrayContaining(\"3\"));\n+\n+ // test that other arrays are not overridden\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value\", \"1\", \"2\", \"3\").putArray(\"a\", \"b\", \"c\").build())\n+ .put(settingsBuilder().putArray(\"value\", \"4\", \"5\").putArray(\"d\", \"e\", \"f\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), arrayContaining(\"4\", \"5\"));\n+ assertThat(settings.getAsArray(\"a\"), arrayContaining(\"b\", \"c\"));\n+ assertThat(settings.getAsArray(\"d\"), arrayContaining(\"e\", \"f\"));\n+\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value.deep.key\", \"1\", \"2\", \"3\").putArray(\"a\", \"b\", \"c\").build())\n+ .put(settingsBuilder().putArray(\"value.deep.key\", \"4\", \"5\").putArray(\"d\", \"e\", \"f\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value.deep.key\"), arrayContaining(\"4\", \"5\"));\n+ assertThat(settings.getAsArray(\"a\"), notNullValue());\n+ assertThat(settings.getAsArray(\"d\"), notNullValue());\n+\n+ // overriding a deeper structure with an array\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().put(\"value.data\", \"1\").build())\n+ .put(settingsBuilder().putArray(\"value\", \"4\", \"5\").build())\n+ .build();\n+ assertThat(settings.getAsArray(\"value\"), arrayContaining(\"4\", \"5\"));\n+\n+ // overriding an array with a deeper structure\n+ settings = settingsBuilder()\n+ .put(settingsBuilder().putArray(\"value\", \"4\", \"5\").build())\n+ .put(settingsBuilder().put(\"value.data\", \"1\").build())\n+ .build();\n+ assertThat(settings.get(\"value.data\"), is(\"1\"));\n+ assertThat(settings.get(\"value\"), is(nullValue()));\n }\n }", "filename": "src/test/java/org/elasticsearch/common/settings/ImmutableSettingsTests.java", "status": "modified" } ] }
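Note: the diff above operates on the flattened form of the settings, where an array value such as `my.value: [2, 3]` is stored as the keys `my.value.0` and `my.value.1`. The sketch below illustrates that key-prefix cleanup idea in stand-alone form using only `java.util`; the class and method names are invented for the example and this is not the actual `ImmutableSettings` code.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch: merge flattened settings maps so an array replaces a scalar and vice versa. */
public class FlattenedSettingsMerge {

    // "key.0", "key.1", ... mark array elements in the flattened form
    private static final Pattern ARRAY_KEY = Pattern.compile("(.*)\\.\\d+$");

    /** Copies {@code incoming} into {@code target}, first dropping old entries under the same prefix. */
    public static void override(Map<String, String> target, Map<String, String> incoming) {
        for (String key : incoming.keySet()) {
            Matcher m = ARRAY_KEY.matcher(key);
            // if the incoming key is an array element, clear the old scalar/array under its prefix;
            // if it is a scalar, clear any old array elements stored under "key."
            String prefix = m.matches() ? m.group(1) : key;
            Iterator<Map.Entry<String, String>> it = target.entrySet().iterator();
            while (it.hasNext()) {
                String existing = it.next().getKey();
                if (existing.equals(prefix) || existing.startsWith(prefix + ".")) {
                    it.remove();
                }
            }
        }
        target.putAll(incoming);
    }

    public static void main(String[] args) {
        Map<String, String> settings = new HashMap<>();
        settings.put("my.value", "1");

        Map<String, String> update = new HashMap<>();
        update.put("my.value.0", "2");
        update.put("my.value.1", "3");

        override(settings, update);
        // only the flattened array keys remain; the old scalar "my.value" is gone
        System.out.println(settings);
    }
}
```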
{ "body": "If you do a _bulk that contains an update to a child doc (parent/child) and you don't (or forget to) specify the parent id, you will get an NPE error message in the item response. It would be good to adjust the error message to RoutingMissingException (just like when you do a single update (not _bulk) to the same doc but forget to specify parent id.\n\nSteps to reproduce:\n\n```\ncurl -XDELETE localhost:9200/test1\n\ncurl -XPUT localhost:9200/test1 -d '{\n \"mappings\": {\n \"p\": {},\n \"c\": {\n \"_parent\": {\n \"type\": \"p\"\n }\n }\n }\n}'\n\ncurl -XPUT localhost:9200/test1/c/1?parent=1 -d '{\n}'\n\ncurl -XPOST localhost:9200/test1/c/_bulk -d '\n{ \"update\": { \"_id\": \"1\" }}\n{ \"doc\": { \"foo\": \"bar\" } }\n'\n```\n\nResponse:\n\n```\n{\"error\":\"NullPointerException[null]\",\"status\":500}\n```\n", "comments": [ { "body": "This is fixed in: https://github.com/elasticsearch/elasticsearch/pull/8378\n", "created_at": "2014-11-07T06:19:32Z" } ], "number": 8365, "title": "Bulk update child doc, NPE error message when parent is not specified" }
{ "body": "Closes #8365\n", "number": 8378, "review_comments": [], "title": "Fix bulk update NPE error when parent is not specified for child doc" }
{ "commits": [ { "message": "Fix of https://github.com/elasticsearch/elasticsearch/issues/8259: Better handling of tabs vs spaces in elasticsearch.yml\n\n - Throw an exception if there is a 'tab' character in the elasticsearch.yml file" }, { "message": "Revert \"Fix of https://github.com/elasticsearch/elasticsearch/issues/8259: Better handling of tabs vs spaces in elasticsearch.yml\"\n\nThis reverts commit 7967f02c1a5e9d1c50731ff23f871e54e78d483f." }, { "message": "fix of Bulk update child doc, NPE error message when parent is not specified #8365\n - Throw an RoutingMissingException instead of NPE" } ], "files": [ { "diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.action.bulk;\n \n+import org.elasticsearch.action.RoutingMissingException;\n+\n import com.google.common.collect.Lists;\n import com.google.common.collect.Maps;\n import com.google.common.collect.Sets;\n@@ -285,7 +287,8 @@ private void executeBulk(final BulkRequest bulkRequest, final long startTime, fi\n String concreteIndex = concreteIndices.getConcreteIndex(updateRequest.index());\n MappingMetaData mappingMd = clusterState.metaData().index(concreteIndex).mappingOrDefault(updateRequest.type());\n if (mappingMd != null && mappingMd.routing().required() && updateRequest.routing() == null) {\n- continue; // What to do?\n+ //Bulk update child doc, NPE error message when parent is not specified #8365 \n+ throw new RoutingMissingException(concreteIndex, updateRequest.type(), updateRequest.id());\n }\n ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, updateRequest.type(), updateRequest.id(), updateRequest.routing()).shardId();\n List<BulkItemRequest> list = requestsByShard.get(shardId);", "filename": "src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java", "status": "modified" } ] }
{ "body": "Is the example of filtered query template [1] working? It seems to contain extra `</6>` string which seems as an issue to me. I am unable to test this example even after I remove this extra string.\n\n[1] http://www.elasticsearch.org/guide/en/elasticsearch/reference/1.3/search-template.html#_conditional_clauses\n", "comments": [ { "body": "I was testing it with ES 1.3.0\n", "created_at": "2014-10-31T13:04:29Z" }, { "body": "@clintongormley thanks for clarifying this. Out of curiosity, does the template example in question work for you?\n", "created_at": "2014-10-31T14:15:28Z" }, { "body": "@lukas-vlcek did you wrap it in a `query` element and pass it as an escaped string? \n\nI've just pushed another improvement to the docs to include both of the above elements\n", "created_at": "2014-10-31T14:34:29Z" }, { "body": "@clintongormley thanks for this, but still I am unable to get search template with section work. Here is trivial recreation https://gist.github.com/lukas-vlcek/56540a7e8206d122d55c\nIs there anything I do wrong? Am I missing some escaping?\n", "created_at": "2014-10-31T18:06:38Z" }, { "body": "Just in case here is excerpt from server log:\n\n```\n[2014-10-31 19:03:29,575][DEBUG][action.search.type ] [Jester] [twitter][4], node[ccMkzywSSfi-4ttibpHh9w], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@7de37772] lastShard [true]\norg.elasticsearch.ElasticsearchParseException: Failed to parse template\n at org.elasticsearch.search.SearchService.parseTemplate(SearchService.java:612)\n at org.elasticsearch.search.SearchService.createContext(SearchService.java:514)\n at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:487)\n at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:256)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:206)\n at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:203)\n at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)\n at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n at java.lang.Thread.run(Thread.java:745)\nCaused by: org.elasticsearch.common.jackson.core.JsonParseException: Unexpected character ('{' (code 123)): was expecting either valid name character (for unquoted name) or double-quote (for quoted) to start field name\n at [Source: [B@438d7c6b; line: 1, column: 4]\n at org.elasticsearch.common.jackson.core.JsonParser._constructError(JsonParser.java:1419)\n at org.elasticsearch.common.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:508)\n at org.elasticsearch.common.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:437)\n at org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser._handleOddName(UTF8StreamJsonParser.java:1808)\n at org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser._parseName(UTF8StreamJsonParser.java:1496)\n at org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:693)\n at org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:50)\n at org.elasticsearch.index.query.TemplateQueryParser.parse(TemplateQueryParser.java:113)\n at 
org.elasticsearch.index.query.TemplateQueryParser.parse(TemplateQueryParser.java:103)\n at org.elasticsearch.search.SearchService.parseTemplate(SearchService.java:605)\n ... 9 more\n```\n", "created_at": "2014-10-31T18:12:00Z" }, { "body": "@lukas-vlcek I can't get this working either. No idea what is going on here :(\n", "created_at": "2014-11-01T14:45:00Z" }, { "body": "@clintongormley thanks for look at this. Shall I change the title of this issue and remove the [DOC] part? It is probably not a DOC issue anymore, is it?\n", "created_at": "2014-11-02T13:22:30Z" }, { "body": "After looking deep inside the code, I am wondering has this (i.e. the conditional clause thing) been ever working? The root clause is, inside the JsonXContentParser.java:nextToken() method, the parser.nextToken() throws an exception straight out of the box when it encounters the clause like {{#use_size}}. I guess such clause has not been implemented to be recognized yet. Notice that the parser is of type org.elasticsearch.common.jackson.core.json.UTF8StreamJsonParser . \n\nIt looks like we have to override the UTF8StreamJsonParser's nextToken() method and do some look ahead for the conditional clauses. \n", "created_at": "2014-11-06T01:09:38Z" }, { "body": "@wwken As I understand it, the JSON parser should never see this the `{{#use_size}}`. The template is passed as a string, not as JSON, so the Mustache should render the template (thus removing the conditionals) to generate JSON which can _then_ be parsed.\n\nThis would be much easier to debug if we had #6821\n", "created_at": "2014-11-06T10:05:58Z" }, { "body": " I fixed it. Please refer to this pull request: https://github.com/elasticsearch/elasticsearch/pull/8376/files\n(P.S. plz ignore the above few links as i messed up checking in some incorrect commit files before by mistake)\n", "created_at": "2014-11-07T03:40:07Z" }, { "body": "thanks @wwken - we'll take a look and get back to you\n", "created_at": "2014-11-07T13:18:52Z" }, { "body": "@clintongormley #8393 is my try to fix this issue. If you find this relevant and to the point then it might make sense to consider renaming this issue again to more general description (I think the main issue here is broken parser logic of TemplateQueryParser in case the template value is a single string token. In other words I think there is more general issue not directly related to conditional clauses only).\n", "created_at": "2014-11-07T17:21:43Z" }, { "body": "@lukas-vlcek I think for sure your solution is better! I have two suggestions on it though (minor, not any important):\n\n1) In this file, https://github.com/lukas-vlcek/elasticsearch/blob/8308/src/main/java/org/elasticsearch/search/SearchService.java, you can omit the lines from 617-635 since they are not needed\n\n2) in this file, https://github.com/lukas-vlcek/elasticsearch/blob/8308/src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java, you can separate the single if in line 113 to two if statements to make it more elegant :)\n\nthanks \n", "created_at": "2014-11-07T19:25:23Z" }, { "body": "@wwken thanks for looking at this. I am going to wait if ES devs give any feedback. May be they will want to fix it in very different approach. Who knows...\n", "created_at": "2014-11-09T14:28:08Z" }, { "body": "Bump.\n\nIs there any plan to have this fixed in 1.4? Or even backporting it to 1.3? We would like to make use of Search Templates but this issue is somehow limiting us. 
How can I help with this?\n", "created_at": "2014-11-12T09:58:31Z" }, { "body": "Closed by #11512\n", "created_at": "2015-06-23T17:54:59Z" }, { "body": "@clintongormley correct, this issue was created before I opened more general PR. Closing...\n", "created_at": "2015-06-24T06:32:40Z" } ], "number": 8308, "title": "Search Template - conditional clauses not rendering correctly" }
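For context, the template in this issue relies on Mustache-style conditional sections (`{{#key}} ... {{/key}}`) that have to be resolved before the result can be parsed as JSON. Below is a toy, stand-alone sketch of reducing one boolean-controlled section; it handles only the first occurrence of a single key, its names are invented, and it is not the change that eventually fixed the issue, just an illustration of what the rendering step must do.

```java
/**
 * Toy reduction of a single {{#key}} ... {{/key}} section, driven by a boolean
 * parameter, before the resulting string is handed to a JSON parser.
 */
public class ConditionalSection {

    public static String reduce(String template, String key, boolean enabled) {
        String open = "{{#" + key + "}}";
        String close = "{{/" + key + "}}";
        int start = template.indexOf(open);
        int end = template.indexOf(close);
        if (start < 0 || end < start) {
            return template; // no well-formed section for this key
        }
        if (enabled) {
            // keep the contents, drop only the tags
            return template.substring(0, start)
                    + template.substring(start + open.length(), end)
                    + template.substring(end + close.length());
        }
        // drop the tags and everything between them
        return template.substring(0, start) + template.substring(end + close.length());
    }

    public static void main(String[] args) {
        String template = "{ {{#use_size}} \"size\": \"{{size}}\", {{/use_size}} \"query\": { \"match_all\": {} } }";
        System.out.println(reduce(template, "use_size", true));
        System.out.println(reduce(template, "use_size", false));
    }
}
```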
{ "body": "- implemented the conditional parsing capabilities\n- attached few junit test cases to test it\n\nCloses #8308\n", "number": 8376, "review_comments": [], "title": "Search Template - conditional clauses not rendering correctly" }
{ "commits": [ { "message": "Fix of https://github.com/elasticsearch/elasticsearch/issues/8259: Better handling of tabs vs spaces in elasticsearch.yml\n\n - Throw an exception if there is a 'tab' character in the elasticsearch.yml file" }, { "message": "Fix of Search Template - conditional clauses not rendering correctly #8308\n - implemented the conditional parsing capabilities\n - attached few junit test cases to test it" }, { "message": "Fix of Search Template - conditional clauses not rendering correctly #8308\n - implemented the conditional parsing capabilities\n - attached few junit test cases to test it (reverted from commit bf1ae3121d35072211113023be7f59239a053560)" }, { "message": "Fix of https://github.com/elasticsearch/elasticsearch/issues/8259: Better handling of tabs vs spaces in elasticsearch.yml\n\n - Throw an exception if there is a 'tab' character in the elasticsearch.yml file (reverted from commit 7967f02c1a5e9d1c50731ff23f871e54e78d483f)" } ], "files": [ { "diff": "@@ -33,6 +33,7 @@\n \n import java.io.IOException;\n import java.util.HashMap;\n+import java.util.Iterator;\n import java.util.Map;\n \n /**\n@@ -157,5 +158,69 @@ public ScriptService.ScriptType scriptType(){\n public String toString(){\n return type + \" \" + template;\n }\n+ \n+ /*\n+ * Search Template - conditional clauses not rendering correctly #8308\n+ */\n+ public void reduceConditionalClauses() throws IOException {\n+ if (params != null && params.size() > 0) {\n+ StringBuffer templateSb = new StringBuffer(template);\n+ reduceConditionalClauses(templateSb, params);\n+ template = templateSb.toString();\n+ }\n+ }\n+\n+ @SuppressWarnings(\"unchecked\")\n+ private void reduceConditionalClauses(StringBuffer templateSb, Map<String, Object> params) throws IOException {\n+ Iterator<?> it = params.entrySet().iterator();\n+ while (it.hasNext()) {\n+ Map.Entry<String, Object> pairs = (Map.Entry<String, Object>) it.next();\n+ String openTag = \"{{#\" + pairs.getKey() + \"}}\";\n+ String closeTag = \"{{/\" + pairs.getKey() + \"}}\";\n+ // System.out.println(openClause + \", \" + closeClause);\n+ Object v = pairs.getValue();\n+ if (v instanceof Boolean && !((Boolean) v)) {\n+ removeTwoTagsFromStringContents(templateSb, openTag, closeTag, true);\n+ } else {\n+ removeTwoTagsFromStringContents(templateSb, openTag, closeTag, false);\n+ if (v instanceof Map<?, ?>) {\n+ reduceConditionalClauses(templateSb, (Map<String, Object>) v); \n+ }\n+ }\n+\n+ }\n+ }\n+\n+ private static void removeTwoTagsFromStringContents(StringBuffer sb, String openTag, String closeTag,\n+ boolean shouldRemoveContentsInBewteen) throws IOException {\n+ while (true) {\n+ int posOpen = sb.indexOf(openTag);\n+ if (posOpen > -1) {\n+ if (!shouldRemoveContentsInBewteen)\n+ sb = sb.replace(posOpen, posOpen + openTag.length(), \"\");\n+ } else {\n+ break;\n+ }\n+ int posClose = sb.indexOf(closeTag);\n+ if (posClose > -1) {\n+ if (posOpen < 0) {\n+ throw new IOException(\"Search template is missing the opening tag: \" + openTag\n+ + \", which was calculated from the template params.\");\n+ } else if (posOpen > posClose) {\n+ throw new IOException(\"Search template is having opening tag: \" + openTag + \" placed after the closing tag: \"\n+ + closeTag + \"!\");\n+ }\n+ if (!shouldRemoveContentsInBewteen)\n+ sb = sb.replace(posClose, posClose + closeTag.length(), \"\");\n+ } else {\n+ if (posOpen > -1)\n+ throw new IOException(\"Search template is missing the closing tag: \" + closeTag\n+ + \", which was calculated from the template params.\");\n+ }\n+ if 
(shouldRemoveContentsInBewteen) {\n+ sb = sb.replace(posOpen, posClose+closeTag.length(), \"\");\n+ }\n+ }\n+ }\n }\n }", "filename": "src/main/java/org/elasticsearch/index/query/TemplateQueryParser.java", "status": "modified" }, { "diff": "@@ -611,6 +611,7 @@ private void parseTemplate(ShardSearchRequest request) {\n try {\n parser = XContentFactory.xContent(request.templateSource()).createParser(request.templateSource());\n templateContext = TemplateQueryParser.parse(parser, \"params\", \"template\");\n+ templateContext.reduceConditionalClauses();\n \n if (templateContext.scriptType().equals(ScriptService.ScriptType.INLINE)) {\n //Try to double parse for nested template id/file", "filename": "src/main/java/org/elasticsearch/search/SearchService.java", "status": "modified" }, { "diff": "@@ -56,6 +56,8 @@\n \n import java.io.IOException;\n \n+import static org.elasticsearch.common.io.Streams.copyToStringFromClasspath;\n+\n /**\n * Test parsing and executing a template request.\n */\n@@ -133,4 +135,35 @@ public void testParserCanExtractTemplateNames() throws Exception {\n Query query = parser.parse(context);\n assertTrue(\"Parsing template query failed.\", query instanceof ConstantScoreQuery);\n }\n+ \n+ @Test\n+ public void testParserWithConditionalClause() throws IOException {\n+ String templateString = copyToStringFromClasspath(\"/org/elasticsearch/index/query/search-template_conditional-clause-1-on.json\");\n+ XContentParser templateSourceParser = XContentFactory.xContent(templateString).createParser(templateString);\n+ TemplateQueryParser.TemplateContext templateContext = TemplateQueryParser.parse(templateSourceParser, \"params\", \"template\");\n+ templateContext.reduceConditionalClauses();\n+ assertTrue(templateContext.template().replace(\" \", \"\").equals(\"{\\\"size\\\":\\\"{{size}}\\\",\\\"query\\\":{\\\"match_all\\\":{}}}\"));\n+ \n+ templateString = copyToStringFromClasspath(\"/org/elasticsearch/index/query/search-template_conditional-clause-1-off.json\");\n+ templateSourceParser = XContentFactory.xContent(templateString).createParser(templateString);\n+ templateContext = TemplateQueryParser.parse(templateSourceParser, \"params\", \"template\");\n+ templateContext.reduceConditionalClauses();\n+ assertTrue(templateContext.template().replace(\" \", \"\").equals(\"{\\\"query\\\":{\\\"match_all\\\":{}}}\"));\n+ }\n+ \n+ @Test\n+ public void testParserWithConditionalClause2() throws IOException {\n+ String templateString = copyToStringFromClasspath(\"/org/elasticsearch/index/query/search-template_conditional-clause-2-on.json\");\n+ XContentParser templateSourceParser = XContentFactory.xContent(templateString).createParser(templateString);\n+ TemplateQueryParser.TemplateContext templateContext = TemplateQueryParser.parse(templateSourceParser, \"params\", \"template\");\n+ templateContext.reduceConditionalClauses();\n+ assertTrue(templateContext.template().replace(\" \", \"\").equals(\"{\\\"query\\\":{\\\"match_all\\\":{}},\\\"filter\\\":{\\\"range\\\":{\\\"line_no\\\":{\\\"gte\\\":\\\"{{start}}\\\",\\\"lte\\\":\\\"{{end}}\\\"}}}}\"));\n+ \n+ templateString = copyToStringFromClasspath(\"/org/elasticsearch/index/query/search-template_conditional-clause-2-off.json\");\n+ templateSourceParser = XContentFactory.xContent(templateString).createParser(templateString);\n+ templateContext = TemplateQueryParser.parse(templateSourceParser, \"params\", \"template\");\n+ templateContext.reduceConditionalClauses();\n+ assertTrue(templateContext.template().replace(\" \", 
\"\").equals(\"{\\\"query\\\":{\\\"match_all\\\":{}},\\\"filter\\\":{}}\"));\n+ }\n+ \n }", "filename": "src/test/java/org/elasticsearch/index/query/TemplateQueryParserTest.java", "status": "modified" }, { "diff": "@@ -0,0 +1,7 @@\n+{\n+ \"template\" : \"{ {{#use_size}} \\\"size\\\": \\\"{{size}}\\\", {{/use_size}} \\\"query\\\":{\\\"match_all\\\":{}}}\",\n+ \"params\":{\n+ \"size\":2,\n+ \"use_size\": false\n+ }\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/index/query/search-template_conditional-clause-1-off.json", "status": "added" }, { "diff": "@@ -0,0 +1,7 @@\n+{\n+ \"template\" : \"{ {{#use_size}} \\\"size\\\": \\\"{{size}}\\\", {{/use_size}} \\\"query\\\":{\\\"match_all\\\":{}}}\",\n+ \"params\":{\n+ \"size\":2,\n+ \"use_size\": true\n+ }\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/index/query/search-template_conditional-clause-1-on.json", "status": "added" }, { "diff": "@@ -0,0 +1,7 @@\n+{\n+ \"template\": \"{ \\\"query\\\":{\\\"match_all\\\":{}}, \\\"filter\\\":{ {{#line_no}} \\\"range\\\": { \\\"line_no\\\": { {{#start}} \\\"gte\\\": \\\"{{start}}\\\" {{#end}},{{/end}} {{/start}} {{#end}} \\\"lte\\\": \\\"{{end}}\\\" {{/end}} } } {{/line_no}} }}\",\n+ \"params\": {\n+ \"text\": \"words to search for\",\n+ \"line_no\": false\n+ }\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/index/query/search-template_conditional-clause-2-off.json", "status": "added" }, { "diff": "@@ -0,0 +1,10 @@\n+{\n+ \"template\": \"{ \\\"query\\\":{\\\"match_all\\\":{}}, \\\"filter\\\":{ {{#line_no}} \\\"range\\\": { \\\"line_no\\\": { {{#start}} \\\"gte\\\": \\\"{{start}}\\\" {{#end}},{{/end}} {{/start}} {{#end}} \\\"lte\\\": \\\"{{end}}\\\" {{/end}} } } {{/line_no}} }}\",\n+ \"params\": {\n+ \"text\": \"words to search for\",\n+ \"line_no\": {\n+ \"start\": 10,\n+ \"end\": 20\n+ }\n+ }\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/index/query/search-template_conditional-clause-2-on.json", "status": "added" } ] }
{ "body": "Hi,\n\nFor the following version:\n\n``` javascript\n \"version\": {\n \"number\": \"1.1.1\",\n \"build_hash\": \"f1585f096d3f3985e73456debdc1a0745f512bbc\",\n \"build_timestamp\": \"2014-04-16T14:27:12Z\",\n \"build_snapshot\": false,\n \"lucene_version\": \"4.7\"\n }\n```\n\nI'm getting weird behaviour for `and` filter combined with `size` in the request payload.\n\n``` javascript\n{\n \"query\": {\n \"match_all\": {}\n },\n \"filter\": {\n \"and\": [\n [\n {\n \"exists\": {\n \"field\": \"boo.foo\"\n }\n },\n {\n \"term\": {\n \"something\": \"46850009-cef7-4703-8877-29a2577f6d1d\"\n }\n }\n ]\n ]\n },\n \"_source\": [\n // list of fields\n ],\n \"sort\": [\n {\n \"foo_id\": \"asc\"\n }\n ],\n \"size\": \"30\"\n}\n```\n\nThis query results with `count=20`, but only 10 hits (so `size` falls back to default).\nHowever, if I put the `size: 30` attribute on top of the query payload like so:\n\n``` javascript\n{\n \"size\": 30,\n \"query\": {\n // ... snip\n```\n\nI receive `count=20` with 20 hits as expected.\nFWIW, the order of payload properties doesn't seem to matter when not using `and` filter.\n\nAm I missing something?\n", "comments": [ { "body": "@aerosol You have double arrays in your `and` filter, which is throwing off the parsing. I've changed the title to the real issue: we should throw a syntax error in this situation.\n", "created_at": "2014-08-18T13:13:13Z" }, { "body": "aha, nice catch. Thanks @clintongormley \n", "created_at": "2014-08-18T13:26:50Z" }, { "body": "Closing as `and` is now deprecated anyway.\n", "created_at": "2015-08-26T15:12:08Z" } ], "number": 7311, "title": "Make `and` filter parsing stricter" }
{ "body": "Closes #7311\n", "number": 8356, "review_comments": [], "title": "Make `and` filter parsing stricter" }
{ "commits": [ { "message": "Fix of https://github.com/elasticsearch/elasticsearch/issues/8259: Better handling of tabs vs spaces in elasticsearch.yml\n\n - Throw an exception if there is a 'tab' character in the elasticsearch.yml file" }, { "message": "Fix of #7311 Make `and` filter parsing stricter\n - Throw an exception if there is double array inside a 'AND' filter" }, { "message": "Revert \"Fix of https://github.com/elasticsearch/elasticsearch/issues/8259: Better handling of tabs vs spaces in elasticsearch.yml\"\n\nThis reverts commit 7967f02c1a5e9d1c50731ff23f871e54e78d483f." } ], "files": [ { "diff": "@@ -60,12 +60,18 @@ public Filter parse(QueryParseContext parseContext) throws IOException, QueryPar\n String currentFieldName = null;\n XContentParser.Token token = parser.currentToken();\n if (token == XContentParser.Token.START_ARRAY) {\n+ boolean subSequentToken = true;\n while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) {\n+ if(subSequentToken && token == XContentParser.Token.START_ARRAY) {\n+ //Make `and` filter parsing stricter #7311\n+ throw new QueryParsingException(parseContext.index(), \"[and] filter should not have double or more nested arrays.\");\n+ }\n filtersFound = true;\n Filter filter = parseContext.parseInnerFilter();\n if (filter != null) {\n filters.add(filter);\n }\n+ subSequentToken = false;\n }\n } else {\n while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) {", "filename": "src/main/java/org/elasticsearch/index/query/AndFilterParser.java", "status": "modified" } ] }
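The change above rejects a doubly nested array inside the `and` filter instead of silently mis-parsing it. The stand-alone sketch below expresses the same token-level check with Jackson's streaming parser; it assumes Jackson 2.x on the classpath, the class and method names are invented, and unlike the patch it rejects a nested array anywhere in the clause list rather than only as the first element.

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

/**
 * Sketch: reject a doubly nested array such as "and": [ [ {...}, {...} ] ]
 * instead of silently mis-parsing it.
 */
public class StrictAndArrayCheck {

    /** Throws if the array the parser is positioned on directly contains another array. */
    static void checkNoNestedArray(JsonParser parser) throws Exception {
        JsonToken token;
        while ((token = parser.nextToken()) != JsonToken.END_ARRAY) {
            if (token == JsonToken.START_ARRAY) {
                throw new IllegalArgumentException(
                        "[and] filter should not contain a nested array of filters");
            }
            if (token == JsonToken.START_OBJECT) {
                parser.skipChildren(); // a single inner filter clause -- fine
            }
        }
    }

    public static void main(String[] args) throws Exception {
        String good = "[ {\"term\": {\"user\": \"kimchy\"}}, {\"exists\": {\"field\": \"foo\"}} ]";
        String bad  = "[ [ {\"term\": {\"user\": \"kimchy\"}} ] ]";

        JsonFactory factory = new JsonFactory();

        JsonParser p1 = factory.createParser(good);
        p1.nextToken();              // move onto START_ARRAY
        checkNoNestedArray(p1);      // passes silently

        JsonParser p2 = factory.createParser(bad);
        p2.nextToken();
        checkNoNestedArray(p2);      // throws IllegalArgumentException
    }
}
```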
{ "body": "There are various bugs to do with `match` query rewriting.\n## Wrong `fuzzy_rewrite` default\n\nThe docs for the [boolean match query](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-match-query.html#_boolean) say that fuzziness should use a `constant_score` rewrite method by default. Instead it uses the `top_terms_N` method.\n## `fuzzy_rewrite` never applied\n\nThe `fuzzy_rewrite` parameter is never applied, so it is impossible to change it. This is fixed with the following patch:\n\n```\ndiff --git a/src/main/java/org/elasticsearch/index/search/MatchQuery.java b/src/main/java/org/elasticsearch/index/search/MatchQuery.java\nindex b776416..7cc1f4f 100644\n--- a/src/main/java/org/elasticsearch/index/search/MatchQuery.java\n+++ b/src/main/java/org/elasticsearch/index/search/MatchQuery.java\n@@ -303,10 +303,11 @@ public class MatchQuery {\n if (query instanceof FuzzyQuery) {\n QueryParsers.setRewriteMethod((FuzzyQuery) query, fuzzyRewriteMethod);\n }\n+ return query;\n }\n int edits = fuzziness.asDistance(term.text());\n FuzzyQuery query = new FuzzyQuery(term, edits, fuzzyPrefixLength, maxExpansions, transpositions);\n- QueryParsers.setRewriteMethod(query, rewriteMethod);\n+ QueryParsers.setRewriteMethod((FuzzyQuery) query, fuzzyRewriteMethod);\n return query;\n }\n if (mapper != null) {\n@@ -318,4 +319,4 @@ public class MatchQuery {\n return new TermQuery(term);\n }\n\n-}\n\\ No newline at end of file\n+}\n```\n## Unused `rewrite` parameter\n\nThe `match` query accepts a `rewrite` parameter which is never used. \n## Bad rewriting of `match_phrase_prefix`\n\nThe `match_phrase_prefix` should (IMO) use a `constant_score_auto` rewrite on the final term (and this should be settable with `rewrite`), but this functionality is implemented with the MultiPhraseQuery which doesn't support the rewrite methods. \n", "comments": [ { "body": "@clintongormley Your fix for the fuzzy rewrite looks good. However, where do you see that the default rewrite method for fuzzy is `top_terms_n`?\n\n> The match_phrase_prefix should (IMO) use a constant_score_auto rewrite on the final term (and this should be settable with rewrite), but this functionality is implemented with the MultiPhraseQuery which doesn't support the rewrite methods. \n\nThe rewrite methods are for multi-term queries but `match_phrase_prefix` is a bit more complicated than that. In particular positions need to be taken into account so I'm not sure if we can allow to take all possible terms into account (even without taking care of scoring) without making this query super costly.\n\nIndeed rewrite is not used, and apart from fuzzy queries the `match` query never generates multi-term queries so we should probably remove that parameter.\n", "created_at": "2014-10-02T12:06:38Z" }, { "body": "> @clintongormley Your fix for the fuzzy rewrite looks good. However, where do you see that the default rewrite method for fuzzy is top_terms_n?\n\nThe `fuzzy_rewrite` param defaults to `null`, so if you don't set it, you get the default setting:\nhttps://github.com/apache/lucene-solr/blob/trunk/lucene/core/src/java/org/apache/lucene/search/FuzzyQuery.java#L101\n", "created_at": "2014-10-17T07:09:06Z" } ], "number": 6932, "title": "Match query rewrite and fuzzy_rewrite not applied" }
{ "body": "Fixed documentation since the default rewrite method for fuzzy queries is to\nselect top terms, fixed usage of the fuzzy rewrite method, and removed unused\n`rewrite` parameter.\n\nClose #6932\n", "number": 8352, "review_comments": [], "title": "Minor fixes to the `match` query" }
{ "commits": [ { "message": "Minor fixes to the `match` query.\n\nFixed documentation since the default rewrite method for fuzzy queries is to\nselect top terms, fixed usage of the fuzzy rewrite method, and removed unused\n`rewrite` parameter.\n\nClose #6932" } ], "files": [ { "diff": "@@ -123,8 +123,6 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n } else if (\"minimum_should_match\".equals(currentFieldName) || \"minimumShouldMatch\".equals(currentFieldName)) {\n minimumShouldMatch = parser.textOrNull();\n- } else if (\"rewrite\".equals(currentFieldName)) {\n- matchQuery.setRewriteMethod(QueryParsers.parseRewriteMethod(parseContext.parseFieldMatcher(), parser.textOrNull(), null));\n } else if (\"fuzzy_rewrite\".equals(currentFieldName) || \"fuzzyRewrite\".equals(currentFieldName)) {\n matchQuery.setFuzzyRewriteMethod(QueryParsers.parseRewriteMethod(parseContext.parseFieldMatcher(), parser.textOrNull(), null));\n } else if (\"fuzzy_transpositions\".equals(currentFieldName)) {", "filename": "core/src/main/java/org/elasticsearch/index/query/MatchQueryParser.java", "status": "modified" }, { "diff": "@@ -113,8 +113,6 @@ public Query parse(QueryParseContext parseContext) throws IOException, QueryPars\n }\n } else if (\"minimum_should_match\".equals(currentFieldName) || \"minimumShouldMatch\".equals(currentFieldName)) {\n minimumShouldMatch = parser.textOrNull();\n- } else if (\"rewrite\".equals(currentFieldName)) {\n- multiMatchQuery.setRewriteMethod(QueryParsers.parseRewriteMethod(parseContext.parseFieldMatcher(), parser.textOrNull(), null));\n } else if (\"fuzzy_rewrite\".equals(currentFieldName) || \"fuzzyRewrite\".equals(currentFieldName)) {\n multiMatchQuery.setFuzzyRewriteMethod(QueryParsers.parseRewriteMethod(parseContext.parseFieldMatcher(), parser.textOrNull(), null));\n } else if (\"use_dis_max\".equals(currentFieldName) || \"useDisMax\".equals(currentFieldName)) {", "filename": "core/src/main/java/org/elasticsearch/index/query/MultiMatchQueryParser.java", "status": "modified" }, { "diff": "@@ -29,7 +29,6 @@\n import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery;\n import org.elasticsearch.common.lucene.search.Queries;\n import org.elasticsearch.common.unit.Fuzziness;\n-import org.elasticsearch.index.mapper.FieldMapper;\n import org.elasticsearch.index.mapper.MappedFieldType;\n import org.elasticsearch.index.query.QueryParseContext;\n import org.elasticsearch.index.query.support.QueryParsers;\n@@ -68,8 +67,6 @@ public static enum ZeroTermsQuery {\n \n protected boolean transpositions = FuzzyQuery.defaultTranspositions;\n \n- protected MultiTermQuery.RewriteMethod rewriteMethod;\n-\n protected MultiTermQuery.RewriteMethod fuzzyRewriteMethod;\n \n protected boolean lenient;\n@@ -118,10 +115,6 @@ public void setTranspositions(boolean transpositions) {\n this.transpositions = transpositions;\n }\n \n- public void setRewriteMethod(MultiTermQuery.RewriteMethod rewriteMethod) {\n- this.rewriteMethod = rewriteMethod;\n- }\n-\n public void setFuzzyRewriteMethod(MultiTermQuery.RewriteMethod fuzzyRewriteMethod) {\n this.fuzzyRewriteMethod = fuzzyRewriteMethod;\n }\n@@ -278,10 +271,11 @@ protected Query blendTermQuery(Term term, MappedFieldType fieldType) {\n if (query instanceof FuzzyQuery) {\n QueryParsers.setRewriteMethod((FuzzyQuery) query, fuzzyRewriteMethod);\n }\n+ return query;\n }\n int edits = fuzziness.asDistance(term.text());\n FuzzyQuery query = new FuzzyQuery(term, edits, fuzzyPrefixLength, maxExpansions, transpositions);\n- 
QueryParsers.setRewriteMethod(query, rewriteMethod);\n+ QueryParsers.setRewriteMethod(query, fuzzyRewriteMethod);\n return query;\n }\n if (fieldType != null) {", "filename": "core/src/main/java/org/elasticsearch/index/search/MatchQuery.java", "status": "modified" }, { "diff": "@@ -46,7 +46,7 @@ See <<fuzziness>> for allowed settings.\n \n The `prefix_length` and\n `max_expansions` can be set in this case to control the fuzzy process.\n-If the fuzzy option is set the query will use `constant_score_rewrite`\n+If the fuzzy option is set the query will use `top_terms_blended_freqs_${max_expansions}`\n as its <<query-dsl-multi-term-rewrite,rewrite\n method>> the `fuzzy_rewrite` parameter allows to control how the query will get\n rewritten.", "filename": "docs/reference/query-dsl/match-query.asciidoc", "status": "modified" } ] }
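For reference, the `fuzzy_rewrite` parameter ultimately controls the rewrite method of the Lucene `FuzzyQuery` that the match query builds when `fuzziness` is set; the bug was that the parsed value was never applied. A minimal sketch of what setting a rewrite method looks like, assuming a Lucene 4.x-era API (the constructor and constant shown are from that line of releases and differ in newer Lucene versions):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.MultiTermQuery;

public class FuzzyRewriteDemo {
    public static void main(String[] args) {
        // field name and term text are made up for the example
        Term term = new Term("message", "quikc");

        // maxEdits = 2, prefixLength = 0, maxExpansions = 50, transpositions = true
        FuzzyQuery query = new FuzzyQuery(term, 2, 0, 50, true);

        // Left untouched, FuzzyQuery keeps its own top-terms rewrite; overriding it
        // is what the match query's "fuzzy_rewrite" parameter is meant to do.
        query.setRewriteMethod(MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE);

        System.out.println(query);
    }
}
```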
{ "body": "ES 1.3.3\n1. Have two nodes that form a cluster (one shard per index, no replicas).\n2. Insert some documents (multiple indices).\n3. Put repository, backup the cluster (global state = true, partial = false ).\n4. Stop both nodes.\n5. Delete the data folder of one of the nodes (path.data).\n6. Start the two nodes (the cluster has red status).\n7. Close all indices\n7. Put repository, restore (global state = true, partial = false).\n\nObserved : Half of the shards remain unassigned.\n\nNote : If deleting data folders of both nodes (on step 5), after restore everything is ok.\n", "comments": [], "number": 8224, "title": "Unassigned shards after restore" }
{ "body": "Fixes the issue with restoring of an index that had only some of its primary shards allocated before it was closed.\n\nFixes #8224\n", "number": 8341, "review_comments": [ { "body": "can this be `else if` saves a line or two\n", "created_at": "2014-11-09T20:45:40Z" } ], "title": "Restore of indices that are only partially available in the cluster" }
{ "commits": [ { "message": "Snapshot/Restore: restore of indices that are only partially available in the cluster\n\nFixes the issue with restoring of an index that had only some of its primary shards allocated before it was closed.\n\nFixes #8224" } ], "files": [ { "diff": "@@ -159,39 +159,47 @@ public boolean allocateUnassigned(RoutingAllocation allocation) {\n \n // check if the counts meets the minimum set\n int requiredAllocation = 1;\n- try {\n- IndexMetaData indexMetaData = routingNodes.metaData().index(shard.index());\n- String initialShards = indexMetaData.settings().get(INDEX_RECOVERY_INITIAL_SHARDS, settings.get(INDEX_RECOVERY_INITIAL_SHARDS, this.initialShards));\n- if (\"quorum\".equals(initialShards)) {\n- if (indexMetaData.numberOfReplicas() > 1) {\n- requiredAllocation = ((1 + indexMetaData.numberOfReplicas()) / 2) + 1;\n- }\n- } else if (\"quorum-1\".equals(initialShards) || \"half\".equals(initialShards)) {\n- if (indexMetaData.numberOfReplicas() > 2) {\n- requiredAllocation = ((1 + indexMetaData.numberOfReplicas()) / 2);\n- }\n- } else if (\"one\".equals(initialShards)) {\n- requiredAllocation = 1;\n- } else if (\"full\".equals(initialShards) || \"all\".equals(initialShards)) {\n- requiredAllocation = indexMetaData.numberOfReplicas() + 1;\n- } else if (\"full-1\".equals(initialShards) || \"all-1\".equals(initialShards)) {\n- if (indexMetaData.numberOfReplicas() > 1) {\n- requiredAllocation = indexMetaData.numberOfReplicas();\n+ // if we restore from a repository one copy is more then enough\n+ if (shard.restoreSource() == null) {\n+ try {\n+ IndexMetaData indexMetaData = routingNodes.metaData().index(shard.index());\n+ String initialShards = indexMetaData.settings().get(INDEX_RECOVERY_INITIAL_SHARDS, settings.get(INDEX_RECOVERY_INITIAL_SHARDS, this.initialShards));\n+ if (\"quorum\".equals(initialShards)) {\n+ if (indexMetaData.numberOfReplicas() > 1) {\n+ requiredAllocation = ((1 + indexMetaData.numberOfReplicas()) / 2) + 1;\n+ }\n+ } else if (\"quorum-1\".equals(initialShards) || \"half\".equals(initialShards)) {\n+ if (indexMetaData.numberOfReplicas() > 2) {\n+ requiredAllocation = ((1 + indexMetaData.numberOfReplicas()) / 2);\n+ }\n+ } else if (\"one\".equals(initialShards)) {\n+ requiredAllocation = 1;\n+ } else if (\"full\".equals(initialShards) || \"all\".equals(initialShards)) {\n+ requiredAllocation = indexMetaData.numberOfReplicas() + 1;\n+ } else if (\"full-1\".equals(initialShards) || \"all-1\".equals(initialShards)) {\n+ if (indexMetaData.numberOfReplicas() > 1) {\n+ requiredAllocation = indexMetaData.numberOfReplicas();\n+ }\n+ } else {\n+ requiredAllocation = Integer.parseInt(initialShards);\n }\n- } else {\n- requiredAllocation = Integer.parseInt(initialShards);\n+ } catch (Exception e) {\n+ logger.warn(\"[{}][{}] failed to derived initial_shards from value {}, ignore allocation for {}\", shard.index(), shard.id(), initialShards, shard);\n }\n- } catch (Exception e) {\n- logger.warn(\"[{}][{}] failed to derived initial_shards from value {}, ignore allocation for {}\", shard.index(), shard.id(), initialShards, shard);\n }\n \n // not enough found for this shard, continue...\n if (numberOfAllocationsFound < requiredAllocation) {\n- // we can't really allocate, so ignore it and continue\n- unassignedIterator.remove();\n- routingNodes.ignoredUnassigned().add(shard);\n- if (logger.isDebugEnabled()) {\n- logger.debug(\"[{}][{}]: not allocating, number_of_allocated_shards_found [{}], required_number [{}]\", shard.index(), shard.id(), numberOfAllocationsFound, 
requiredAllocation);\n+ // if we are restoring this shard we still can allocate\n+ if (shard.restoreSource() == null) {\n+ // we can't really allocate, so ignore it and continue\n+ unassignedIterator.remove();\n+ routingNodes.ignoredUnassigned().add(shard);\n+ if (logger.isDebugEnabled()) {\n+ logger.debug(\"[{}][{}]: not allocating, number_of_allocated_shards_found [{}], required_number [{}]\", shard.index(), shard.id(), numberOfAllocationsFound, requiredAllocation);\n+ }\n+ } else if (logger.isDebugEnabled()) {\n+ logger.debug(\"[{}][{}]: missing local data, will restore from [{}]\", shard.index(), shard.id(), shard.restoreSource());\n }\n continue;\n }", "filename": "src/main/java/org/elasticsearch/gateway/local/LocalGatewayAllocator.java", "status": "modified" }, { "diff": "@@ -19,6 +19,8 @@\n \n package org.elasticsearch.snapshots;\n \n+import com.carrotsearch.hppc.IntOpenHashSet;\n+import com.carrotsearch.hppc.IntSet;\n import com.carrotsearch.randomizedtesting.LifecycleScope;\n import com.google.common.base.Predicate;\n import com.google.common.collect.ImmutableList;\n@@ -32,6 +34,7 @@\n import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;\n import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotStatus;\n import org.elasticsearch.action.admin.cluster.snapshots.status.SnapshotsStatusResponse;\n+import org.elasticsearch.action.admin.indices.recovery.ShardRecoveryResponse;\n import org.elasticsearch.client.Client;\n import org.elasticsearch.cluster.ClusterService;\n import org.elasticsearch.cluster.ClusterState;\n@@ -48,8 +51,10 @@\n import org.elasticsearch.common.xcontent.XContentBuilder;\n import org.elasticsearch.common.xcontent.XContentParser;\n import org.elasticsearch.index.store.support.AbstractIndexStore;\n+import org.elasticsearch.node.internal.InternalNode;\n import org.elasticsearch.repositories.RepositoryMissingException;\n import org.elasticsearch.snapshots.mockstore.MockRepositoryModule;\n+import org.elasticsearch.test.InternalTestCluster;\n import org.elasticsearch.test.junit.annotations.TestLogging;\n import org.elasticsearch.threadpool.ThreadPool;\n import org.junit.Ignore;\n@@ -190,10 +195,10 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n ClusterState clusterState = client.admin().cluster().prepareState().get().getState();\n logger.info(\"Cluster state: {}\", clusterState);\n MetaData metaData = clusterState.getMetaData();\n- assertThat(((SnapshottableMetadata)metaData.custom(SnapshottableMetadata.TYPE)).getData(), equalTo(\"before_snapshot_s\"));\n- assertThat(((NonSnapshottableMetadata)metaData.custom(NonSnapshottableMetadata.TYPE)).getData(), equalTo(\"after_snapshot_ns\"));\n- assertThat(((SnapshottableGatewayMetadata)metaData.custom(SnapshottableGatewayMetadata.TYPE)).getData(), equalTo(\"before_snapshot_s_gw\"));\n- assertThat(((NonSnapshottableGatewayMetadata)metaData.custom(NonSnapshottableGatewayMetadata.TYPE)).getData(), equalTo(\"after_snapshot_ns_gw\"));\n+ assertThat(((SnapshottableMetadata) metaData.custom(SnapshottableMetadata.TYPE)).getData(), equalTo(\"before_snapshot_s\"));\n+ assertThat(((NonSnapshottableMetadata) metaData.custom(NonSnapshottableMetadata.TYPE)).getData(), equalTo(\"after_snapshot_ns\"));\n+ assertThat(((SnapshottableGatewayMetadata) metaData.custom(SnapshottableGatewayMetadata.TYPE)).getData(), equalTo(\"before_snapshot_s_gw\"));\n+ assertThat(((NonSnapshottableGatewayMetadata) metaData.custom(NonSnapshottableGatewayMetadata.TYPE)).getData(), 
equalTo(\"after_snapshot_ns_gw\"));\n \n logger.info(\"--> restart all nodes\");\n internalCluster().fullRestart();\n@@ -205,13 +210,13 @@ public ClusterState execute(ClusterState currentState) throws Exception {\n metaData = clusterState.getMetaData();\n assertThat(metaData.custom(SnapshottableMetadata.TYPE), nullValue());\n assertThat(metaData.custom(NonSnapshottableMetadata.TYPE), nullValue());\n- assertThat(((SnapshottableGatewayMetadata)metaData.custom(SnapshottableGatewayMetadata.TYPE)).getData(), equalTo(\"before_snapshot_s_gw\"));\n- assertThat(((NonSnapshottableGatewayMetadata)metaData.custom(NonSnapshottableGatewayMetadata.TYPE)).getData(), equalTo(\"after_snapshot_ns_gw\"));\n+ assertThat(((SnapshottableGatewayMetadata) metaData.custom(SnapshottableGatewayMetadata.TYPE)).getData(), equalTo(\"before_snapshot_s_gw\"));\n+ assertThat(((NonSnapshottableGatewayMetadata) metaData.custom(NonSnapshottableGatewayMetadata.TYPE)).getData(), equalTo(\"after_snapshot_ns_gw\"));\n // Shouldn't be returned as part of API response\n assertThat(metaData.custom(SnapshotableGatewayNoApiMetadata.TYPE), nullValue());\n // But should still be in state\n metaData = internalCluster().getInstance(ClusterService.class).state().metaData();\n- assertThat(((SnapshotableGatewayNoApiMetadata)metaData.custom(SnapshotableGatewayNoApiMetadata.TYPE)).getData(), equalTo(\"before_snapshot_s_gw_noapi\"));\n+ assertThat(((SnapshotableGatewayNoApiMetadata) metaData.custom(SnapshotableGatewayNoApiMetadata.TYPE)).getData(), equalTo(\"before_snapshot_s_gw_noapi\"));\n }\n \n private void updateClusterState(final ClusterStateUpdater updater) throws InterruptedException {\n@@ -237,7 +242,7 @@ public void clusterStateProcessed(String source, ClusterState oldState, ClusterS\n }\n \n private static interface ClusterStateUpdater {\n- public ClusterState execute(ClusterState currentState) throws Exception;\n+ public ClusterState execute(ClusterState currentState) throws Exception;\n }\n \n @Test\n@@ -489,6 +494,64 @@ public boolean apply(Object o) {\n assertThat(client().prepareCount(\"test-idx-some\").get().getCount(), allOf(greaterThan(0L), lessThan(100L)));\n }\n \n+ @Test\n+ @TestLogging(\"indices.recovery:TRACE,index.gateway:TRACE,gateway:TRACE\")\n+ public void restoreIndexWithShardsMissingInLocalGateway() throws Exception {\n+ logger.info(\"--> start 2 nodes\");\n+ //NO COMMIT: remove HTTP_ENABLED\n+ internalCluster().startNode(settingsBuilder().put(\"gateway.type\", \"local\").put(InternalNode.HTTP_ENABLED, true));\n+ internalCluster().startNode(settingsBuilder().put(\"gateway.type\", \"local\").put(InternalNode.HTTP_ENABLED, true));\n+ cluster().wipeIndices(\"_all\");\n+\n+ logger.info(\"--> create repository\");\n+ PutRepositoryResponse putRepositoryResponse = client().admin().cluster().preparePutRepository(\"test-repo\")\n+ .setType(\"fs\").setSettings(ImmutableSettings.settingsBuilder().put(\"location\", newTempDir())).execute().actionGet();\n+ assertThat(putRepositoryResponse.isAcknowledged(), equalTo(true));\n+ int numberOfShards = 6;\n+ logger.info(\"--> create an index that will have some unallocated shards\");\n+ assertAcked(prepareCreate(\"test-idx\", 2, settingsBuilder().put(\"number_of_shards\", numberOfShards)\n+ .put(\"number_of_replicas\", 0)));\n+ ensureGreen();\n+\n+ logger.info(\"--> indexing some data into test-idx\");\n+ for (int i = 0; i < 100; i++) {\n+ index(\"test-idx\", \"doc\", Integer.toString(i), \"foo\", \"bar\" + i);\n+ }\n+ refresh();\n+ 
assertThat(client().prepareCount(\"test-idx\").get().getCount(), equalTo(100L));\n+\n+ logger.info(\"--> start snapshot\");\n+ assertThat(client().admin().cluster().prepareCreateSnapshot(\"test-repo\", \"test-snap-1\").setIndices(\"test-idx\").setWaitForCompletion(true).get().getSnapshotInfo().state(), equalTo(SnapshotState.SUCCESS));\n+\n+ logger.info(\"--> close the index\");\n+ assertAcked(client().admin().indices().prepareClose(\"test-idx\"));\n+\n+ logger.info(\"--> shutdown one of the nodes that should make half of the shards unavailable\");\n+ internalCluster().restartRandomDataNode(new InternalTestCluster.RestartCallback() {\n+ @Override\n+ public boolean clearData(String nodeName) {\n+ return true;\n+ }\n+ });\n+\n+ assertThat(client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setTimeout(\"1m\").setWaitForNodes(\"2\").execute().actionGet().isTimedOut(), equalTo(false));\n+\n+ logger.info(\"--> restore index snapshot\");\n+ assertThat(client().admin().cluster().prepareRestoreSnapshot(\"test-repo\", \"test-snap-1\").setRestoreGlobalState(false).setWaitForCompletion(true).get().getRestoreInfo().successfulShards(), equalTo(6));\n+\n+ ensureGreen(\"test-idx\");\n+ assertThat(client().prepareCount(\"test-idx\").get().getCount(), equalTo(100L));\n+\n+ IntSet reusedShards = IntOpenHashSet.newInstance();\n+ for (ShardRecoveryResponse response : client().admin().indices().prepareRecoveries(\"test-idx\").get().shardResponses().get(\"test-idx\")) {\n+ if (response.recoveryState().getIndex().reusedByteCount() > 0) {\n+ reusedShards.add(response.getShardId());\n+ }\n+ }\n+ logger.info(\"--> check that at least half of the shards had some reuse: [{}]\", reusedShards);\n+ assertThat(reusedShards.size(), greaterThanOrEqualTo(numberOfShards/2));\n+ }\n+\n @Test\n @TestLogging(\"snapshots:TRACE,repositories:TRACE\")\n @Ignore\n@@ -662,7 +725,7 @@ public static abstract class TestCustomMetaDataFactory<T extends TestCustomMetaD\n \n @Override\n public T readFrom(StreamInput in) throws IOException {\n- return (T)newTestCustomMetaData(in.readString());\n+ return (T) newTestCustomMetaData(in.readString());\n }\n \n @Override\n@@ -692,7 +755,7 @@ public T fromXContent(XContentParser parser) throws IOException {\n if (data == null) {\n throw new ElasticsearchParseException(\"failed to parse snapshottable metadata, data not found\");\n }\n- return (T)newTestCustomMetaData(data);\n+ return (T) newTestCustomMetaData(data);\n }\n \n @Override", "filename": "src/test/java/org/elasticsearch/snapshots/DedicatedClusterSnapshotRestoreTests.java", "status": "modified" } ] }
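The core of the change is the `requiredAllocation` computation plus the new `restoreSource` escape hatch: a shard being restored from a repository no longer needs any local copy to be found before it can be allocated. The helper below restates just the quorum arithmetic from the diff in stand-alone form; the class and method names are invented, and unlike the real code (which reads the `initial_shards` setting and logs a warning on bad input) it simply throws on an unparseable value.

```java
/**
 * Stand-alone version of the "initial_shards" arithmetic from the diff above:
 * how many copies of a shard must be found locally before it may be allocated.
 * (For a shard coming from a restore source, the patch skips this check entirely.)
 */
public class InitialShardsPolicy {

    public static int requiredAllocation(String initialShards, int numberOfReplicas) {
        switch (initialShards) {
            case "quorum":
                return numberOfReplicas > 1 ? ((1 + numberOfReplicas) / 2) + 1 : 1;
            case "quorum-1":
            case "half":
                return numberOfReplicas > 2 ? (1 + numberOfReplicas) / 2 : 1;
            case "one":
                return 1;
            case "full":
            case "all":
                return numberOfReplicas + 1;
            case "full-1":
            case "all-1":
                return numberOfReplicas > 1 ? numberOfReplicas : 1;
            default:
                return Integer.parseInt(initialShards);
        }
    }

    public static void main(String[] args) {
        // e.g. 1 primary + 2 replicas: a quorum needs 2 of the 3 copies
        System.out.println(requiredAllocation("quorum", 2)); // 2
        System.out.println(requiredAllocation("all", 2));    // 3
    }
}
```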
{ "body": "hi there,\n\nwe have successfully been using integer representations of javascript Date objects for the \"seed\" property of the \"random_function\" scoring function, i.e. output of the following function:\n\n```\nfunction seedval(max_age) \n{\n var ma = max_age || 60000;\n return Math.ceil(+new Date() / ma) * ma;\n}\n```\n\nwhich results in numbers like:\n\n> 1414573200000\n\nyesterday a colleague built and installed ES from source using the 1.x branch and stumbled upon the following error (it was definitely referring to the value set for \"random_score.seed\"!)\n\n```\nJsonParseException[Numeric value (1414573200000) out of range of int\n```\n\n> NB: I am not looking for a fix as such (we decided to simply divide our generated seed values by 1000 thereby solving our immediate problem) \n\nI am creating this issue because it just might be something that you would consider to be a regression, I have no idea which of the following might be true ... so by all means close this issue if you consider it to be irrelevant :) \n1. we are running ES as a 32bit process (not the case AFAICT)\n2. this is an intentional change/fix\n3. this is an unintentional change/bug - possibly related to upgrading (or introducing?) the Jackson lib, from which the parse error seems to originate.\n\nhope this is of some use, thanks for reading!\n", "comments": [ { "body": "@rjernst could you take a look at this please? Looks like https://github.com/elasticsearch/elasticsearch/pull/7446/files changed from long to int.\n", "created_at": "2014-10-29T10:57:50Z" }, { "body": "I agree it would be nice to fix it as it is common to provide 64-bits values as seeds (eg. timestamps).\n\nMaybe we could internally translate the 64-bits to 32-bits (since we only use 32-bits in practice) in a `Long.hashCode` fashion? Or maybe we could even allow the seed to be any string and internally use the 32-bits hash of this string as a seed. This could potentially be more user-friendly by allowing users to directly provide things like a session id?\n", "created_at": "2014-10-31T09:26:36Z" } ], "number": 8267, "title": "\"random_score\", acceptable values for \"seed\" changed between 1.3 and 1.x" }
{ "body": "closes #8267\n", "number": 8311, "review_comments": [ { "body": "this javadoc is wrong\n", "created_at": "2014-10-31T16:08:58Z" }, { "body": "maybe we can just make this `Number` and we try to parse the string?\n", "created_at": "2014-10-31T16:09:16Z" }, { "body": "I think you should just do `instanceof Number` and else call `.toString()`\n", "created_at": "2014-10-31T16:09:38Z" }, { "body": "nevermind I saw the usage \n", "created_at": "2014-10-31T16:10:24Z" }, { "body": "I was trying to satisfy the possibility for a session id (say alphanum + dashes).\n", "created_at": "2014-10-31T16:11:25Z" }, { "body": "I modified the comment after you replied ^^\n", "created_at": "2014-10-31T16:12:10Z" }, { "body": "Fixed.\n", "created_at": "2014-10-31T16:14:07Z" }, { "body": "@rjernst can you fix this ^^\n", "created_at": "2014-11-03T11:26:46Z" }, { "body": "Oops, sorry, I misunderstood. Fixed.\n", "created_at": "2014-11-03T14:56:30Z" } ], "title": "FunctionScore: RandomScoreFunction now accepts long, as well a strings." }
{ "commits": [ { "message": "FunctionScore: RandomScoreFunction now accepts long, as well a strings.\n\ncloses #8267" }, { "message": "Fix javadoc" }, { "message": "Simplify conditions for serializing seed parameter." } ], "files": [ { "diff": "@@ -80,6 +80,14 @@ public static FactorBuilder factorFunction(float boost) {\n public static RandomScoreFunctionBuilder randomFunction(int seed) {\n return (new RandomScoreFunctionBuilder()).seed(seed);\n }\n+\n+ public static RandomScoreFunctionBuilder randomFunction(long seed) {\n+ return (new RandomScoreFunctionBuilder()).seed(seed);\n+ }\n+\n+ public static RandomScoreFunctionBuilder randomFunction(String seed) {\n+ return (new RandomScoreFunctionBuilder()).seed(seed);\n+ }\n \n public static WeightBuilder weightFactorFunction(float weight) {\n return (WeightBuilder)(new WeightBuilder().setWeight(weight));", "filename": "src/main/java/org/elasticsearch/index/query/functionscore/ScoreFunctionBuilders.java", "status": "modified" }, { "diff": "@@ -28,7 +28,7 @@\n */\n public class RandomScoreFunctionBuilder extends ScoreFunctionBuilder {\n \n- private Integer seed = null;\n+ private Object seed = null;\n \n public RandomScoreFunctionBuilder() {\n }\n@@ -49,11 +49,31 @@ public RandomScoreFunctionBuilder seed(int seed) {\n return this;\n }\n \n+ /**\n+ * seed variant taking a long value.\n+ * @see {@link #seed(int)}\n+ */\n+ public RandomScoreFunctionBuilder seed(long seed) {\n+ this.seed = seed;\n+ return this;\n+ }\n+\n+ /**\n+ * seed variant taking a String value.\n+ * @see {@link #seed(int)}\n+ */\n+ public RandomScoreFunctionBuilder seed(String seed) {\n+ this.seed = seed;\n+ return this;\n+ }\n+\n @Override\n public void doXContent(XContentBuilder builder, Params params) throws IOException {\n builder.startObject(getName());\n- if (seed != null) {\n- builder.field(\"seed\", seed.intValue());\n+ if (seed instanceof Number) {\n+ builder.field(\"seed\", ((Number)seed).longValue());\n+ } else if (seed != null) {\n+ builder.field(\"seed\", seed.toString());\n }\n builder.endObject();\n }", "filename": "src/main/java/org/elasticsearch/index/query/functionscore/random/RandomScoreFunctionBuilder.java", "status": "modified" }, { "diff": "@@ -20,6 +20,7 @@\n \n package org.elasticsearch.index.query.functionscore.random;\n \n+import com.google.common.primitives.Longs;\n import org.elasticsearch.common.inject.Inject;\n import org.elasticsearch.common.lucene.search.function.RandomScoreFunction;\n import org.elasticsearch.common.lucene.search.function.ScoreFunction;\n@@ -59,7 +60,19 @@ public ScoreFunction parse(QueryParseContext parseContext, XContentParser parser\n currentFieldName = parser.currentName();\n } else if (token.isValue()) {\n if (\"seed\".equals(currentFieldName)) {\n- seed = parser.intValue();\n+ if (token == XContentParser.Token.VALUE_NUMBER) {\n+ if (parser.numberType() == XContentParser.NumberType.INT) {\n+ seed = parser.intValue();\n+ } else if (parser.numberType() == XContentParser.NumberType.LONG) {\n+ seed = Longs.hashCode(parser.longValue());\n+ } else {\n+ throw new QueryParsingException(parseContext.index(), \"random_score seed must be an int, long or string, not '\" + token.toString() + \"'\");\n+ }\n+ } else if (token == XContentParser.Token.VALUE_STRING) {\n+ seed = parser.text().hashCode();\n+ } else {\n+ throw new QueryParsingException(parseContext.index(), \"random_score seed must be an int/long or string, not '\" + token.toString() + \"'\");\n+ }\n } else {\n throw new QueryParsingException(parseContext.index(), NAMES[0] + \" 
query does not support [\" + currentFieldName + \"]\");\n }\n@@ -73,7 +86,7 @@ public ScoreFunction parse(QueryParseContext parseContext, XContentParser parser\n }\n \n if (seed == -1) {\n- seed = (int)parseContext.nowInMillis();\n+ seed = Longs.hashCode(parseContext.nowInMillis());\n }\n final ShardId shardId = SearchContext.current().indexShard().shardId();\n final int salt = (shardId.index().name().hashCode() << 10) | shardId.id();", "filename": "src/main/java/org/elasticsearch/index/query/functionscore/random/RandomScoreFunctionParser.java", "status": "modified" }, { "diff": "@@ -25,7 +25,6 @@\n import org.elasticsearch.test.ElasticsearchIntegrationTest;\n import org.hamcrest.CoreMatchers;\n import org.junit.Ignore;\n-import org.junit.Test;\n \n import java.util.Arrays;\n import java.util.Comparator;\n@@ -41,7 +40,6 @@\n \n public class RandomScoreFunctionTests extends ElasticsearchIntegrationTest {\n \n- @Test\n public void testConsistentHitsWithSameSeed() throws Exception {\n createIndex(\"test\");\n ensureGreen(); // make sure we are done otherwise preference could change?\n@@ -103,7 +101,6 @@ public int compare(SearchHit o1, SearchHit o2) {\n }\n }\n \n- @Test\n public void testScoreAccessWithinScript() throws Exception {\n assertAcked(prepareCreate(\"test\")\n .addMapping(\"type\", \"body\", \"type=string\", \"index\", \"type=\" + randomFrom(new String[]{\"short\", \"float\", \"long\", \"integer\", \"double\"})));\n@@ -170,7 +167,6 @@ public void testScoreAccessWithinScript() throws Exception {\n assertThat(firstHit.getScore(), greaterThan(1f));\n }\n \n- @Test\n public void testSeedReportedInExplain() throws Exception {\n createIndex(\"test\");\n ensureGreen();\n@@ -201,7 +197,6 @@ public void testNoDocs() throws Exception {\n assertEquals(0, resp.getHits().totalHits());\n }\n \n- @Test\n public void testScoreRange() throws Exception {\n // all random scores should be in range [0.0, 1.0]\n createIndex(\"test\");\n@@ -227,8 +222,32 @@ public void testScoreRange() throws Exception {\n }\n }\n }\n+ \n+ public void testSeeds() throws Exception {\n+ createIndex(\"test\");\n+ ensureGreen();\n+ final int docCount = randomIntBetween(100, 200);\n+ for (int i = 0; i < docCount; i++) {\n+ index(\"test\", \"type\", \"\" + i, jsonBuilder().startObject().endObject());\n+ }\n+ flushAndRefresh();\n+\n+ assertNoFailures(client().prepareSearch()\n+ .setSize(docCount) // get all docs otherwise we are prone to tie-breaking\n+ .setQuery(functionScoreQuery(matchAllQuery(), randomFunction(randomInt())))\n+ .execute().actionGet());\n+\n+ assertNoFailures(client().prepareSearch()\n+ .setSize(docCount) // get all docs otherwise we are prone to tie-breaking\n+ .setQuery(functionScoreQuery(matchAllQuery(), randomFunction(randomLong())))\n+ .execute().actionGet());\n+\n+ assertNoFailures(client().prepareSearch()\n+ .setSize(docCount) // get all docs otherwise we are prone to tie-breaking\n+ .setQuery(functionScoreQuery(matchAllQuery(), randomFunction(randomRealisticUnicodeOfLengthBetween(10, 20))))\n+ .execute().actionGet());\n+ }\n \n- @Test\n @Ignore\n public void checkDistribution() throws Exception {\n int count = 10000;", "filename": "src/test/java/org/elasticsearch/search/functionscore/RandomScoreFunctionTests.java", "status": "modified" } ] }
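The parser change above reduces every accepted seed form to an `int` before it reaches `RandomScoreFunction`: ints pass through, longs go through Guava's `Longs.hashCode`, and strings use `String.hashCode`. A minimal standalone sketch of that reduction (the class name and sample values are illustrative, not part of the patch):

```java
import com.google.common.primitives.Longs;

public class SeedReductionSketch {

    // Mirrors the parser's behaviour: ints pass through, longs and strings collapse to 32 bits.
    static int reduceSeed(Object seed) {
        if (seed instanceof Integer) {
            return (Integer) seed;
        } else if (seed instanceof Long) {
            return Longs.hashCode((Long) seed);      // upper 32 bits XOR lower 32 bits
        } else if (seed instanceof String) {
            return seed.toString().hashCode();
        }
        throw new IllegalArgumentException("random_score seed must be an int, long or string");
    }

    public static void main(String[] args) {
        System.out.println(reduceSeed(42));                 // unchanged
        System.out.println(reduceSeed(1414546372000L));     // e.g. a millisecond timestamp
        System.out.println(reduceSeed("session-8f3a"));     // e.g. a per-user session key
    }
}
```

The useful property is that the same long or string always hashes to the same int, so a client can keep passing a human-readable seed and still get consistent ordering across requests.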
{ "body": "I looked at the heap dump from #8249 and it looks like the majority of the heap is consumed by filter cache keys (TermFilter in that case, where each term was quite large: ~200 bytes long). There were ~5.5 M such entries, I suspect most of which matching 0 docs.\n\nWe don't track cache key RAM usage today in Elasticsearch, because it's tricky. E.g. Lucene's Filter doesn't implement Accountable, so we can't (easily) ask it how much RAM it's using. In general there can be sharing between Filters so the RAM used across N filters is less than the sum of each, though I'm not sure Elasticsearch does this. Also, ideally we'd avoid over-counting when the same Filter is cached across N segments.\n\nIt's tricky, but I think we need to do something here...\n\nMaybe a separate improvement we could make here is to not bother caching a filter that matched 0 docs? Such filters are usually (?) fast to re-execute...\n", "comments": [ { "body": "> Maybe a separate improvement we could make here is to not bother caching a filter that matched 0 docs? Such filters are usually (?) fast to re-execute...\n\nThat depends on how much effort it takes to come to that conclusion. Say a geo filter with an arc distance, or cached boolean clause etc. Maybe we can make it filter dependent like a term filter with 0 hits is not cached but others like Range Terms might still be worth it? I'm not sure we need this complexity... \n", "created_at": "2014-10-29T11:51:19Z" }, { "body": "What about just limiting the number of filters that can be added to the cache?\n", "created_at": "2014-10-29T12:07:08Z" }, { "body": "Agreed with Clinton. I think part of the bug is that a filter cache should never reach 5.5M entries: it only makes sense to cache filters if they are going to be reused, otherwise the caching logic is going to add overhead by consuming all documents (as opposed to using skipping in the case of a conjunction) and increasing memory pressure by promoting object to the old gen.\n\nSo maybe the fix is to allow to configure an absolute maximum size on the filter cache (with a reasonable value) and to cache filters less aggressively?\n", "created_at": "2014-10-29T12:11:46Z" }, { "body": "+1 for a simple size limit. I think Solr (example solrconfig.xml) \"defaults\" to 512...\n", "created_at": "2014-10-29T20:23:13Z" }, { "body": "512 seems awful low to me. Is this number per-segment? So if you have one filter and 20 segments, you'd have 20 entries? Remember also that we have multiple shards per node.\n\nI think we need to choose a big number, which still allows for a lot of caching, without letting it grow to a ridiculous number like 5 million. Perhaps 50,000?\n", "created_at": "2014-10-30T10:05:01Z" }, { "body": "I think for Solr it's per-index, i.e. after 512 cached filters for the index it starts evicting by default.\n\nI agree this may be low, since we now compactly store the sparse cases ... but still caching is \"supposed\" to be for cases where you expect high re-use of the given filter and the cost to re-generate it is highish.\n\nIn general I think Elasticsearch should be less aggressive about filter caching; I suspect in many cases where we are caching, we are not saving that much time vs. letting the OS take that RAM instead and cache the IO pages Lucene will access to recreate that filter. 
Running a TermFilter when the IO pages are hot should be quite fast.\n\nAnyway 50K seems OK...\n", "created_at": "2014-10-30T10:49:04Z" }, { "body": "@mikemccand take this example:\n\nUsers are only interested in posts from people that they follow (or even 2 degrees - the people followed by the people that they follow). eg there are 100,000 user IDs to filter on.\n\nThese can all be put into a `terms` query, potentially with a short custom cache key. The first execution takes a bit of time, but the filter remains valid for the rest of the user's session. This represents a huge saving in execution time. It is quite feasible that a big website could have 100k user sessions live at the same time.\n", "created_at": "2014-10-30T11:17:15Z" }, { "body": "I think 512 is actually overly large. _way over the top_\n\nIMO: we should only cache filters that:\n- are slow (stuff like wildcards and ranges, but not individual terms)\n- clear evidence of reuse (e.g. not the first time we have seen the filter)\n- being intersected with dense queries (where the risk-reward tradeoff is more clear)\n\nRetrieving every single matching document for a filter to put it into a bitset is extremely slow. Most times, its better to just intersect it on the fly. So this is a huge risk, and we should only do it when the reward is clear: and the reward is tiny unless the criteria above are being met.\n\nI'm not interested in seeing flawed benchmarks around this within current elasticsearch, because it bogusly does this slow way pretty much all the time, even when not caching. So of course mechanisms like caching and bulk bitset intersection will always look like a huge win with the current code. \n", "created_at": "2014-10-30T12:14:26Z" }, { "body": "@mikemccand what's the status of this issue?\n", "created_at": "2015-05-28T21:06:49Z" }, { "body": "This is bounded to 100k currently in master. I don't much like this number, its ridiculously large, but other changes address the real problems (overcaching). For example only caching on large segments and only when reuse has been noted and so on. \n", "created_at": "2015-05-28T21:21:49Z" }, { "body": "I don't think this issue is relevant anymore now that we are on the lucene query cache:\n- keys are taken into account (the result of ramBytesUsed() if the query implements Accountable and a constant otherwise)\n- the cache has a limit on the number of filters that can be added to the cache to limit issues that would be caused by ram usage of keys being underestimated\n- also we require that segments have at least 10000 docs for caching so it should be less likely to have cache keys that are much larger than the values than before\n", "created_at": "2015-05-28T21:23:52Z" }, { "body": "Robert, you are right it's 100k currently, nor 10k. I agree we should reduce this number to a more reasonable value.\n", "created_at": "2015-05-28T21:29:00Z" } ], "number": 8268, "title": "Core: filter cache heap usage should include RAM used by the cache keys (Filter)" }
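Several comments above converge on a simpler mitigation than weighing the keys: put an absolute cap on the number of cached filters. A hedged sketch of that option using Guava's cache, with the 50,000 figure floated in the discussion (the class and key type are illustrative only, not Elasticsearch code):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import org.apache.lucene.search.DocIdSet;

public class EntryCappedFilterCacheSketch {

    // A hard entry cap bounds the memory held by cache *keys* even when the cached
    // values are tiny (e.g. filters matching no documents), at the cost of earlier evictions.
    private final Cache<Object, DocIdSet> cache = CacheBuilder.newBuilder()
            .maximumSize(50_000)
            .build();

    public DocIdSet get(Object filterKey) {
        return cache.getIfPresent(filterKey);
    }

    public void put(Object filterKey, DocIdSet cachedDocs) {
        cache.put(filterKey, cachedDocs);
    }
}
```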
{ "body": "This changes the weighing function for the filter cache to use a\nconfigurable minimum weight for each filter cached. This value defaults\nto 1kb and can be configured with the\n`indices.cache.filter.minimum_entry_weight` setting.\n\nThis also fixes an issue with the filter cache where the concurrency\nlevel of the cache was exposed as a setting, but not used in cache\nconstruction.\n\nRelates to #8268\nFixes #8249\n", "number": 8304, "review_comments": [ { "body": "Is it needed because of this change or is it unrelated?\n", "created_at": "2014-10-31T10:26:03Z" }, { "body": "Not needed, but always good to actually clean the cache when closing, happy to remove it if you'd like\n", "created_at": "2014-10-31T10:30:48Z" }, { "body": "I was just wondering if there could be bad side-effects such as causing some operations to take longer.\n", "created_at": "2014-10-31T11:52:11Z" }, { "body": "There aren't any bad side-effects, without this, if the cache were closed and everything was invalidated, they would not be immediately removed, this just immediately removes them.\n", "created_at": "2014-10-31T11:56:17Z" } ], "title": "Use a 1024 byte minimum weight for filter cache entries" }
{ "commits": [ { "message": "Use a 1024 byte minimum weight for filter cache entries\n\nThis changes the weighing function for the filter cache to use a\nconfigurable minimum weight for each filter cached. This value defaults\nto 1kb and can be configured with the\n`indices.cache.filter.minimum_entry_weight` setting.\n\nThis also fixes an issue with the filter cache where the concurrency\nlevel of the cache was exposed as a setting, but not used in cache\nconstruction.\n\nRelates to #8268" } ], "files": [ { "diff": "@@ -209,12 +209,19 @@ public int hashCode() {\n }\n \n \n+ /** A weigher for the Guava filter cache that uses a minimum entry size */\n public static class FilterCacheValueWeigher implements Weigher<WeightedFilterCache.FilterCacheKey, DocIdSet> {\n \n+ private final int minimumEntrySize;\n+\n+ public FilterCacheValueWeigher(int minimumEntrySize) {\n+ this.minimumEntrySize = minimumEntrySize;\n+ }\n+\n @Override\n public int weigh(FilterCacheKey key, DocIdSet value) {\n int weight = (int) Math.min(DocIdSets.sizeInBytes(value), Integer.MAX_VALUE);\n- return weight == 0 ? 1 : weight;\n+ return Math.max(weight, this.minimumEntrySize);\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/index/cache/filter/weighted/WeightedFilterCache.java", "status": "modified" }, { "diff": "@@ -55,29 +55,33 @@ public class IndicesFilterCache extends AbstractComponent implements RemovalList\n private volatile int concurrencyLevel;\n \n private final TimeValue cleanInterval;\n+ private final int minimumEntryWeight;\n \n private final Set<Object> readersKeysToClean = ConcurrentCollections.newConcurrentSet();\n \n private volatile boolean closed;\n \n-\n public static final String INDICES_CACHE_FILTER_SIZE = \"indices.cache.filter.size\";\n public static final String INDICES_CACHE_FILTER_EXPIRE = \"indices.cache.filter.expire\";\n public static final String INDICES_CACHE_FILTER_CONCURRENCY_LEVEL = \"indices.cache.filter.concurrency_level\";\n+ public static final String INDICES_CACHE_FILTER_CLEAN_INTERVAL = \"indices.cache.filter.clean_interval\";\n+ public static final String INDICES_CACHE_FILTER_MINIMUM_ENTRY_WEIGHT = \"indices.cache.filter.minimum_entry_weight\";\n \n class ApplySettings implements NodeSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n boolean replace = false;\n String size = settings.get(INDICES_CACHE_FILTER_SIZE, IndicesFilterCache.this.size);\n if (!size.equals(IndicesFilterCache.this.size)) {\n- logger.info(\"updating [indices.cache.filter.size] from [{}] to [{}]\", IndicesFilterCache.this.size, size);\n+ logger.info(\"updating [{}] from [{}] to [{}]\",\n+ INDICES_CACHE_FILTER_SIZE, IndicesFilterCache.this.size, size);\n IndicesFilterCache.this.size = size;\n replace = true;\n }\n TimeValue expire = settings.getAsTime(INDICES_CACHE_FILTER_EXPIRE, IndicesFilterCache.this.expire);\n if (!Objects.equal(expire, IndicesFilterCache.this.expire)) {\n- logger.info(\"updating [indices.cache.filter.expire] from [{}] to [{}]\", IndicesFilterCache.this.expire, expire);\n+ logger.info(\"updating [{}] from [{}] to [{}]\",\n+ INDICES_CACHE_FILTER_EXPIRE, IndicesFilterCache.this.expire, expire);\n IndicesFilterCache.this.expire = expire;\n replace = true;\n }\n@@ -86,7 +90,8 @@ public void onRefreshSettings(Settings settings) {\n throw new ElasticsearchIllegalArgumentException(\"concurrency_level must be > 0 but was: \" + concurrencyLevel);\n }\n if (!Objects.equal(concurrencyLevel, IndicesFilterCache.this.concurrencyLevel)) {\n- 
logger.info(\"updating [indices.cache.filter.concurrency_level] from [{}] to [{}]\", IndicesFilterCache.this.concurrencyLevel, concurrencyLevel);\n+ logger.info(\"updating [{}] from [{}] to [{}]\",\n+ INDICES_CACHE_FILTER_CONCURRENCY_LEVEL, IndicesFilterCache.this.concurrencyLevel, concurrencyLevel);\n IndicesFilterCache.this.concurrencyLevel = concurrencyLevel;\n replace = true;\n }\n@@ -103,9 +108,14 @@ public void onRefreshSettings(Settings settings) {\n public IndicesFilterCache(Settings settings, ThreadPool threadPool, NodeSettingsService nodeSettingsService) {\n super(settings);\n this.threadPool = threadPool;\n- this.size = componentSettings.get(\"size\", \"10%\");\n- this.expire = componentSettings.getAsTime(\"expire\", null);\n- this.cleanInterval = componentSettings.getAsTime(\"clean_interval\", TimeValue.timeValueSeconds(60));\n+ this.size = settings.get(INDICES_CACHE_FILTER_SIZE, \"10%\");\n+ this.expire = settings.getAsTime(INDICES_CACHE_FILTER_EXPIRE, null);\n+ this.minimumEntryWeight = settings.getAsInt(INDICES_CACHE_FILTER_MINIMUM_ENTRY_WEIGHT, 1024); // 1k per entry minimum\n+ if (minimumEntryWeight <= 0) {\n+ throw new ElasticsearchIllegalArgumentException(\"minimum_entry_weight must be > 0 but was: \" + minimumEntryWeight);\n+ }\n+ this.cleanInterval = settings.getAsTime(INDICES_CACHE_FILTER_CLEAN_INTERVAL, TimeValue.timeValueSeconds(60));\n+ // defaults to 4, but this is a busy map for all indices, increase it a bit\n this.concurrencyLevel = settings.getAsInt(INDICES_CACHE_FILTER_CONCURRENCY_LEVEL, 16);\n if (concurrencyLevel <= 0) {\n throw new ElasticsearchIllegalArgumentException(\"concurrency_level must be > 0 but was: \" + concurrencyLevel);\n@@ -122,10 +132,9 @@ public IndicesFilterCache(Settings settings, ThreadPool threadPool, NodeSettings\n private void buildCache() {\n CacheBuilder<WeightedFilterCache.FilterCacheKey, DocIdSet> cacheBuilder = CacheBuilder.newBuilder()\n .removalListener(this)\n- .maximumWeight(sizeInBytes).weigher(new WeightedFilterCache.FilterCacheValueWeigher());\n+ .maximumWeight(sizeInBytes).weigher(new WeightedFilterCache.FilterCacheValueWeigher(minimumEntryWeight));\n \n- // defaults to 4, but this is a busy map for all indices, increase it a bit\n- cacheBuilder.concurrencyLevel(16);\n+ cacheBuilder.concurrencyLevel(this.concurrencyLevel);\n \n if (expire != null) {\n cacheBuilder.expireAfterAccess(expire.millis(), TimeUnit.MILLISECONDS);\n@@ -145,6 +154,7 @@ public void addReaderKeyToClean(Object readerKey) {\n public void close() {\n closed = true;\n cache.invalidateAll();\n+ cache.cleanUp();\n }\n \n public Cache<WeightedFilterCache.FilterCacheKey, DocIdSet> cache() {", "filename": "src/main/java/org/elasticsearch/indices/cache/filter/IndicesFilterCache.java", "status": "modified" } ] }
{ "body": "I have upgraded from v1.0.0 to 1.3.4 and now see the heap used grow from 60 to 99% over 7 days and crashes ES. When hits 70% heap used see 100% on one CPU at a time. ES is working fine, just appears to be a memory leak or failing to complete GC. Can I safely downgrade to v1.3.1 to see if fixes the problem? Currently have 55M documents, 23GB size.\n\nThanks, John.\n", "comments": [ { "body": "@johnc10uk Few questions:\n- what custom settings are you using?\n- what is your `ES_HEAP_SIZE`\n- do you have swap disabled?\n\ncan you send the output of these two requests:\n\n```\ncurl localhost:9200/_nodes > nodes.json\ncurl localhost:9200/_nodes/stats?fields=* > nodes_stats.json\n```\n", "created_at": "2014-10-28T10:45:38Z" }, { "body": "No custom settings, 8.8GB heap and no swap. Again, this was working fine at 1.0.0 .\n\n<script src=\"https://gist.github.com/johnc10uk/441b58a6181bc07af4c1.js\"></script>\n", "created_at": "2014-10-28T11:04:32Z" }, { "body": "Files https://gist.github.com/johnc10uk/441b58a6181bc07af4c1\n", "created_at": "2014-10-28T11:05:36Z" }, { "body": "Tried doubling RAM to 16GB but just lasts longer before filling heap. Upgraded Java from 25 to 55, no difference.\n\njava version \"1.7.0_55\"\nJava(TM) SE Runtime Environment (build 1.7.0_55-b13)\nJava HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)\n", "created_at": "2014-10-28T11:12:30Z" }, { "body": "@johnc10uk thanks for the info. Could I ask you to provide this output as well please:\n\n```\ncurl localhost:9200/_nodes/hot_threads > hot_threads.txt\n```\n", "created_at": "2014-10-28T11:33:19Z" }, { "body": "Added to Gist https://gist.github.com/johnc10uk/441b58a6181bc07af4c1\n", "created_at": "2014-10-28T11:39:21Z" }, { "body": "Thanks. Nothing terribly unusual so far. Do you have a heap dump left from when one of the nodes crashed? If not, could you take a heap dump when memory usage is high, and share it with us?\n\nthanks\n", "created_at": "2014-10-28T11:42:15Z" }, { "body": "Link for heap dump: http://stackoverflow.com/questions/17135721/generating-heap-dumps-java-jre7\n\nPlease could you .tar.gz it, and you can share its location with me privately: clinton dot gormley at elasticsearch dot com.\n\nAlso, your mappings and config please.\n", "created_at": "2014-10-28T11:43:31Z" }, { "body": "Had to add -F to get jmap to work \"Unable to open socket file: target process not responding or HotSpot VM not loaded\"\n\njmap has just taken elasticsearch1 out of the cluster and taking forever to dump. Had to kill it, dump only 10Mb. Found out need to run jmap as elasticsearch user. Emailed link to files on S3.\n\nThanks, John.\n", "created_at": "2014-10-28T14:26:03Z" }, { "body": "Ended up with both elasticsearch1 and 3 server processes being restarted to get all in sync after 1 dropped out. However the heap used on 2 also dropped to about 10% and all now at 40% heap used. Some sort of communication failure not completing jobs?\n", "created_at": "2014-10-28T14:39:00Z" }, { "body": "I looked at the heap dump here: it's dominated by the filter cache, which for some reason is not clearing itself. What settings do you have for the filter cache?\n\nIt's filled with many (5.5 million) TermFilter instances, and the Term in each of these is typically very large (~200 bytes).\n", "created_at": "2014-10-28T19:27:29Z" }, { "body": "@johnc10uk when I looked at the stats you sent, filters were only using 0.5GB of memory. 
It looks like you are using the default filter cache settings, which defaults to 10% of heap, ie 0.9GB in your case (see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-cache.html#filter)\n\nPlease could you keep an eye on the output from the following request, and see if the filter cache is growing unbounded:\n\n```\ncurl -XGET \"http://localhost:9200/_nodes/stats/indices/filter_cache?human\"\n```\n\nIn particular, I'd be interested in hearing if the filter cache usage doesn't grow, but memory usage does.\n", "created_at": "2014-10-29T09:52:22Z" }, { "body": "Yes, all defaults. The HQ node diagnostics is currently showing 320Mb filter cache and I don't think it ever reached 880Mb to start any evictions. I'll keep a log of filter cache, it is growing.\n\n:{\"filter_cache\":{\"memory_size\":\"322.3mb\",\"memory_size_in_bytes\":337965460,\"evictions\":0}}}\n\n{\"filter_cache\":{\"memory_size\":\"325.8mb\",\"memory_size_in_bytes\":341677692,\"evictions\":0}}} \n\nThanks, John. \n", "created_at": "2014-10-29T10:09:23Z" }, { "body": "We are expiriencing this, in production, as well since the upgrade to 1.3.4.\nwhen do you have plans to fix this?\n", "created_at": "2014-10-29T12:46:02Z" }, { "body": "We have turned off caching for the TermFilter you identified which should stop the memory creep (it didn't need caching anyway). Restarted each node to clear out heap and will monitor. Thanks for help, very informative. \n", "created_at": "2014-10-29T13:13:08Z" }, { "body": "Hi , \nWe have upgraded our Elasticsearch cluster to version 1.3.4, after the upgrade we see that the heap used grow from 60 to 99% in a few hours, when the Heap reaches 99% some of the machines disconnect from the cluster and reconnect again.\nIn addition , when the Heap used reaches 90% + it does not return to normal.\n\nI have added your esdiagdump.sh script output , so you could check our cluster stats.\nPlease advise ?\n\nhttps://drive.google.com/a/totango.com/file/d/0B17CHQUEl6W8dE15MGVveWlycFk/view?usp=sharing\n\nThanks,\nCostya.\n", "created_at": "2014-10-29T14:19:31Z" }, { "body": "@CostyaRegev you have high filter eviction rates. I'm guessing that you're also using filters with very long keys which don't match any documents?\n", "created_at": "2014-10-29T15:28:15Z" }, { "body": "@clintongormley - as for your question:\nIt is a test environment of us, and we don't query it much. We usually have less than 1 query per second, and at most 20 filter cache evictions per second. Most of the time, there are 0 evictions per second. And yet, the JVM memory % of some nodes hits the high nineties and never go down. The nodes also crash every here and there. BTW, the CPU usage percentage is not that high, something that might indicate a memory leak of some sort.\n\nHere's our Marvel screenshot - \n![screen shot 2014-10-29 at 6 07 02 pm](https://cloud.githubusercontent.com/assets/7669819/4829258/c62475c0-5f85-11e4-9981-2260e0bc631e.png)\n", "created_at": "2014-10-29T16:10:37Z" }, { "body": "20 filter evictions per second is a lot - it indicates that you're caching lots of filters that shouldn't be cached. Also, you didn't answer my question about the length of cache keys. Are you using filters with, eg, many string IDs in them?\n", "created_at": "2014-10-29T16:18:07Z" }, { "body": "Practically, no - not in our test environment. But then again, we have search shard query rate of <0.1 almost always - it's mostly idle. And we have almost no indexing at all all day long (except for a few peaks)... 
Why is all the memory occupied to the point that nodes just crash?\n", "created_at": "2014-10-29T16:32:02Z" }, { "body": "@CostyaRegev Please try this to confirm it is the same problem:\n\n```\ncurl -XPOST 'http://localhost:9200/_cache/clear?filter=true'\n```\n\nWithin one minute of sending the request, you should see your heap size decrease (although it may take a while longer for a GC to kick in).\n\nIf that doesn't work then I suggest taking a heap dump of one of the nodes, and using eg the YourKit profiler to figure out what is taking so much memory.\n", "created_at": "2014-10-29T17:25:04Z" }, { "body": "After trying the sent request , indeed i saw our heap size decrease.\nIt there a workaround for this problem ? When will you fix this problem?\n", "created_at": "2014-10-29T18:07:59Z" }, { "body": "@clintongormley regarding waiting here maybe we should do the same thing we did for FieldData here too in https://github.com/elasticsearch/elasticsearch/commit/65ce5acfb41b4abd0c527aa0e870c2a1076d76cd\n", "created_at": "2014-10-29T19:43:17Z" }, { "body": "@s1monw I've opened #8285 for that.\n\n@CostyaRegev The problem is as follows:\n\nThe filter cache size is the sum of the size of the values in the cache. The size does not take the size of the cache keys into account, because this info is difficult to access.\n\nPreviously, cached filters used 1 bit for every document in the segment, so the sum of the size of the values worked as a reasonable heuristic for the total amount of memory consumed. When the cache filled up with too many values, old filters were evicted.\n\nNow, cached filters which match (almost) all docs or (almost) no docs can be represented in a much more efficient manner, and consume very little memory indeed. The result is that many more of these filters can be cached before triggering evictions. \n\nIf you have:\n- many unique filters\n- with long cache keys (eg `terms` filters on thousands of IDs)\n- that match almost all or almost no docs\n\n...then you will run into this situation, where the cache keys can consume all of your heap, while the sum of the size of the values in the cache is small.\n\nWe are looking at ways to deal with this situation better in #8268. Either we will figure out a way to include the size of cache keys in the calculation (difficult), or we will just limit the number of entries allowed in the cache (easy).\n\nWorkarounds for now:\n- Reduce the size of the filter cache so that evictions are triggered earlier (eg set `indices.cache.filter.size` to 2% instead of the default 10%, see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-cache.html#node-filter)\n- Use a custom (short) `_cache_key` - see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-terms-filter.html#_terms_lookup_twitter_example\n- Disable caching on these filters\n- Manually clear the cache once a day or as often as needed (see http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-clearcache.html#indices-clearcache)\n", "created_at": "2014-10-30T09:53:02Z" }, { "body": "@clintongormley as far as getting the size of the cache keys, is that something that could/should be added to Lucene API?\n", "created_at": "2014-10-30T10:01:40Z" }, { "body": "@johnc10uk no - the caching happens on our side.\n", "created_at": "2014-10-30T11:05:58Z" }, { "body": "ES folks,\n Thanks for the suggestions so far. It helped. However, IMO the problem is much bigger then you think. we have found the following:\n1. 
The ID cache is also having the same problem as Field data and filter cache. we had to stop using all our parent-child queries. This was not a big loss as this model is practically not usable since it is way too slow.\n2. THE MAIN problem is the cluster management. When a node id crashing after OOM cluster management does not recognize that and then:\n3. Calls for the status api return all green\n4. Cluster is not using replicas and queries never return \n\nOur production cluster is crashing every day. The workarounds suggested reduce the number of crashes on our test env and will be implemented in production.\nThe main problem though, is the cluster recovery.\n\nLet us know what other info/help you need in order to fix.\n", "created_at": "2014-11-01T11:45:42Z" }, { "body": "@Yakx it sounds like you have a number of configuration issues, which are unrelated to this thread. Please ask these questions in the mailing list instead: http://elasticsearch.org/community\n", "created_at": "2014-11-01T14:56:05Z" }, { "body": "Closed by #8304\n", "created_at": "2014-11-03T11:26:33Z" } ], "number": 8249, "title": "1.3.4 heap used growing and crashing ES" }
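The workarounds listed above (a smaller cache, short `_cache_key` values, periodic clearing) can also be scripted against the Java API instead of curl. A hedged sketch, assuming the 1.x admin client (client construction omitted; method names are from memory and worth verifying against your client version):

```java
import org.elasticsearch.action.admin.indices.cache.clear.ClearIndicesCacheResponse;
import org.elasticsearch.client.Client;

public class FilterCacheJanitor {

    /** Java-API equivalent of: curl -XPOST 'http://localhost:9200/_cache/clear?filter=true' */
    public static void clearFilterCache(Client client) {
        ClearIndicesCacheResponse response = client.admin().indices()
                .prepareClearCache()        // all indices
                .setFilterCache(true)       // drop cached filters only
                .setFieldDataCache(false)
                .get();
        System.out.println("cleared filter cache on " + response.getSuccessfulShards() + " shards");
    }
}
```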
{ "body": "This changes the weighing function for the filter cache to use a\nconfigurable minimum weight for each filter cached. This value defaults\nto 1kb and can be configured with the\n`indices.cache.filter.minimum_entry_weight` setting.\n\nThis also fixes an issue with the filter cache where the concurrency\nlevel of the cache was exposed as a setting, but not used in cache\nconstruction.\n\nRelates to #8268\nFixes #8249\n", "number": 8304, "review_comments": [ { "body": "Is it needed because of this change or is it unrelated?\n", "created_at": "2014-10-31T10:26:03Z" }, { "body": "Not needed, but always good to actually clean the cache when closing, happy to remove it if you'd like\n", "created_at": "2014-10-31T10:30:48Z" }, { "body": "I was just wondering if there could be bad side-effects such as causing some operations to take longer.\n", "created_at": "2014-10-31T11:52:11Z" }, { "body": "There aren't any bad side-effects, without this, if the cache were closed and everything was invalidated, they would not be immediately removed, this just immediately removes them.\n", "created_at": "2014-10-31T11:56:17Z" } ], "title": "Use a 1024 byte minimum weight for filter cache entries" }
{ "commits": [ { "message": "Use a 1024 byte minimum weight for filter cache entries\n\nThis changes the weighing function for the filter cache to use a\nconfigurable minimum weight for each filter cached. This value defaults\nto 1kb and can be configured with the\n`indices.cache.filter.minimum_entry_weight` setting.\n\nThis also fixes an issue with the filter cache where the concurrency\nlevel of the cache was exposed as a setting, but not used in cache\nconstruction.\n\nRelates to #8268" } ], "files": [ { "diff": "@@ -209,12 +209,19 @@ public int hashCode() {\n }\n \n \n+ /** A weigher for the Guava filter cache that uses a minimum entry size */\n public static class FilterCacheValueWeigher implements Weigher<WeightedFilterCache.FilterCacheKey, DocIdSet> {\n \n+ private final int minimumEntrySize;\n+\n+ public FilterCacheValueWeigher(int minimumEntrySize) {\n+ this.minimumEntrySize = minimumEntrySize;\n+ }\n+\n @Override\n public int weigh(FilterCacheKey key, DocIdSet value) {\n int weight = (int) Math.min(DocIdSets.sizeInBytes(value), Integer.MAX_VALUE);\n- return weight == 0 ? 1 : weight;\n+ return Math.max(weight, this.minimumEntrySize);\n }\n }\n ", "filename": "src/main/java/org/elasticsearch/index/cache/filter/weighted/WeightedFilterCache.java", "status": "modified" }, { "diff": "@@ -55,29 +55,33 @@ public class IndicesFilterCache extends AbstractComponent implements RemovalList\n private volatile int concurrencyLevel;\n \n private final TimeValue cleanInterval;\n+ private final int minimumEntryWeight;\n \n private final Set<Object> readersKeysToClean = ConcurrentCollections.newConcurrentSet();\n \n private volatile boolean closed;\n \n-\n public static final String INDICES_CACHE_FILTER_SIZE = \"indices.cache.filter.size\";\n public static final String INDICES_CACHE_FILTER_EXPIRE = \"indices.cache.filter.expire\";\n public static final String INDICES_CACHE_FILTER_CONCURRENCY_LEVEL = \"indices.cache.filter.concurrency_level\";\n+ public static final String INDICES_CACHE_FILTER_CLEAN_INTERVAL = \"indices.cache.filter.clean_interval\";\n+ public static final String INDICES_CACHE_FILTER_MINIMUM_ENTRY_WEIGHT = \"indices.cache.filter.minimum_entry_weight\";\n \n class ApplySettings implements NodeSettingsService.Listener {\n @Override\n public void onRefreshSettings(Settings settings) {\n boolean replace = false;\n String size = settings.get(INDICES_CACHE_FILTER_SIZE, IndicesFilterCache.this.size);\n if (!size.equals(IndicesFilterCache.this.size)) {\n- logger.info(\"updating [indices.cache.filter.size] from [{}] to [{}]\", IndicesFilterCache.this.size, size);\n+ logger.info(\"updating [{}] from [{}] to [{}]\",\n+ INDICES_CACHE_FILTER_SIZE, IndicesFilterCache.this.size, size);\n IndicesFilterCache.this.size = size;\n replace = true;\n }\n TimeValue expire = settings.getAsTime(INDICES_CACHE_FILTER_EXPIRE, IndicesFilterCache.this.expire);\n if (!Objects.equal(expire, IndicesFilterCache.this.expire)) {\n- logger.info(\"updating [indices.cache.filter.expire] from [{}] to [{}]\", IndicesFilterCache.this.expire, expire);\n+ logger.info(\"updating [{}] from [{}] to [{}]\",\n+ INDICES_CACHE_FILTER_EXPIRE, IndicesFilterCache.this.expire, expire);\n IndicesFilterCache.this.expire = expire;\n replace = true;\n }\n@@ -86,7 +90,8 @@ public void onRefreshSettings(Settings settings) {\n throw new ElasticsearchIllegalArgumentException(\"concurrency_level must be > 0 but was: \" + concurrencyLevel);\n }\n if (!Objects.equal(concurrencyLevel, IndicesFilterCache.this.concurrencyLevel)) {\n- 
logger.info(\"updating [indices.cache.filter.concurrency_level] from [{}] to [{}]\", IndicesFilterCache.this.concurrencyLevel, concurrencyLevel);\n+ logger.info(\"updating [{}] from [{}] to [{}]\",\n+ INDICES_CACHE_FILTER_CONCURRENCY_LEVEL, IndicesFilterCache.this.concurrencyLevel, concurrencyLevel);\n IndicesFilterCache.this.concurrencyLevel = concurrencyLevel;\n replace = true;\n }\n@@ -103,9 +108,14 @@ public void onRefreshSettings(Settings settings) {\n public IndicesFilterCache(Settings settings, ThreadPool threadPool, NodeSettingsService nodeSettingsService) {\n super(settings);\n this.threadPool = threadPool;\n- this.size = componentSettings.get(\"size\", \"10%\");\n- this.expire = componentSettings.getAsTime(\"expire\", null);\n- this.cleanInterval = componentSettings.getAsTime(\"clean_interval\", TimeValue.timeValueSeconds(60));\n+ this.size = settings.get(INDICES_CACHE_FILTER_SIZE, \"10%\");\n+ this.expire = settings.getAsTime(INDICES_CACHE_FILTER_EXPIRE, null);\n+ this.minimumEntryWeight = settings.getAsInt(INDICES_CACHE_FILTER_MINIMUM_ENTRY_WEIGHT, 1024); // 1k per entry minimum\n+ if (minimumEntryWeight <= 0) {\n+ throw new ElasticsearchIllegalArgumentException(\"minimum_entry_weight must be > 0 but was: \" + minimumEntryWeight);\n+ }\n+ this.cleanInterval = settings.getAsTime(INDICES_CACHE_FILTER_CLEAN_INTERVAL, TimeValue.timeValueSeconds(60));\n+ // defaults to 4, but this is a busy map for all indices, increase it a bit\n this.concurrencyLevel = settings.getAsInt(INDICES_CACHE_FILTER_CONCURRENCY_LEVEL, 16);\n if (concurrencyLevel <= 0) {\n throw new ElasticsearchIllegalArgumentException(\"concurrency_level must be > 0 but was: \" + concurrencyLevel);\n@@ -122,10 +132,9 @@ public IndicesFilterCache(Settings settings, ThreadPool threadPool, NodeSettings\n private void buildCache() {\n CacheBuilder<WeightedFilterCache.FilterCacheKey, DocIdSet> cacheBuilder = CacheBuilder.newBuilder()\n .removalListener(this)\n- .maximumWeight(sizeInBytes).weigher(new WeightedFilterCache.FilterCacheValueWeigher());\n+ .maximumWeight(sizeInBytes).weigher(new WeightedFilterCache.FilterCacheValueWeigher(minimumEntryWeight));\n \n- // defaults to 4, but this is a busy map for all indices, increase it a bit\n- cacheBuilder.concurrencyLevel(16);\n+ cacheBuilder.concurrencyLevel(this.concurrencyLevel);\n \n if (expire != null) {\n cacheBuilder.expireAfterAccess(expire.millis(), TimeUnit.MILLISECONDS);\n@@ -145,6 +154,7 @@ public void addReaderKeyToClean(Object readerKey) {\n public void close() {\n closed = true;\n cache.invalidateAll();\n+ cache.cleanUp();\n }\n \n public Cache<WeightedFilterCache.FilterCacheKey, DocIdSet> cache() {", "filename": "src/main/java/org/elasticsearch/indices/cache/filter/IndicesFilterCache.java", "status": "modified" } ] }
{ "body": "ElasticSearch seems to advertise HTTP pipelining support by using HTTP 1/1 and not supplying a Connection-header in the response, but fails to deliver on the promises of responding to the requests in the same order they are sent, which means that clients might get the response from an unexpected request.\n\nExample reproduction:\n\nIt sometimes works es expected\n\n```\n$ printf \"GET /_nodes HTTP/1.1\\r\\n\\r\\nGET / HTTP/1.1\\r\\n\\r\\n\" | nc -i 1 127.0.0.1 9200\nHTTP/1.1 200 OK\nContent-Type: application/json; charset=UTF-8\nContent-Length: 222\n\n{\"ok\":true,\"cluster_name\":\"elasticsearch\",\"nodes\":{\"MVf7UrJJRyaOJj35MAdODg\":{\"name\":\"Caiera\",\"transport_address\":\"inet[/10.0.0.6:9300] \",\"hostname\":\"machine.local\",\"version\":\"0.20.4\",\"http_address\":\"inet[/10.0.0.6:9200]\"}}}HTTP/1.1 200 OK\nContent-Type: application/json; charset=UTF-8\nContent-Length: 169\n\n{\n \"ok\" : true,\n \"status\" : 200,\n \"name\" : \"Caiera\",\n \"version\" : {\n \"number\" : \"0.20.4\",\n \"snapshot_build\" : false\n },\n \"tagline\" : \"You Know, for Search\"\n}\n```\n\nBut sometimes, given the exact same request, changes the order of the responses:\n\n```\n$ printf \"GET /_nodes HTTP/1.1\\r\\n\\r\\nGET / HTTP/1.1\\r\\n\\r\\n\" | nc -i 1 127.0.0.1 9200\nHTTP/1.1 200 OK\nContent-Type: application/json; charset=UTF-8\nContent-Length: 169\n\n{\n \"ok\" : true,\n \"status\" : 200,\n \"name\" : \"Caiera\",\n \"version\" : {\n \"number\" : \"0.20.4\",\n \"snapshot_build\" : false\n },\n \"tagline\" : \"You Know, for Search\"\n}HTTP/1.1 200 OK\nContent-Type: application/json; charset=UTF-8\nContent-Length: 222\n\n{\"ok\":true,\"cluster_name\":\"elasticsearch\",\"nodes\":{\"MVf7UrJJRyaOJj35MAdODg\":{\"name\":\"Caiera\",\"transport_address\":\"inet[/10.0.0.6:9300] \",\"hostname\":\"machine.local\",\"version\":\"0.20.4\",\"http_address\":\"inet[/10.0.0.6:9200]\"}}} \n```\n", "comments": [ { "body": "HTTP 1.1 and HTTP pipeline are two different things, don't confuse them :). Yea, there will be problems today with using HTTP pipeline feature. \n", "created_at": "2013-02-22T22:55:43Z" }, { "body": "maybe worth a look in this regard: https://github.com/typesafehub/netty-http-pipelining\n", "created_at": "2013-04-12T07:34:13Z" }, { "body": "@nkvoll Are you trying to make use of the HTTP pipelining feature in production? I actually was not really able to find a Java library supporting this from the client side in order to conduct some tests. Just trying to understand, what you are trying to archive.\n\nWhat you can do as a different solution at the moment (which is also returning the responses when they are finished instead of creating the responses in a queue and ensuring their order on server side), is to use the \"X-Opaque-Id\" header of elasticsearch in order to map back the request/response pairs on the client instead of the server side (which might not be what you want depending on your use-case). You will obviously have a hard time with this inside of a browser (not all of them is supporting pipelining anyway)\n\n```\nprintf \"GET /_nodes HTTP/1.1\\r\\nX-Opaque-Id: 2\\r\\n\\r\\nGET / HTTP/1.1\\r\\nX-Opaque-Id: 1\\r\\n\\r\\n\" | nc -i 1 127.0.0.1 9200\n```\n", "created_at": "2013-04-26T08:05:14Z" }, { "body": "@nkvoll could you solve your issue by using the `X-Opaque-Id` header or do you think there needs more to be done in elasticsearch in order to support your use-case? 
\n\nI am not sure if we can 'not advertise' pipelining support (as you wrote in the initial ticket), but still have this functionality. Would like to hear your opinions on that!\n", "created_at": "2013-06-06T14:53:15Z" }, { "body": "Not that I need pipelining with ES for anything, but http 1.1 spec at http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.2.2 says \"A server MUST send its responses to those requests in the same order that the requests were received\"\n", "created_at": "2014-02-18T16:20:04Z" }, { "body": "We just got bit **really badly** by this issue over at Stack Overflow.\n\nWe've got an app over here that does an awful lot of (highly parallel) reads and writes by id over a small number of types. Because the _types_ matched most of the time, we'd normally be able to deserialize and proceed normally; oftentimes even updating the wrong document at the end.\n\nThis is exacerbated by [.NET's default behavior being to pipeline all web requests](http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.pipelined%28v=vs.110%29.aspx).\n\nIt took the better part of a week to isolate and debug this issue. It's particularly insidious since capturing requests at a proxy can easily strip out pipelining (as Fiddler does, for example).\n\nWe have a workaround (just setting `myHttpWebRequest.Pipelined = false`), pipelined really should fail if they're not going to be honored to spec.\n\nThere's a non-trivial performance penalty to disabling pipeling, at least in .NET. I threw together a [quick gist in LinqPad](https://gist.github.com/kevin-montrose/cc614baa67e066696352) against our production cluster. In my testing it's about 3 times faster to pipeline requests on our infrastructure.\n", "created_at": "2014-07-25T20:12:33Z" }, { "body": "Maybe a priority queue like in https://github.com/typesafehub/netty-http-pipelining could be integrated into ES netty http transport, and enabled by an option, e.g. `http.netty.pipelining: true|false`.\n", "created_at": "2014-07-25T22:01:03Z" }, { "body": "@kevin-montrose , we've never run into that issue, but it sounds a bit scary. We're considering making a change to the .NET client to disable pipelining by default (exposed as a connection setting) until this is addressed with elasticsearch. However, I'm a bit hesitant after hearing your claim about performance.\n\nFor what it's worth though, I ran your gist on my machine to see if there would be a notable difference, and there wasn't. In fact, pipelined was a bit slower more times than non. Maybe totally environment related.\n", "created_at": "2014-07-25T22:12:39Z" }, { "body": "@jprante looking that the code, I don't think its enough what is done there in the context of ES. The implementation supports returning the results in the same order, while in ES, we need to make sure that we execute it in the same order. 
For example, if an index request and then a get request for the same id are in the same inflight requests, yet serialized on the client side, they need to be serialized on the request side to be executed one after the other, not just return the _results_ of them in order, since in that case, they might still execute out of order.\n\nI still need to think about this more, but effectively, in order to implement ordered pipeline support, it means it needs to be ordered when handling the upstream events, and progressing with one at all before the the previous one is sent downstream as a response.\n", "created_at": "2014-07-25T22:35:06Z" }, { "body": "btw, to make things more interesting..., assume you only execute search requests, then HTTP pipelining with just ordering the responses is great, since it allow for concurrent execution of them on the server side. Tricky.... . \n", "created_at": "2014-07-25T22:43:39Z" }, { "body": "@kimchy maybe it is possible just to timestamp the incoming HTTP request in NettyHttpServerTransport, with a channels handler especially for HTTP pipelining, so the responses can be ordered downstream in a priority queue by timestamp? Queueing up a certain number of responses in a queue in this special HTTP pipeline channels handler would be the downside.\n\nI think executions can be still unordered, even with HTTP pipelining, because these are separate things. If e.g. set/get on the same id fails, there might be other obscure reasons. But it will just become reliably visible, by eliminating Netty's weakness in HTTP pipelining.\n", "created_at": "2014-07-26T09:07:10Z" }, { "body": "@gmarz Pipelining's main gains are in any environment with non-0 latency: the higher the ratio of network latency vs. time spent inside elastic, the higher the impact. It's definitely going to vary by environment, but there are many cases where this is a huge impact...in others like link-local or on-box it may have little difference. It's important to test the performance impact of pipelining over _not_ on your local machine as well.\n", "created_at": "2014-07-27T14:04:30Z" }, { "body": "@NickCraver thanks a lot for the info, I figured as such. Definitely plan on doing more testing before making any changes to the .NET client. I just opened #830 for this, suggestions/input very much welcomed.\n", "created_at": "2014-07-27T15:36:46Z" }, { "body": "We're getting bit by it as well. Using erlang ibrowse client, which uses pipelining by default.\n", "created_at": "2014-07-29T08:08:15Z" }, { "body": "Has there been any update on this? It was a deciding factor in us dropping ElasticSearch as a viable option for projects at Stack Overflow, but we'd love to see this fixed for other potential uses.\n\nThis is a serious bug, and it doesn't seem (at least to our developers) to be treated as serious as the impact it has.\n", "created_at": "2014-10-15T20:51:57Z" }, { "body": "@NickCraver thanks for giving context into this, I have personally been thinking about how to solve this, I think that potentially we can start with baby steps and make sure that at the very least the response are returned in order, and iterate from there. Will keep you posted.\n", "created_at": "2014-10-15T21:16:34Z" }, { "body": "@kimchy I'm not sure that execution order is a problem for pipelined requests, mainly because pipelining is a transport optimisation. Pipelined requests are semantically equivalent to the same requests being issued on separate connections. 
If client applications care about ordering, they should not issue requests that depend on requests whose response status is still pending.\n", "created_at": "2014-10-29T14:48:01Z" }, { "body": "To better illustrate for people the impact here, let's compare a real world example. Let's say for example I'm querying our elasticsearch cluster in Oregon from New York. I have 10 requests to make. The trip from NY to OR takes about 40ms, and elasticsearch only takes 10ms to fulfill each request. Let's compare performance with and without pipelining:\n\nPipelining:\n\n```\n10 requests issued in order immediately\n40ms travel time\n10 requests processed (any order), responded to (in order) (10+ms, elasticsearch processing)\n40ms travel time\n10 responses received in order\n```\n\nWe're talking about 90+ ms (the + depends on how well elasticsearch handles the concurrent queries). Let's say it's not even concurrent (worst case), then we're talking 100ms for elastic, so 190ms total.\n\nNow let's turn pipelining off:\n\n```\n1 request issued\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nSingle request received, next request sent\n40ms travel time\n10ms 1 request processed, 1 response sent\n40ms travel time\nLast request received\n```\n\nThat took 90ms **per request**, so at best it's 900ms. Hopefully that better illustrates the performance problem that not having HTTP pipelining working introduces. The same happens at < 1ms latency which adds up for millions of requests per day. We hit some systems like this (redis for example) with 4 billion+ hits a day...without pipelining Stack Overflow would be hosed.\n\nCurrently, if one for 10 indexing operations fails (especially on the indexing side) we can get an \"okay, indexed!\" when in fact it failed...and we don't even know _which one_ failed. So not only is this a major problem (which is why I feel this makes easily the resiliency list), it's a problem a user is incapable of solving. You currently just have to turn off pipelining and take a _huge_ performance hit for high traffic applications.\n\nAs for execution order, yes it _does_ matter. If a Stack Overflow post changes, we may issue an index request as soon as that happens - edits can happen rapidly and 2 indexing requests for the same document (with a new post body, title, tags, last activity date, etc. in our case) can easily be in the pipeline. 
If the last index command doesn't execute last, then an invalid document has been indexed which is out of date.\n", "created_at": "2014-10-30T11:15:35Z" }, { "body": "@NickCraver regarding the execution order -- wouldn't using the `external` version type for example with the source timestamp work for you? (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html#_version_types) I'm assuming that you have an explicit document identifier and explicitly versioning documents makes a lot of sense when working with a distributed system. This way, it wouldn't even matter which server you send the request to and the actual operation speed of that particular server -- much less the exact execution order for requests coming from a given HTTP pipeline -- the latest version would be the one that sticks in any case.\n", "created_at": "2014-10-30T11:22:26Z" }, { "body": "@nkvoll for that exact case, we could version the documents yes (though we have no other reason to was we only care about the current one). But we'd be explicitly doing so just to work around this bug.\n\nLet's take another example we hit that causes Elasticsearch to be dropped from the project:\n1. Index a document.\n2. Do a GET for that document, by Id.\n\nSince 2 can execute and return before 1, _on the same node_, even a getting a document you just stored isn't guaranteed, and bit us several times.\n\nNot related to (regardless of) the execution order, let me address transport impact:\n\nThe transport switch (even if executed in order) is still a compounding issue. We were doing batch index operations for documents, GETting documents as well (all at high volume/speed) and checking cluster status along every so often as well to check index queue limits and such. We'd get cluster status responses to the document GET by Id requests. I don't think I have to impress just how bad that behavior is - and it's exactly what we hit many times that led us to this Github issue.\n\nFor all the documents that where the correct type and didn't throw deserialization errors as a result, we then had no confidence they were the right documents returned to the right requests. We know at least a decent portion of them weren't. As a result we had to throw away all of the data and start over. That's when the decision was made to redesign the data layout and abandon Elasticsearch for the project.\n", "created_at": "2014-10-30T12:18:27Z" }, { "body": "@NickCraver I read pipelining wrong and actually part of the spec is not to allow to pipeline non-idempotent methods, and its the responsibility of the client not to do so (http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html). This heavily simplifies the implementation on our side, so I hope it happens quicker (@spinscale is on it, we are aiming for 1.4). So the question around order is not relevant.\n\nRegarding the example you gave, of course pipelining will help on a single connection, I don't think there was any denying it.\n", "created_at": "2014-10-30T12:28:31Z" }, { "body": "@NickCraver true, the document will not be available before the Lucene segment is written, which is after the default `refresh_interval` of 1 second (configurable, of course). You could do a an explicit call to `_refresh` (http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-refresh.html#indices-refresh), but I'm not sure I'd recommend trying to work _around_ the Near Real Time-properties of Elasticsearch/Lucene as much as work _with_ it. 
I'm unaware of any planned changes to this part, as it's pretty central to achieving good indexing performance and coordinating shards across nodes, but then again, I'm not an Elasticsearch employee.\n\nRegarding the transport level mix-up, I certainly feel you (which is why I created this issue in the first place). I think the worst part is that pipelining is advertised according to the HTTP 1.1 specification, but the actual results are wrong -- and that this might not be evident when doing small-scale development/testing runs. It can certainly catch unwary developers out quickly.\n\nWhile maybe not optimal for your case, it's possible to work around this as well running a small proxy on the same server as your Elasticsearch instance that supports pipelining proper on the server side, but is possible to configure to _not_ pipeline when forwarding requests to Elasticsearch. Whether this is a feasible approach depends on how you do operations and what systems you're comfortable with using. It's not at all optimal, but will get you closer to the goal until it's supported properly.\n", "created_at": "2014-10-30T12:28:58Z" }, { "body": "@NickCraver out of interest what are you using to issue a:\n\n```\nPOST [index doc]\nGET [doc]\n```\n\nOn a pipelined http connection ?\n\nTo the best of my understanding `HttpWebRequest` (which the new `HttpClient` class still uses under cover by default) adheres to the RFC and won't pipeline any request that has a body:\n\nhttp://referencesource.microsoft.com/#System/net/System/Net/_Connection.cs#800\n\nand force it to wait on a new/free connection of it sees a request with a body on a pipelined connection:\nhttp://referencesource.microsoft.com/#System/net/System/Net/_Connection.cs#614\n\nAs @kimchy pointed out The RFC disallows this:\n\n`Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2). Otherwise, a premature termination of the transport connection could lead to indeterminate results. A client wishing to send a non-idempotent request SHOULD wait to send that request until it has received the response status for the previous request.`\n\nOf course the ordering issue still exists when doing 10 GETS sequentially on a pipelined connection and if they return out of order your application might end up updating the wrong documents (which should be fixed with the new pipelining support)\n\nPlease nudge me If my understanding of the `HttpWebRequest` in this regard is flawed!\n\n@nkvoll elasticsearch should be realtime when indexing a single doc and then doing a single doc get provided the client waits for an ack on the client for the index and they are executed in the right order.\n", "created_at": "2014-10-31T14:40:22Z" } ], "number": 2665, "title": "HTTP Pipelining causes responses to mixed up." }
{ "body": "This adds HTTP pipelining support to netty. Previously pipelining was not\nsupported due to the asynchronous nature of elasticsearch. The first request\nthat was returned by Elasticsearch, was returned as first response,\nregardless of the correct order.\n\nThe solution to this problem is to add a handler to the netty pipeline\nthat maintains an ordered list and thus orders the responses before\nreturning them to the client. This means, we will always have some state\non the server side and also requires some memory in order to keep the\nresponses there.\n\nPipelining is enabled by default, but can be configured by setting the\nhttp.pipelining property to true|false. In addition the maximum size of\nthe event queue can be configured.\n\nThe initial netty handler is copied from this repo\nhttps://github.com/typesafehub/netty-http-pipelining\n\nCloses #2665\n", "number": 8299, "review_comments": [ { "body": "should be cached thread pool, the default constructor does the right thing here\n", "created_at": "2014-10-31T13:04:11Z" }, { "body": "check if you managed to connect here, and if not, throw a failure?\n", "created_at": "2014-10-31T13:06:30Z" }, { "body": "also, I think the opened channel needs to be closed at one point\n", "created_at": "2014-10-31T13:07:45Z" }, { "body": "this means that the method needs to be synchronized, right?\n", "created_at": "2014-10-31T13:09:04Z" }, { "body": "do we need this logic? we know what we will send it\n", "created_at": "2014-10-31T13:12:47Z" }, { "body": "`await()` throws an InterruptedException which then gets bubbled up...\n", "created_at": "2014-10-31T13:28:11Z" }, { "body": "good catch, hadnt thought about parallel usage in tests...\n", "created_at": "2014-10-31T13:30:38Z" }, { "body": "no trimmed it down to needed things...\n", "created_at": "2014-10-31T13:31:28Z" }, { "body": "right, channel gets closed in finally call now\n", "created_at": "2014-10-31T13:31:47Z" }, { "body": "yes, also fixed in the HttpPipelineHandlerTest\n", "created_at": "2014-10-31T13:32:09Z" }, { "body": "I wonder if it should not also decrement the CountDownLatch if an exception is caught in the SimpleChannelUpstreamHandler ?\n\nSomething like\n\n```\nnew SimpleChannelUpstreamHandler() {\n @Override\n public void messageReceived(..) {\n ...\n latch.countDown();\n }\n @Override\n public void exceptionCaught(...) {\n latch.countDown();\n }\n```\n\nJust to be sure that the client does not hang indefintily.\n", "created_at": "2014-10-31T14:44:37Z" }, { "body": "makes sense, will add\n", "created_at": "2014-10-31T15:09:23Z" } ], "title": "Netty: Add HTTP pipelining support" }
{ "commits": [ { "message": "Netty: Add HTTP pipelining support\n\nThis adds HTTP pipelining support to netty. Previously pipelining was not\nsupported due to the asynchronous nature of elasticsearch. The first request\nthat was returned by Elasticsearch, was returned as first response,\nregardless of the correct order.\n\nThe solution to this problem is to add a handler to the netty pipeline\nthat maintains an ordered list and thus orders the responses before\nreturning them to the client. This means, we will always have some state\non the server side and also requires some memory in order to keep the\nresponses there.\n\nPipelining is enabled by default, but can be configured by setting the\nhttp.pipelining property to true|false. In addition the maximum size of\nthe event queue can be configured.\n\nThe initial netty handler is copied from this repo\nhttps://github.com/typesafehub/netty-http-pipelining\n\nCloses #2665" } ], "files": [ { "diff": "@@ -15,8 +15,6 @@ when connecting for better performance and try to get your favorite\n client not to do\n http://en.wikipedia.org/wiki/Chunked_transfer_encoding[HTTP chunking].\n \n-IMPORTANT: HTTP pipelining is not supported and should be disabled in your HTTP client.\n-\n [float]\n === Settings\n \n@@ -69,6 +67,9 @@ be cached for. Defaults to `1728000` (20 days)\n header should be returned. Note: This header is only returned, when the setting is\n set to `true`. Defaults to `false`\n \n+|`http.pipelining` |Enable or disable HTTP pipelining, defaults to `true`.\n+\n+|`http.pipelining.max_events` |The maximum number of events to be queued up in memory before a HTTP connection is closed, defaults to `10000`.\n \n |=======================================================================\n ", "filename": "docs/reference/modules/http.asciidoc", "status": "modified" }, { "diff": "@@ -1406,17 +1406,18 @@\n <include>src/test/java/org/elasticsearch/**/*.java</include>\n </includes>\n <excludes>\n- <exclude>src/main/java/org/elasticsearch/common/inject/**</exclude>\n <!-- Guice -->\n+ <exclude>src/main/java/org/elasticsearch/common/inject/**</exclude>\n <exclude>src/main/java/org/elasticsearch/common/geo/GeoHashUtils.java</exclude>\n <exclude>src/main/java/org/elasticsearch/common/lucene/search/XBooleanFilter.java</exclude>\n <exclude>src/main/java/org/elasticsearch/common/lucene/search/XFilteredQuery.java</exclude>\n <exclude>src/main/java/org/apache/lucene/queryparser/XSimpleQueryParser.java</exclude>\n <exclude>src/main/java/org/apache/lucene/**/X*.java</exclude>\n <!-- t-digest -->\n- <exclude>src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestState.java\n- </exclude>\n+ <exclude>src/main/java/org/elasticsearch/search/aggregations/metrics/percentiles/tdigest/TDigestState.java</exclude>\n <exclude>src/test/java/org/elasticsearch/search/aggregations/metrics/GroupTree.java</exclude>\n+ <!-- netty pipelining -->\n+ <exclude>src/main/java/org/elasticsearch/http/netty/pipelining/**</exclude>\n </excludes>\n </configuration>\n <executions>", "filename": "pom.xml", "status": "modified" }, { "diff": "@@ -19,6 +19,7 @@\n \n package org.elasticsearch.http.netty;\n \n+import org.elasticsearch.http.netty.pipelining.OrderedUpstreamMessageEvent;\n import org.elasticsearch.rest.support.RestUtils;\n import org.jboss.netty.channel.*;\n import org.jboss.netty.handler.codec.http.HttpRequest;\n@@ -34,19 +35,33 @@ public class HttpRequestHandler extends SimpleChannelUpstreamHandler {\n \n private final NettyHttpServerTransport 
serverTransport;\n private final Pattern corsPattern;\n+ private final boolean httpPipeliningEnabled;\n \n public HttpRequestHandler(NettyHttpServerTransport serverTransport) {\n this.serverTransport = serverTransport;\n this.corsPattern = RestUtils.getCorsSettingRegex(serverTransport.settings());\n+ this.httpPipeliningEnabled = serverTransport.pipelining;\n }\n \n @Override\n public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {\n- HttpRequest request = (HttpRequest) e.getMessage();\n+ HttpRequest request;\n+ OrderedUpstreamMessageEvent oue = null;\n+ if (this.httpPipeliningEnabled && e instanceof OrderedUpstreamMessageEvent) {\n+ oue = (OrderedUpstreamMessageEvent) e;\n+ request = (HttpRequest) oue.getMessage();\n+ } else {\n+ request = (HttpRequest) e.getMessage();\n+ }\n+\n // the netty HTTP handling always copy over the buffer to its own buffer, either in NioWorker internally\n // when reading, or using a cumalation buffer\n NettyHttpRequest httpRequest = new NettyHttpRequest(request, e.getChannel());\n- serverTransport.dispatchRequest(httpRequest, new NettyHttpChannel(serverTransport, e.getChannel(), httpRequest, corsPattern));\n+ if (oue != null) {\n+ serverTransport.dispatchRequest(httpRequest, new NettyHttpChannel(serverTransport, httpRequest, corsPattern, oue));\n+ } else {\n+ serverTransport.dispatchRequest(httpRequest, new NettyHttpChannel(serverTransport, httpRequest, corsPattern));\n+ }\n super.messageReceived(ctx, e);\n }\n ", "filename": "src/main/java/org/elasticsearch/http/netty/HttpRequestHandler.java", "status": "modified" }, { "diff": "@@ -28,14 +28,14 @@\n import org.elasticsearch.common.netty.NettyUtils;\n import org.elasticsearch.common.netty.ReleaseChannelFutureListener;\n import org.elasticsearch.http.HttpChannel;\n+import org.elasticsearch.http.netty.pipelining.OrderedDownstreamChannelEvent;\n+import org.elasticsearch.http.netty.pipelining.OrderedUpstreamMessageEvent;\n import org.elasticsearch.rest.RestResponse;\n import org.elasticsearch.rest.RestStatus;\n import org.elasticsearch.rest.support.RestUtils;\n import org.jboss.netty.buffer.ChannelBuffer;\n import org.jboss.netty.buffer.ChannelBuffers;\n-import org.jboss.netty.channel.Channel;\n-import org.jboss.netty.channel.ChannelFuture;\n-import org.jboss.netty.channel.ChannelFutureListener;\n+import org.jboss.netty.channel.*;\n import org.jboss.netty.handler.codec.http.*;\n \n import java.util.List;\n@@ -61,16 +61,22 @@ public class NettyHttpChannel extends HttpChannel {\n private final NettyHttpServerTransport transport;\n private final Channel channel;\n private final org.jboss.netty.handler.codec.http.HttpRequest nettyRequest;\n+ private OrderedUpstreamMessageEvent orderedUpstreamMessageEvent = null;\n private Pattern corsPattern;\n \n- public NettyHttpChannel(NettyHttpServerTransport transport, Channel channel, NettyHttpRequest request, Pattern corsPattern) {\n+ public NettyHttpChannel(NettyHttpServerTransport transport, NettyHttpRequest request, Pattern corsPattern) {\n super(request);\n this.transport = transport;\n- this.channel = channel;\n+ this.channel = request.getChannel();\n this.nettyRequest = request.request();\n this.corsPattern = corsPattern;\n }\n \n+ public NettyHttpChannel(NettyHttpServerTransport transport, NettyHttpRequest request, Pattern corsPattern, OrderedUpstreamMessageEvent orderedUpstreamMessageEvent) {\n+ this(transport, request, corsPattern);\n+ this.orderedUpstreamMessageEvent = orderedUpstreamMessageEvent;\n+ }\n+\n @Override\n public 
BytesStreamOutput newBytesOutput() {\n return new ReleasableBytesStreamOutput(transport.bigArrays);\n@@ -185,14 +191,25 @@ public void sendResponse(RestResponse response) {\n }\n }\n \n- ChannelFuture future = channel.write(resp);\n+ ChannelFuture future;\n+\n+ if (orderedUpstreamMessageEvent != null) {\n+ OrderedDownstreamChannelEvent downstreamChannelEvent = new OrderedDownstreamChannelEvent(orderedUpstreamMessageEvent, 0, true, resp);\n+ future = downstreamChannelEvent.getFuture();\n+ channel.getPipeline().sendDownstream(downstreamChannelEvent);\n+ } else {\n+ future = channel.write(resp);\n+ }\n+\n if (response.contentThreadSafe() && content instanceof Releasable) {\n future.addListener(new ReleaseChannelFutureListener((Releasable) content));\n addedReleaseListener = true;\n }\n+\n if (close) {\n future.addListener(ChannelFutureListener.CLOSE);\n }\n+\n } finally {\n if (!addedReleaseListener && content instanceof Releasable) {\n ((Releasable) content).close();", "filename": "src/main/java/org/elasticsearch/http/netty/NettyHttpChannel.java", "status": "modified" }, { "diff": "@@ -145,6 +145,10 @@ public SocketAddress getLocalAddress() {\n return channel.getLocalAddress();\n }\n \n+ public Channel getChannel() {\n+ return channel;\n+ }\n+\n @Override\n public String header(String name) {\n return request.headers().get(name);", "filename": "src/main/java/org/elasticsearch/http/netty/NettyHttpRequest.java", "status": "modified" }, { "diff": "@@ -37,6 +37,7 @@\n import org.elasticsearch.common.util.BigArrays;\n import org.elasticsearch.common.util.concurrent.EsExecutors;\n import org.elasticsearch.http.*;\n+import org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler;\n import org.elasticsearch.monitor.jvm.JvmInfo;\n import org.elasticsearch.transport.BindTransportException;\n import org.jboss.netty.bootstrap.ServerBootstrap;\n@@ -72,6 +73,13 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpSer\n public static final String SETTING_CORS_ALLOW_METHODS = \"http.cors.allow-methods\";\n public static final String SETTING_CORS_ALLOW_HEADERS = \"http.cors.allow-headers\";\n public static final String SETTING_CORS_ALLOW_CREDENTIALS = \"http.cors.allow-credentials\";\n+ public static final String SETTING_PIPELINING = \"http.pipelining\";\n+ public static final String SETTING_PIPELINING_MAX_EVENTS = \"http.pipelining.max_events\";\n+ public static final String SETTING_HTTP_COMPRESSION = \"http.compression\";\n+ public static final String SETTING_HTTP_COMPRESSION_LEVEL = \"http.compression_level\";\n+\n+ public static final boolean DEFAULT_SETTING_PIPELINING = true;\n+ public static final int DEFAULT_SETTING_PIPELINING_MAX_EVENTS = 10000;\n \n private final NetworkService networkService;\n final BigArrays bigArrays;\n@@ -85,6 +93,10 @@ public class NettyHttpServerTransport extends AbstractLifecycleComponent<HttpSer\n \n private final boolean blockingServer;\n \n+ final boolean pipelining;\n+\n+ private final int pipeliningMaxEvents;\n+\n final boolean compression;\n \n private final int compressionLevel;\n@@ -164,8 +176,10 @@ public NettyHttpServerTransport(Settings settings, NetworkService networkService\n receiveBufferSizePredictorFactory = new AdaptiveReceiveBufferSizePredictorFactory((int) receivePredictorMin.bytes(), (int) receivePredictorMin.bytes(), (int) receivePredictorMax.bytes());\n }\n \n- this.compression = settings.getAsBoolean(\"http.compression\", false);\n- this.compressionLevel = settings.getAsInt(\"http.compression_level\", 6);\n+ 
this.compression = settings.getAsBoolean(SETTING_HTTP_COMPRESSION, false);\n+ this.compressionLevel = settings.getAsInt(SETTING_HTTP_COMPRESSION_LEVEL, 6);\n+ this.pipelining = settings.getAsBoolean(SETTING_PIPELINING, DEFAULT_SETTING_PIPELINING);\n+ this.pipeliningMaxEvents = settings.getAsInt(SETTING_PIPELINING_MAX_EVENTS, DEFAULT_SETTING_PIPELINING_MAX_EVENTS);\n \n // validate max content length\n if (maxContentLength.bytes() > Integer.MAX_VALUE) {\n@@ -174,8 +188,8 @@ public NettyHttpServerTransport(Settings settings, NetworkService networkService\n }\n this.maxContentLength = maxContentLength;\n \n- logger.debug(\"using max_chunk_size[{}], max_header_size[{}], max_initial_line_length[{}], max_content_length[{}], receive_predictor[{}->{}]\",\n- maxChunkSize, maxHeaderSize, maxInitialLineLength, this.maxContentLength, receivePredictorMin, receivePredictorMax);\n+ logger.debug(\"using max_chunk_size[{}], max_header_size[{}], max_initial_line_length[{}], max_content_length[{}], receive_predictor[{}->{}], pipelining[{}], pipelining_max_events[{}]\",\n+ maxChunkSize, maxHeaderSize, maxInitialLineLength, this.maxContentLength, receivePredictorMin, receivePredictorMax, pipelining, pipeliningMaxEvents);\n }\n \n public Settings settings() {\n@@ -370,6 +384,9 @@ public ChannelPipeline getPipeline() throws Exception {\n if (transport.compression) {\n pipeline.addLast(\"encoder_compress\", new HttpContentCompressor(transport.compressionLevel));\n }\n+ if (transport.pipelining) {\n+ pipeline.addLast(\"pipelining\", new HttpPipeliningHandler(transport.pipeliningMaxEvents));\n+ }\n pipeline.addLast(\"handler\", requestHandler);\n return pipeline;\n }", "filename": "src/main/java/org/elasticsearch/http/netty/NettyHttpServerTransport.java", "status": "modified" }, { "diff": "@@ -0,0 +1,109 @@\n+package org.elasticsearch.http.netty.pipelining;\n+\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.logging.ESLoggerFactory;\n+import org.jboss.netty.channel.*;\n+import org.jboss.netty.handler.codec.http.DefaultHttpRequest;\n+import org.jboss.netty.handler.codec.http.HttpRequest;\n+\n+import java.util.*;\n+\n+/**\n+ * Implements HTTP pipelining ordering, ensuring that responses are completely served in the same order as their\n+ * corresponding requests. NOTE: A side effect of using this handler is that upstream HttpRequest objects will\n+ * cause the original message event to be effectively transformed into an OrderedUpstreamMessageEvent. Conversely\n+ * OrderedDownstreamChannelEvent objects are expected to be received for the correlating response objects.\n+ *\n+ * @author Christopher Hunt\n+ */\n+public class HttpPipeliningHandler extends SimpleChannelHandler {\n+\n+ public static final int INITIAL_EVENTS_HELD = 3;\n+\n+ private final int maxEventsHeld;\n+\n+ private int sequence;\n+ private int nextRequiredSequence;\n+ private int nextRequiredSubsequence;\n+\n+ private final Queue<OrderedDownstreamChannelEvent> holdingQueue;\n+\n+ /**\n+ * @param maxEventsHeld the maximum number of channel events that will be retained prior to aborting the channel\n+ * connection. 
This is required as events cannot queue up indefintely; we would run out of\n+ * memory if this was the case.\n+ */\n+ public HttpPipeliningHandler(final int maxEventsHeld) {\n+ this.maxEventsHeld = maxEventsHeld;\n+\n+ holdingQueue = new PriorityQueue<>(INITIAL_EVENTS_HELD, new Comparator<OrderedDownstreamChannelEvent>() {\n+ @Override\n+ public int compare(OrderedDownstreamChannelEvent o1, OrderedDownstreamChannelEvent o2) {\n+ final int delta = o1.getOrderedUpstreamMessageEvent().getSequence() - o2.getOrderedUpstreamMessageEvent().getSequence();\n+ if (delta == 0) {\n+ return o1.getSubsequence() - o2.getSubsequence();\n+ } else {\n+ return delta;\n+ }\n+ }\n+ });\n+ }\n+\n+ public int getMaxEventsHeld() {\n+ return maxEventsHeld;\n+ }\n+\n+ @Override\n+ public void messageReceived(final ChannelHandlerContext ctx, final MessageEvent e) {\n+ final Object msg = e.getMessage();\n+ if (msg instanceof HttpRequest) {\n+ ctx.sendUpstream(new OrderedUpstreamMessageEvent(sequence++, e.getChannel(), msg, e.getRemoteAddress()));\n+ } else {\n+ ctx.sendUpstream(e);\n+ }\n+ }\n+\n+ @Override\n+ public void handleDownstream(ChannelHandlerContext ctx, ChannelEvent e)\n+ throws Exception {\n+ if (e instanceof OrderedDownstreamChannelEvent) {\n+\n+ boolean channelShouldClose = false;\n+\n+ synchronized (holdingQueue) {\n+ if (holdingQueue.size() < maxEventsHeld) {\n+\n+ final OrderedDownstreamChannelEvent currentEvent = (OrderedDownstreamChannelEvent) e;\n+ holdingQueue.add(currentEvent);\n+\n+ while (!holdingQueue.isEmpty()) {\n+ final OrderedDownstreamChannelEvent nextEvent = holdingQueue.peek();\n+\n+ if (nextEvent.getOrderedUpstreamMessageEvent().getSequence() != nextRequiredSequence |\n+ nextEvent.getSubsequence() != nextRequiredSubsequence) {\n+ break;\n+ }\n+ holdingQueue.remove();\n+ ctx.sendDownstream(nextEvent.getChannelEvent());\n+ if (nextEvent.isLast()) {\n+ ++nextRequiredSequence;\n+ nextRequiredSubsequence = 0;\n+ } else {\n+ ++nextRequiredSubsequence;\n+ }\n+ }\n+\n+ } else {\n+ channelShouldClose = true;\n+ }\n+ }\n+\n+ if (channelShouldClose) {\n+ Channels.close(e.getChannel());\n+ }\n+ } else {\n+ super.handleDownstream(ctx, e);\n+ }\n+ }\n+\n+}", "filename": "src/main/java/org/elasticsearch/http/netty/pipelining/HttpPipeliningHandler.java", "status": "added" }, { "diff": "@@ -0,0 +1,77 @@\n+package org.elasticsearch.http.netty.pipelining;\n+\n+import org.jboss.netty.channel.*;\n+\n+/**\n+ * Permits downstream channel events to be ordered and signalled as to whether more are to come for a given sequence.\n+ *\n+ * @author Christopher Hunt\n+ */\n+public class OrderedDownstreamChannelEvent implements ChannelEvent {\n+\n+ final ChannelEvent ce;\n+ final OrderedUpstreamMessageEvent oue;\n+ final int subsequence;\n+ final boolean last;\n+\n+ /**\n+ * Construct a downstream channel event for all types of events.\n+ *\n+ * @param oue the OrderedUpstreamMessageEvent that this response is associated with\n+ * @param subsequence the sequence within the sequence\n+ * @param last when set to true this indicates that there are no more responses to be received for the\n+ * original OrderedUpstreamMessageEvent\n+ */\n+ public OrderedDownstreamChannelEvent(final OrderedUpstreamMessageEvent oue, final int subsequence, boolean last,\n+ final ChannelEvent ce) {\n+ this.oue = oue;\n+ this.ce = ce;\n+ this.subsequence = subsequence;\n+ this.last = last;\n+ }\n+\n+ /**\n+ * Convenience constructor signifying that this downstream message event is the last one for the given sequence,\n+ * and that there is 
only one response.\n+ */\n+ public OrderedDownstreamChannelEvent(final OrderedUpstreamMessageEvent oe,\n+ final Object message) {\n+ this(oe, 0, true, message);\n+ }\n+\n+ /**\n+ * Convenience constructor for passing message events.\n+ */\n+ public OrderedDownstreamChannelEvent(final OrderedUpstreamMessageEvent oue, final int subsequence, boolean last,\n+ final Object message) {\n+ this(oue, subsequence, last, new DownstreamMessageEvent(oue.getChannel(), Channels.future(oue.getChannel()),\n+ message, oue.getRemoteAddress()));\n+\n+ }\n+\n+ public OrderedUpstreamMessageEvent getOrderedUpstreamMessageEvent() {\n+ return oue;\n+ }\n+\n+ public int getSubsequence() {\n+ return subsequence;\n+ }\n+\n+ public boolean isLast() {\n+ return last;\n+ }\n+\n+ @Override\n+ public Channel getChannel() {\n+ return ce.getChannel();\n+ }\n+\n+ @Override\n+ public ChannelFuture getFuture() {\n+ return ce.getFuture();\n+ }\n+\n+ public ChannelEvent getChannelEvent() {\n+ return ce;\n+ }\n+}", "filename": "src/main/java/org/elasticsearch/http/netty/pipelining/OrderedDownstreamChannelEvent.java", "status": "added" }, { "diff": "@@ -0,0 +1,25 @@\n+package org.elasticsearch.http.netty.pipelining;\n+\n+import org.jboss.netty.channel.Channel;\n+import org.jboss.netty.channel.UpstreamMessageEvent;\n+\n+import java.net.SocketAddress;\n+\n+/**\n+ * Permits upstream message events to be ordered.\n+ *\n+ * @author Christopher Hunt\n+ */\n+public class OrderedUpstreamMessageEvent extends UpstreamMessageEvent {\n+ final int sequence;\n+\n+ public OrderedUpstreamMessageEvent(final int sequence, final Channel channel, final Object msg, final SocketAddress remoteAddress) {\n+ super(channel, msg, remoteAddress);\n+ this.sequence = sequence;\n+ }\n+\n+ public int getSequence() {\n+ return sequence;\n+ }\n+\n+}", "filename": "src/main/java/org/elasticsearch/http/netty/pipelining/OrderedUpstreamMessageEvent.java", "status": "added" }, { "diff": "@@ -0,0 +1,163 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.http.netty;\n+\n+import com.google.common.base.Charsets;\n+import com.google.common.base.Function;\n+import com.google.common.collect.Collections2;\n+import org.elasticsearch.common.unit.ByteSizeUnit;\n+import org.elasticsearch.common.unit.ByteSizeValue;\n+import org.jboss.netty.bootstrap.ClientBootstrap;\n+import org.jboss.netty.channel.*;\n+import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;\n+import org.jboss.netty.handler.codec.http.*;\n+\n+import java.io.Closeable;\n+import java.net.SocketAddress;\n+import java.util.ArrayList;\n+import java.util.Collection;\n+import java.util.Collections;\n+import java.util.concurrent.CountDownLatch;\n+\n+import static org.hamcrest.MatcherAssert.assertThat;\n+import static org.hamcrest.Matchers.is;\n+import static org.hamcrest.Matchers.lessThan;\n+import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.HOST;\n+import static org.jboss.netty.handler.codec.http.HttpVersion.HTTP_1_1;\n+\n+/**\n+ * Tiny helper\n+ */\n+public class NettyHttpClient implements Closeable {\n+\n+ private static final Function<? super HttpResponse, String> FUNCTION_RESPONSE_TO_CONTENT = new Function<HttpResponse, String>() {\n+ @Override\n+ public String apply(HttpResponse response) {\n+ return response.getContent().toString(Charsets.UTF_8);\n+ }\n+ };\n+\n+ private static final Function<? super HttpResponse, String> FUNCTION_RESPONSE_OPAQUE_ID = new Function<HttpResponse, String>() {\n+ @Override\n+ public String apply(HttpResponse response) {\n+ return response.headers().get(\"X-Opaque-Id\");\n+ }\n+ };\n+\n+ public static Collection<String> returnHttpResponseBodies(Collection<HttpResponse> responses) {\n+ return Collections2.transform(responses, FUNCTION_RESPONSE_TO_CONTENT);\n+ }\n+\n+ public static Collection<String> returnOpaqueIds(Collection<HttpResponse> responses) {\n+ return Collections2.transform(responses, FUNCTION_RESPONSE_OPAQUE_ID);\n+ }\n+\n+ private final ClientBootstrap clientBootstrap;\n+\n+ public NettyHttpClient() {\n+ clientBootstrap = new ClientBootstrap(new NioClientSocketChannelFactory());;\n+ }\n+\n+ public Collection<HttpResponse> sendRequests(SocketAddress remoteAddress, String... uris) throws InterruptedException {\n+ return sendRequests(remoteAddress, -1, uris);\n+ }\n+\n+ public synchronized Collection<HttpResponse> sendRequests(SocketAddress remoteAddress, long expectedMaxDuration, String... 
uris) throws InterruptedException {\n+ final CountDownLatch latch = new CountDownLatch(uris.length);\n+ final Collection<HttpResponse> content = Collections.synchronizedList(new ArrayList<HttpResponse>(uris.length));\n+\n+ clientBootstrap.setPipelineFactory(new CountDownLatchPipelineFactory(latch, content));\n+\n+ ChannelFuture channelFuture = null;\n+ try {\n+ channelFuture = clientBootstrap.connect(remoteAddress);\n+ channelFuture.await(1000);\n+\n+ long startTime = System.currentTimeMillis();\n+\n+ for (int i = 0; i < uris.length; i++) {\n+ final HttpRequest httpRequest = new DefaultHttpRequest(HTTP_1_1, HttpMethod.GET, uris[i]);\n+ httpRequest.headers().add(HOST, \"localhost\");\n+ httpRequest.headers().add(\"X-Opaque-ID\", String.valueOf(i));\n+ channelFuture.getChannel().write(httpRequest);\n+ }\n+ latch.await();\n+\n+ long duration = System.currentTimeMillis() - startTime;\n+ // make sure the request were executed in parallel\n+ if (expectedMaxDuration > 0) {\n+ assertThat(duration, is(lessThan(expectedMaxDuration)));\n+ }\n+ } finally {\n+ if (channelFuture != null) {\n+ channelFuture.getChannel().close();\n+ }\n+ }\n+\n+\n+ return content;\n+ }\n+\n+ @Override\n+ public void close() {\n+ clientBootstrap.shutdown();\n+ clientBootstrap.releaseExternalResources();\n+ }\n+\n+ /**\n+ * helper factory which adds returned data to a list and uses a count down latch to decide when done\n+ */\n+ public static class CountDownLatchPipelineFactory implements ChannelPipelineFactory {\n+ private final CountDownLatch latch;\n+ private final Collection<HttpResponse> content;\n+\n+ public CountDownLatchPipelineFactory(CountDownLatch latch, Collection<HttpResponse> content) {\n+ this.latch = latch;\n+ this.content = content;\n+ }\n+\n+ @Override\n+ public ChannelPipeline getPipeline() throws Exception {\n+ final int maxBytes = new ByteSizeValue(100, ByteSizeUnit.MB).bytesAsInt();\n+ return Channels.pipeline(\n+ new HttpClientCodec(),\n+ new HttpChunkAggregator(maxBytes),\n+ new SimpleChannelUpstreamHandler() {\n+ @Override\n+ public void messageReceived(final ChannelHandlerContext ctx, final MessageEvent e) {\n+ final Object message = e.getMessage();\n+\n+ if (message instanceof HttpResponse) {\n+ HttpResponse response = (HttpResponse) message;\n+ content.add(response);\n+ }\n+\n+ latch.countDown();\n+ }\n+\n+ @Override\n+ public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) throws Exception {\n+ super.exceptionCaught(ctx, e);\n+ latch.countDown();\n+ }\n+ });\n+ }\n+ }\n+\n+}", "filename": "src/test/java/org/elasticsearch/http/netty/NettyHttpClient.java", "status": "added" }, { "diff": "@@ -0,0 +1,227 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.http.netty;\n+\n+import com.google.common.base.Charsets;\n+import com.google.common.collect.Lists;\n+import org.elasticsearch.ElasticsearchException;\n+import org.elasticsearch.common.network.NetworkService;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.transport.InetSocketTransportAddress;\n+import org.elasticsearch.http.HttpServerTransport;\n+import org.elasticsearch.http.netty.pipelining.OrderedDownstreamChannelEvent;\n+import org.elasticsearch.http.netty.pipelining.OrderedUpstreamMessageEvent;\n+import org.elasticsearch.indices.breaker.NoneCircuitBreakerService;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.elasticsearch.test.cache.recycler.MockBigArrays;\n+import org.elasticsearch.test.cache.recycler.MockPageCacheRecycler;\n+import org.elasticsearch.test.junit.annotations.TestLogging;\n+import org.elasticsearch.threadpool.ThreadPool;\n+import org.jboss.netty.buffer.ChannelBuffer;\n+import org.jboss.netty.buffer.ChannelBuffers;\n+import org.jboss.netty.channel.*;\n+import org.jboss.netty.handler.codec.http.*;\n+import org.junit.After;\n+import org.junit.Before;\n+import org.junit.Test;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.List;\n+import java.util.concurrent.ExecutorService;\n+import java.util.concurrent.Executors;\n+\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.elasticsearch.http.netty.NettyHttpClient.returnHttpResponseBodies;\n+import static org.elasticsearch.http.netty.NettyHttpServerTransport.HttpChannelPipelineFactory;\n+import static org.hamcrest.Matchers.*;\n+import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.CONNECTION;\n+import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.CONTENT_LENGTH;\n+import static org.jboss.netty.handler.codec.http.HttpHeaders.Values.CLOSE;\n+import static org.jboss.netty.handler.codec.http.HttpResponseStatus.OK;\n+import static org.jboss.netty.handler.codec.http.HttpVersion.HTTP_1_0;\n+import static org.jboss.netty.handler.codec.http.HttpVersion.HTTP_1_1;\n+\n+/**\n+ * This test just tests, if he pipelining works in general with out any connection the elasticsearch handler\n+ */\n+public class NettyHttpServerPipeliningTest extends ElasticsearchTestCase {\n+\n+ private NetworkService networkService;\n+ private ThreadPool threadPool;\n+ private MockPageCacheRecycler mockPageCacheRecycler;\n+ private MockBigArrays bigArrays;\n+ private CustomNettyHttpServerTransport httpServerTransport;\n+\n+ @Before\n+ public void setup() throws Exception {\n+ networkService = new NetworkService(ImmutableSettings.EMPTY);\n+ threadPool = new ThreadPool(\"test\");\n+ mockPageCacheRecycler = new MockPageCacheRecycler(ImmutableSettings.EMPTY, threadPool);\n+ bigArrays = new MockBigArrays(ImmutableSettings.EMPTY, mockPageCacheRecycler, new NoneCircuitBreakerService());\n+ }\n+\n+ @After\n+ public void shutdown() throws Exception {\n+ if (threadPool != null) {\n+ threadPool.shutdownNow();\n+ }\n+ if (httpServerTransport != null) {\n+ httpServerTransport.close();\n+ }\n+ }\n+\n+ @Test\n+ @TestLogging(\"_root:DEBUG\")\n+ public void testThatHttpPipeliningWorksWhenEnabled() throws Exception {\n+ Settings settings = settingsBuilder().put(\"http.pipelining\", true).build();\n+ 
httpServerTransport = new CustomNettyHttpServerTransport(settings);\n+ httpServerTransport.start();\n+ InetSocketTransportAddress transportAddress = (InetSocketTransportAddress) httpServerTransport.boundAddress().boundAddress();\n+\n+ List<String> requests = Arrays.asList(\"/firstfast\", \"/slow?sleep=500\", \"/secondfast\", \"/slow?sleep=1000\", \"/thirdfast\");\n+ long maxdurationInMilliSeconds = 1200;\n+ try (NettyHttpClient nettyHttpClient = new NettyHttpClient()) {\n+ Collection<HttpResponse> responses = nettyHttpClient.sendRequests(transportAddress.address(), maxdurationInMilliSeconds, requests.toArray(new String[]{}));\n+ Collection<String> responseBodies = returnHttpResponseBodies(responses);\n+ assertThat(responseBodies, contains(\"/firstfast\", \"/slow?sleep=500\", \"/secondfast\", \"/slow?sleep=1000\", \"/thirdfast\"));\n+ }\n+ }\n+\n+ @Test\n+ @TestLogging(\"_root:TRACE\")\n+ public void testThatHttpPipeliningCanBeDisabled() throws Exception {\n+ Settings settings = settingsBuilder().put(\"http.pipelining\", false).build();\n+ httpServerTransport = new CustomNettyHttpServerTransport(settings);\n+ httpServerTransport.start();\n+ InetSocketTransportAddress transportAddress = (InetSocketTransportAddress) httpServerTransport.boundAddress().boundAddress();\n+\n+ List<String> requests = Arrays.asList(\"/slow?sleep=1000\", \"/firstfast\", \"/secondfast\", \"/thirdfast\", \"/slow?sleep=500\");\n+ long maxdurationInMilliSeconds = 1200;\n+ try (NettyHttpClient nettyHttpClient = new NettyHttpClient()) {\n+ Collection<HttpResponse> responses = nettyHttpClient.sendRequests(transportAddress.address(), maxdurationInMilliSeconds, requests.toArray(new String[]{}));\n+ List<String> responseBodies = Lists.newArrayList(returnHttpResponseBodies(responses));\n+ // we cannot be sure about the order of the fast requests, but the slow ones should have to be last\n+ assertThat(responseBodies, hasSize(5));\n+ assertThat(responseBodies.get(3), is(\"/slow?sleep=500\"));\n+ assertThat(responseBodies.get(4), is(\"/slow?sleep=1000\"));\n+ }\n+ }\n+\n+ class CustomNettyHttpServerTransport extends NettyHttpServerTransport {\n+\n+ private final ExecutorService executorService;\n+\n+ public CustomNettyHttpServerTransport(Settings settings) {\n+ super(settings, NettyHttpServerPipeliningTest.this.networkService, NettyHttpServerPipeliningTest.this.bigArrays);\n+ this.executorService = Executors.newFixedThreadPool(5);\n+ }\n+\n+ @Override\n+ public ChannelPipelineFactory configureServerChannelPipelineFactory() {\n+ return new CustomHttpChannelPipelineFactory(this, executorService);\n+ }\n+\n+ @Override\n+ public HttpServerTransport stop() throws ElasticsearchException {\n+ executorService.shutdownNow();\n+ return super.stop();\n+ }\n+ }\n+\n+ private class CustomHttpChannelPipelineFactory extends HttpChannelPipelineFactory {\n+\n+ private final ExecutorService executorService;\n+\n+ public CustomHttpChannelPipelineFactory(NettyHttpServerTransport transport, ExecutorService executorService) {\n+ super(transport);\n+ this.executorService = executorService;\n+ }\n+\n+ @Override\n+ public ChannelPipeline getPipeline() throws Exception {\n+ ChannelPipeline pipeline = super.getPipeline();\n+ pipeline.replace(\"handler\", \"handler\", new PossiblySlowUpstreamHandler(executorService));\n+ return pipeline;\n+ }\n+ }\n+\n+ class PossiblySlowUpstreamHandler extends SimpleChannelUpstreamHandler {\n+\n+ private final ExecutorService executorService;\n+\n+ public PossiblySlowUpstreamHandler(ExecutorService executorService) {\n+ 
this.executorService = executorService;\n+ }\n+\n+ @Override\n+ public void messageReceived(final ChannelHandlerContext ctx, final MessageEvent e) throws Exception {\n+ executorService.submit(new PossiblySlowRunnable(ctx, e));\n+ }\n+\n+ @Override\n+ public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {\n+ e.getCause().printStackTrace();\n+ e.getChannel().close();\n+ }\n+ }\n+\n+ class PossiblySlowRunnable implements Runnable {\n+\n+ private ChannelHandlerContext ctx;\n+ private MessageEvent e;\n+\n+ public PossiblySlowRunnable(ChannelHandlerContext ctx, MessageEvent e) {\n+ this.ctx = ctx;\n+ this.e = e;\n+ }\n+\n+ @Override\n+ public void run() {\n+ HttpRequest request;\n+ OrderedUpstreamMessageEvent oue = null;\n+ if (e instanceof OrderedUpstreamMessageEvent) {\n+ oue = (OrderedUpstreamMessageEvent) e;\n+ request = (HttpRequest) oue.getMessage();\n+ } else {\n+ request = (HttpRequest) e.getMessage();\n+ }\n+\n+ ChannelBuffer buffer = ChannelBuffers.copiedBuffer(request.getUri(), Charsets.UTF_8);\n+\n+ DefaultHttpResponse httpResponse = new DefaultHttpResponse(HTTP_1_1, OK);\n+ httpResponse.headers().add(CONTENT_LENGTH, buffer.readableBytes());\n+ httpResponse.setContent(buffer);\n+\n+ QueryStringDecoder decoder = new QueryStringDecoder(request.getUri());\n+\n+ final int timeout = request.getUri().startsWith(\"/slow\") && decoder.getParameters().containsKey(\"sleep\") ? Integer.valueOf(decoder.getParameters().get(\"sleep\").get(0)) : 0;\n+ if (timeout > 0) {\n+ sleep(timeout);\n+ }\n+\n+ if (oue != null) {\n+ ctx.sendDownstream(new OrderedDownstreamChannelEvent(oue, 0, true, httpResponse));\n+ } else {\n+ ctx.getChannel().write(httpResponse);\n+ }\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/http/netty/NettyHttpServerPipeliningTest.java", "status": "added" }, { "diff": "@@ -0,0 +1,78 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.http.netty;\n+\n+import com.google.common.collect.Lists;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.transport.InetSocketTransportAddress;\n+import org.elasticsearch.http.HttpServerTransport;\n+import org.elasticsearch.node.internal.InternalNode;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.jboss.netty.handler.codec.http.HttpResponse;\n+import org.junit.Test;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.List;\n+import java.util.Locale;\n+\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.elasticsearch.http.netty.NettyHttpClient.returnOpaqueIds;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n+import static org.hamcrest.Matchers.*;\n+\n+/**\n+ *\n+ */\n+@ClusterScope(scope = Scope.TEST, numDataNodes = 1)\n+public class NettyPipeliningDisabledIntegrationTest extends ElasticsearchIntegrationTest {\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return settingsBuilder().put(super.nodeSettings(nodeOrdinal)).put(InternalNode.HTTP_ENABLED, true).put(\"http.pipelining\", false).build();\n+ }\n+\n+ @Test\n+ public void testThatNettyHttpServerDoesNotSupportPipelining() throws Exception {\n+ ensureGreen();\n+ List<String> requests = Arrays.asList(\"/\", \"/_nodes/stats\", \"/\", \"/_cluster/state\", \"/\", \"/_nodes\", \"/\");\n+\n+ HttpServerTransport httpServerTransport = internalCluster().getInstance(HttpServerTransport.class);\n+ InetSocketTransportAddress inetSocketTransportAddress = (InetSocketTransportAddress) httpServerTransport.boundAddress().boundAddress();\n+\n+ try (NettyHttpClient nettyHttpClient = new NettyHttpClient()) {\n+ Collection<HttpResponse> responses = nettyHttpClient.sendRequests(inetSocketTransportAddress.address(), requests.toArray(new String[]{}));\n+ assertThat(responses, hasSize(requests.size()));\n+\n+ List<String> opaqueIds = Lists.newArrayList(returnOpaqueIds(responses));\n+\n+ assertResponsesOutOfOrder(opaqueIds);\n+ }\n+ }\n+\n+ /**\n+ * checks if all responses are there, but also tests that they are out of order because pipelining is disabled\n+ */\n+ private void assertResponsesOutOfOrder(List<String> opaqueIds) {\n+ String message = String.format(Locale.ROOT, \"Expected returned http message ids to be out of order: %s\", opaqueIds);\n+ assertThat(opaqueIds, hasItems(\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\"));\n+ assertThat(message, opaqueIds, not(contains(\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\")));\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/http/netty/NettyPipeliningDisabledIntegrationTest.java", "status": "added" }, { "diff": "@@ -0,0 +1,75 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.http.netty;\n+\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.common.transport.InetSocketTransportAddress;\n+import org.elasticsearch.http.HttpServerTransport;\n+import org.elasticsearch.node.internal.InternalNode;\n+import org.elasticsearch.test.ElasticsearchIntegrationTest;\n+import org.jboss.netty.handler.codec.http.HttpResponse;\n+import org.junit.Test;\n+\n+import java.util.Arrays;\n+import java.util.Collection;\n+import java.util.List;\n+import java.util.Locale;\n+\n+import static org.elasticsearch.common.settings.ImmutableSettings.settingsBuilder;\n+import static org.elasticsearch.http.netty.NettyHttpClient.returnOpaqueIds;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.ClusterScope;\n+import static org.elasticsearch.test.ElasticsearchIntegrationTest.Scope;\n+import static org.hamcrest.Matchers.hasSize;\n+import static org.hamcrest.Matchers.is;\n+\n+\n+@ClusterScope(scope = Scope.TEST, numDataNodes = 1)\n+public class NettyPipeliningEnabledIntegrationTest extends ElasticsearchIntegrationTest {\n+\n+ @Override\n+ protected Settings nodeSettings(int nodeOrdinal) {\n+ return settingsBuilder().put(super.nodeSettings(nodeOrdinal)).put(InternalNode.HTTP_ENABLED, true).put(\"http.pipelining\", true).build();\n+ }\n+\n+ @Test\n+ public void testThatNettyHttpServerSupportsPipelining() throws Exception {\n+ List<String> requests = Arrays.asList(\"/\", \"/_nodes/stats\", \"/\", \"/_cluster/state\", \"/\");\n+\n+ HttpServerTransport httpServerTransport = internalCluster().getInstance(HttpServerTransport.class);\n+ InetSocketTransportAddress inetSocketTransportAddress = (InetSocketTransportAddress) httpServerTransport.boundAddress().boundAddress();\n+\n+ try (NettyHttpClient nettyHttpClient = new NettyHttpClient()) {\n+ Collection<HttpResponse> responses = nettyHttpClient.sendRequests(inetSocketTransportAddress.address(), requests.toArray(new String[]{}));\n+ assertThat(responses, hasSize(5));\n+\n+ Collection<String> opaqueIds = returnOpaqueIds(responses);\n+ assertOpaqueIdsInOrder(opaqueIds);\n+ }\n+ }\n+\n+ private void assertOpaqueIdsInOrder(Collection<String> opaqueIds) {\n+ // check if opaque ids are monotonically increasing\n+ int i = 0;\n+ String msg = String.format(Locale.ROOT, \"Expected list of opaque ids to be monotonically increasing, got [\" + opaqueIds + \"]\");\n+ for (String opaqueId : opaqueIds) {\n+ assertThat(msg, opaqueId, is(String.valueOf(i++)));\n+ }\n+ }\n+\n+}\n\\ No newline at end of file", "filename": "src/test/java/org/elasticsearch/http/netty/NettyPipeliningEnabledIntegrationTest.java", "status": "added" }, { "diff": "@@ -0,0 +1,215 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. 
Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+package org.elasticsearch.http.netty.pipelining;\n+\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.jboss.netty.bootstrap.ClientBootstrap;\n+import org.jboss.netty.bootstrap.ServerBootstrap;\n+import org.jboss.netty.channel.*;\n+import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;\n+import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;\n+import org.jboss.netty.handler.codec.http.*;\n+import org.jboss.netty.util.HashedWheelTimer;\n+import org.jboss.netty.util.Timeout;\n+import org.jboss.netty.util.TimerTask;\n+import org.junit.After;\n+import org.junit.Before;\n+import org.junit.Test;\n+\n+import java.net.InetSocketAddress;\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.concurrent.CountDownLatch;\n+import java.util.concurrent.Executors;\n+import java.util.concurrent.atomic.AtomicBoolean;\n+\n+import static java.util.concurrent.TimeUnit.MILLISECONDS;\n+import static org.jboss.netty.buffer.ChannelBuffers.EMPTY_BUFFER;\n+import static org.jboss.netty.buffer.ChannelBuffers.copiedBuffer;\n+import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.*;\n+import static org.jboss.netty.handler.codec.http.HttpHeaders.Values.CHUNKED;\n+import static org.jboss.netty.handler.codec.http.HttpHeaders.Values.KEEP_ALIVE;\n+import static org.jboss.netty.handler.codec.http.HttpResponseStatus.OK;\n+import static org.jboss.netty.handler.codec.http.HttpVersion.HTTP_1_1;\n+import static org.jboss.netty.util.CharsetUtil.UTF_8;\n+\n+/**\n+ *\n+ */\n+public class HttpPipeliningHandlerTest extends ElasticsearchTestCase {\n+\n+ private static final long RESPONSE_TIMEOUT = 10000L;\n+ private static final long CONNECTION_TIMEOUT = 10000L;\n+ private static final String CONTENT_TYPE_TEXT = \"text/plain; charset=UTF-8\";\n+ // TODO make me random\n+ private static final InetSocketAddress HOST_ADDR = new InetSocketAddress(\"127.0.0.1\", 9080);\n+ private static final String PATH1 = \"/1\";\n+ private static final String PATH2 = \"/2\";\n+ private static final String SOME_RESPONSE_TEXT = \"some response for \";\n+\n+ private ClientBootstrap clientBootstrap;\n+ private ServerBootstrap serverBootstrap;\n+\n+ private CountDownLatch responsesIn;\n+ private final List<String> responses = new ArrayList<>(2);\n+\n+ private HashedWheelTimer timer;\n+\n+ @Before\n+ public void startBootstraps() {\n+ clientBootstrap = new ClientBootstrap(new NioClientSocketChannelFactory());\n+\n+ clientBootstrap.setPipelineFactory(new ChannelPipelineFactory() {\n+ @Override\n+ public ChannelPipeline getPipeline() throws Exception {\n+ return Channels.pipeline(\n+ new HttpClientCodec(),\n+ new ClientHandler()\n+ );\n+ }\n+ });\n+\n+ serverBootstrap = new ServerBootstrap(new NioServerSocketChannelFactory());\n+\n+ serverBootstrap.setPipelineFactory(new ChannelPipelineFactory() {\n+ @Override\n+ public ChannelPipeline getPipeline() throws Exception 
{\n+ return Channels.pipeline(\n+ new HttpRequestDecoder(),\n+ new HttpResponseEncoder(),\n+ new HttpPipeliningHandler(10000),\n+ new ServerHandler()\n+ );\n+ }\n+ });\n+\n+ serverBootstrap.bind(HOST_ADDR);\n+\n+ timer = new HashedWheelTimer();\n+ }\n+\n+ @After\n+ public void releaseResources() {\n+ timer.stop();\n+\n+ serverBootstrap.shutdown();\n+ serverBootstrap.releaseExternalResources();\n+ clientBootstrap.shutdown();\n+ clientBootstrap.releaseExternalResources();\n+ }\n+\n+ @Test\n+ public void shouldReturnMessagesInOrder() throws InterruptedException {\n+ responsesIn = new CountDownLatch(1);\n+ responses.clear();\n+\n+ final ChannelFuture connectionFuture = clientBootstrap.connect(HOST_ADDR);\n+\n+ assertTrue(connectionFuture.await(CONNECTION_TIMEOUT));\n+ final Channel clientChannel = connectionFuture.getChannel();\n+\n+ final HttpRequest request1 = new DefaultHttpRequest(\n+ HTTP_1_1, HttpMethod.GET, PATH1);\n+ request1.headers().add(HOST, HOST_ADDR.toString());\n+\n+ final HttpRequest request2 = new DefaultHttpRequest(\n+ HTTP_1_1, HttpMethod.GET, PATH2);\n+ request2.headers().add(HOST, HOST_ADDR.toString());\n+\n+ clientChannel.write(request1);\n+ clientChannel.write(request2);\n+\n+ responsesIn.await(RESPONSE_TIMEOUT, MILLISECONDS);\n+\n+ assertTrue(responses.contains(SOME_RESPONSE_TEXT + PATH1));\n+ assertTrue(responses.contains(SOME_RESPONSE_TEXT + PATH2));\n+ }\n+\n+ public class ClientHandler extends SimpleChannelUpstreamHandler {\n+ @Override\n+ public void messageReceived(final ChannelHandlerContext ctx, final MessageEvent e) {\n+ final Object message = e.getMessage();\n+ if (message instanceof HttpChunk) {\n+ final HttpChunk response = (HttpChunk) e.getMessage();\n+ if (!response.isLast()) {\n+ final String content = response.getContent().toString(UTF_8);\n+ responses.add(content);\n+ if (content.equals(SOME_RESPONSE_TEXT + PATH2)) {\n+ responsesIn.countDown();\n+ }\n+ }\n+ }\n+ }\n+ }\n+\n+ public class ServerHandler extends SimpleChannelUpstreamHandler {\n+ private final AtomicBoolean sendFinalChunk = new AtomicBoolean(false);\n+\n+ @Override\n+ public void messageReceived(final ChannelHandlerContext ctx, final MessageEvent e) throws InterruptedException {\n+ final HttpRequest request = (HttpRequest) e.getMessage();\n+\n+ final OrderedUpstreamMessageEvent oue = (OrderedUpstreamMessageEvent) e;\n+ final String uri = request.getUri();\n+\n+ final HttpResponse initialChunk = new DefaultHttpResponse(HTTP_1_1, OK);\n+ initialChunk.headers().add(CONTENT_TYPE, CONTENT_TYPE_TEXT);\n+ initialChunk.headers().add(CONNECTION, KEEP_ALIVE);\n+ initialChunk.headers().add(TRANSFER_ENCODING, CHUNKED);\n+\n+ ctx.sendDownstream(new OrderedDownstreamChannelEvent(oue, 0, false, initialChunk));\n+\n+ timer.newTimeout(new ChunkWriter(ctx, e, uri, oue, 1), 0, MILLISECONDS);\n+ }\n+\n+ private class ChunkWriter implements TimerTask {\n+ private final ChannelHandlerContext ctx;\n+ private final MessageEvent e;\n+ private final String uri;\n+ private final OrderedUpstreamMessageEvent oue;\n+ private final int subSequence;\n+\n+ public ChunkWriter(final ChannelHandlerContext ctx, final MessageEvent e, final String uri,\n+ final OrderedUpstreamMessageEvent oue, final int subSequence) {\n+ this.ctx = ctx;\n+ this.e = e;\n+ this.uri = uri;\n+ this.oue = oue;\n+ this.subSequence = subSequence;\n+ }\n+\n+ @Override\n+ public void run(final Timeout timeout) {\n+ if (sendFinalChunk.get() && subSequence > 1) {\n+ final HttpChunk finalChunk = new DefaultHttpChunk(EMPTY_BUFFER);\n+ ctx.sendDownstream(new 
OrderedDownstreamChannelEvent(oue, subSequence, true, finalChunk));\n+ } else {\n+ final HttpChunk chunk = new DefaultHttpChunk(copiedBuffer(SOME_RESPONSE_TEXT + uri, UTF_8));\n+ ctx.sendDownstream(new OrderedDownstreamChannelEvent(oue, subSequence, false, chunk));\n+\n+ timer.newTimeout(new ChunkWriter(ctx, e, uri, oue, subSequence + 1), 0, MILLISECONDS);\n+\n+ if (uri.equals(PATH2)) {\n+ sendFinalChunk.set(true);\n+ }\n+ }\n+ }\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/http/netty/pipelining/HttpPipeliningHandlerTest.java", "status": "added" }, { "diff": "@@ -1594,6 +1594,7 @@ protected Settings transportClientSettings() {\n protected TestCluster buildTestCluster(Scope scope, long seed) throws IOException {\n int numClientNodes = InternalTestCluster.DEFAULT_NUM_CLIENT_NODES;\n boolean enableRandomBenchNodes = InternalTestCluster.DEFAULT_ENABLE_RANDOM_BENCH_NODES;\n+ boolean enableHttpPipelining = InternalTestCluster.DEFAULT_ENABLE_HTTP_PIPELINING;\n int minNumDataNodes = InternalTestCluster.DEFAULT_MIN_NUM_DATA_NODES;\n int maxNumDataNodes = InternalTestCluster.DEFAULT_MAX_NUM_DATA_NODES;\n SettingsSource settingsSource = InternalTestCluster.DEFAULT_SETTINGS_SOURCE;\n@@ -1660,7 +1661,7 @@ public Settings transportClient() {\n }\n return new InternalTestCluster(seed, minNumDataNodes, maxNumDataNodes,\n clusterName(scope.name(), Integer.toString(CHILD_JVM_ID), seed), settingsSource, numClientNodes,\n- enableRandomBenchNodes, CHILD_JVM_ID, nodePrefix);\n+ enableRandomBenchNodes, enableHttpPipelining, CHILD_JVM_ID, nodePrefix);\n }\n \n /**", "filename": "src/test/java/org/elasticsearch/test/ElasticsearchIntegrationTest.java", "status": "modified" }, { "diff": "@@ -152,6 +152,7 @@ public final class InternalTestCluster extends TestCluster {\n static final int DEFAULT_MAX_NUM_CLIENT_NODES = 1;\n \n static final boolean DEFAULT_ENABLE_RANDOM_BENCH_NODES = true;\n+ static final boolean DEFAULT_ENABLE_HTTP_PIPELINING = true;\n \n public static final String NODE_MODE = nodeMode();\n \n@@ -193,13 +194,13 @@ public final class InternalTestCluster extends TestCluster {\n private ServiceDisruptionScheme activeDisruptionScheme;\n \n public InternalTestCluster(long clusterSeed, int minNumDataNodes, int maxNumDataNodes, String clusterName, int numClientNodes, boolean enableRandomBenchNodes,\n- int jvmOrdinal, String nodePrefix) {\n- this(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName, DEFAULT_SETTINGS_SOURCE, numClientNodes, enableRandomBenchNodes, jvmOrdinal, nodePrefix);\n+ boolean enableHttpPipelining, int jvmOrdinal, String nodePrefix) {\n+ this(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName, DEFAULT_SETTINGS_SOURCE, numClientNodes, enableRandomBenchNodes, enableHttpPipelining, jvmOrdinal, nodePrefix);\n }\n \n public InternalTestCluster(long clusterSeed,\n int minNumDataNodes, int maxNumDataNodes, String clusterName, SettingsSource settingsSource, int numClientNodes,\n- boolean enableRandomBenchNodes,\n+ boolean enableRandomBenchNodes, boolean enableHttpPipelining,\n int jvmOrdinal, String nodePrefix) {\n super(clusterSeed);\n this.clusterName = clusterName;\n@@ -267,6 +268,7 @@ public InternalTestCluster(long clusterSeed,\n builder.put(\"config.ignore_system_properties\", true);\n builder.put(\"node.mode\", NODE_MODE);\n builder.put(\"script.disable_dynamic\", false);\n+ builder.put(\"http.pipelining\", enableHttpPipelining);\n builder.put(\"plugins.\" + PluginsService.LOAD_PLUGIN_FROM_CLASSPATH, false);\n if 
(Strings.hasLength(System.getProperty(\"es.logger.level\"))) {\n builder.put(\"logger.level\", System.getProperty(\"es.logger.level\"));", "filename": "src/test/java/org/elasticsearch/test/InternalTestCluster.java", "status": "modified" }, { "diff": "@@ -50,11 +50,12 @@ public void testInitializiationIsConsistent() {\n SettingsSource settingsSource = SettingsSource.EMPTY;\n int numClientNodes = randomIntBetween(0, 10);\n boolean enableRandomBenchNodes = randomBoolean();\n+ boolean enableHttpPipelining = randomBoolean();\n int jvmOrdinal = randomIntBetween(0, 10);\n String nodePrefix = randomRealisticUnicodeOfCodepointLengthBetween(1, 10);\n \n- InternalTestCluster cluster0 = new InternalTestCluster(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName, settingsSource, numClientNodes, enableRandomBenchNodes, jvmOrdinal, nodePrefix);\n- InternalTestCluster cluster1 = new InternalTestCluster(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName, settingsSource, numClientNodes, enableRandomBenchNodes, jvmOrdinal, nodePrefix);\n+ InternalTestCluster cluster0 = new InternalTestCluster(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName, settingsSource, numClientNodes, enableRandomBenchNodes, enableHttpPipelining, jvmOrdinal, nodePrefix);\n+ InternalTestCluster cluster1 = new InternalTestCluster(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName, settingsSource, numClientNodes, enableRandomBenchNodes, enableHttpPipelining, jvmOrdinal, nodePrefix);\n assertClusters(cluster0, cluster1, true);\n \n }\n@@ -94,11 +95,12 @@ public void testBeforeTest() throws IOException {\n SettingsSource settingsSource = SettingsSource.EMPTY;\n int numClientNodes = randomIntBetween(0, 2);\n boolean enableRandomBenchNodes = randomBoolean();\n+ boolean enableHttpPipelining = randomBoolean();\n int jvmOrdinal = randomIntBetween(0, 10);\n String nodePrefix = \"foobar\";\n \n- InternalTestCluster cluster0 = new InternalTestCluster(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName, settingsSource, numClientNodes, enableRandomBenchNodes, jvmOrdinal, nodePrefix);\n- InternalTestCluster cluster1 = new InternalTestCluster(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName1, settingsSource, numClientNodes, enableRandomBenchNodes, jvmOrdinal, nodePrefix);\n+ InternalTestCluster cluster0 = new InternalTestCluster(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName, settingsSource, numClientNodes, enableRandomBenchNodes, enableHttpPipelining, jvmOrdinal, nodePrefix);\n+ InternalTestCluster cluster1 = new InternalTestCluster(clusterSeed, minNumDataNodes, maxNumDataNodes, clusterName1, settingsSource, numClientNodes, enableRandomBenchNodes, enableHttpPipelining, jvmOrdinal, nodePrefix);\n \n assertClusters(cluster0, cluster1, false);\n long seed = randomLong();", "filename": "src/test/java/org/elasticsearch/test/test/InternalTestClusterTests.java", "status": "modified" }, { "diff": "@@ -18,7 +18,6 @@\n */\n package org.elasticsearch.test.transport;\n \n-import org.apache.lucene.util.LuceneTestCase;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;\n import org.elasticsearch.action.admin.cluster.health.ClusterHealthStatus;\n import org.elasticsearch.client.transport.TransportClient;", "filename": "src/test/java/org/elasticsearch/test/transport/NettyTransportMultiPortIntegrationTests.java", "status": "modified" }, { "diff": "@@ -64,7 +64,7 @@ public class TribeTests extends ElasticsearchIntegrationTest {\n public static void setupSecondCluster() throws 
Exception {\n ElasticsearchIntegrationTest.beforeClass();\n // create another cluster\n- cluster2 = new InternalTestCluster(randomLong(), 2, 2, Strings.randomBase64UUID(getRandom()), 0, false, CHILD_JVM_ID, SECOND_CLUSTER_NODE_PREFIX);\n+ cluster2 = new InternalTestCluster(randomLong(), 2, 2, Strings.randomBase64UUID(getRandom()), 0, false, false, CHILD_JVM_ID, SECOND_CLUSTER_NODE_PREFIX);\n cluster2.beforeTest(getRandom(), 0.1);\n cluster2.ensureAtLeastNumDataNodes(2);\n }", "filename": "src/test/java/org/elasticsearch/tribe/TribeTests.java", "status": "modified" } ] }
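As the documentation change above states, pipelining support is on by default and bounded by http.pipelining.max_events. Here is a minimal sketch of building those settings programmatically, the way the tests in this change do; the class name and the printout are illustrative, while the keys and default values come from the diff above:

```java
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;

public class PipeliningSettingsSketch {
    public static void main(String[] args) {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("http.pipelining", true)             // enabled by default
                .put("http.pipelining.max_events", 10000) // default bound before the connection is closed
                .build();
        System.out.println(settings.getAsMap());
    }
}
```

The same keys can of course go into elasticsearch.yml; setting http.pipelining to false restores the previous behaviour of responses being written in completion order rather than request order.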
{ "body": "The accountable interface specifies that such values are illegal. When extending this interface to give more detailed information in https://issues.apache.org/jira/browse/LUCENE-5949, I know I added more checks and so on around this. \n\nWe should fix the places in e.g. fielddata returning -1.\n", "comments": [ { "body": "These changes look great.\n\nFYI there are more places returning -1 \"indirectly\". For example, AtomicLongFieldData takes the ram usage in its constructor, and some places (e.g. docvalues impls of it) pass -1 there:\n\n```\nAtomicLongFieldData(long ramBytesUsed) {\n this.ramBytesUsed = ramBytesUsed;\n}\n```\n", "created_at": "2014-10-30T16:19:54Z" } ], "number": 8239, "title": "Don't use negative ramBytesUsed() values." }
{ "body": "From @rmuir - \"The accountable interface specifies that such values are illegal\"\n\nFixes #8239\n", "number": 8291, "review_comments": [], "title": "Return 0 instead of -1 for unknown/non-exposed ramBytesUsed()" }
{ "commits": [ { "message": "Return 0 instead of -1 for unknown/non-exposed ramBytesUsed()\n\nThe accountable interface specifies that such values are illegal\n\nFixes #8239" } ], "files": [ { "diff": "@@ -63,7 +63,7 @@ public void close() {\n \n @Override\n public long ramBytesUsed() {\n- return -1; // unknown\n+ return 0; // unknown\n }\n \n }", "filename": "src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVAtomicFieldData.java", "status": "modified" }, { "diff": "@@ -84,7 +84,7 @@ public SortedNumericDoubleValues getDoubleValues() {\n \n };\n } else {\n- return new AtomicLongFieldData(-1) {\n+ return new AtomicLongFieldData(0) {\n \n @Override\n public SortedNumericDocValues getLongValues() {", "filename": "src/main/java/org/elasticsearch/index/fielddata/plain/BinaryDVNumericIndexFieldData.java", "status": "modified" }, { "diff": "@@ -42,7 +42,7 @@ final class BytesBinaryDVAtomicFieldData implements AtomicFieldData {\n \n @Override\n public long ramBytesUsed() {\n- return -1; // not exposed by Lucene\n+ return 0; // not exposed by Lucene\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/fielddata/plain/BytesBinaryDVAtomicFieldData.java", "status": "modified" }, { "diff": "@@ -43,7 +43,7 @@ final class GeoPointBinaryDVAtomicFieldData extends AbstractAtomicGeoPointFieldD\n \n @Override\n public long ramBytesUsed() {\n- return -1; // not exposed by Lucene\n+ return 0; // not exposed by Lucene\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/fielddata/plain/GeoPointBinaryDVAtomicFieldData.java", "status": "modified" }, { "diff": "@@ -42,7 +42,7 @@ public NumericDVIndexFieldData(Index index, Names fieldNames, FieldDataType fiel\n public AtomicLongFieldData load(AtomicReaderContext context) {\n final AtomicReader reader = context.reader();\n final String field = fieldNames.indexName();\n- return new AtomicLongFieldData(-1) {\n+ return new AtomicLongFieldData(0) {\n @Override\n public SortedNumericDocValues getLongValues() {\n try {", "filename": "src/main/java/org/elasticsearch/index/fielddata/plain/NumericDVIndexFieldData.java", "status": "modified" }, { "diff": "@@ -103,7 +103,7 @@ static final class SortedNumericLongFieldData extends AtomicLongFieldData {\n final String field;\n \n SortedNumericLongFieldData(AtomicReader reader, String field) {\n- super(-1L);\n+ super(0L);\n this.reader = reader;\n this.field = field;\n }\n@@ -140,7 +140,7 @@ static final class SortedNumericFloatFieldData extends AtomicDoubleFieldData {\n final String field;\n \n SortedNumericFloatFieldData(AtomicReader reader, String field) {\n- super(-1L);\n+ super(0L);\n this.reader = reader;\n this.field = field;\n }\n@@ -226,7 +226,7 @@ static final class SortedNumericDoubleFieldData extends AtomicDoubleFieldData {\n final String field;\n \n SortedNumericDoubleFieldData(AtomicReader reader, String field) {\n- super(-1L);\n+ super(0L);\n this.reader = reader;\n this.field = field;\n }", "filename": "src/main/java/org/elasticsearch/index/fielddata/plain/SortedNumericDVIndexFieldData.java", "status": "modified" }, { "diff": "@@ -56,7 +56,7 @@ public void close() {\n \n @Override\n public long ramBytesUsed() {\n- return -1; // unknown\n+ return 0; // unknown\n }\n \n }", "filename": "src/main/java/org/elasticsearch/index/fielddata/plain/SortedSetDVBytesAtomicFieldData.java", "status": "modified" } ] }
{ "body": "This change corrects the location information gathered by the loggers so that when printing class name, method name, and line numbers in the log pattern, the information from the class calling the logger is used rather than a location within the logger itself.\n\nCloses #5130\n", "comments": [ { "body": "@spinscale could you review this one please?\n", "created_at": "2014-10-16T09:52:15Z" }, { "body": "Left a couple of minor comments, could you add some javadocs to the `ESLogRecord` class to explain at a glance why this is needed for anyone stumbling upon it in the future?\n\nOther than that it looks good to me.\n", "created_at": "2014-10-29T09:40:48Z" }, { "body": "LGTM\n", "created_at": "2014-10-29T10:21:58Z" } ], "number": 8052, "title": "Fix location information for loggers" }
{ "body": "This change corrects the location information gathered by the loggers so that when printing class name, method name, and line numbers in the log pattern, the information from the class calling the logger is used rather than a location within the logger itself.\n\nA reset method has also been added to the LogConfigurator class which allows the logging configuration to be reset. This is needed because if the LoggingConfigurationTests and Log4jESLoggerTests are run in the same JVM the second one to run needs to be able to override the log configuration set by the first\n\nCloses #5130, #8052\n", "number": 8286, "review_comments": [], "title": "Core: Fix location information for loggers" }
{ "commits": [ { "message": "Core: Fix location information for loggers\n\nThis change corrects the location information gathered by the loggers so that when printing class name, method name, and line numbers in the log pattern, the information from the class calling the logger is used rather than a location within the logger itself.\n\nA reset method has also been added to the LogConfigurator class which allows the logging configuration to be reset. This is needed because if the LoggingConfigurationTests and Log4jESLoggerTests are run in the same JVM the second one to run needs to be able to override the log configuration set by the first\n\nCloses #5130, #8052" } ], "files": [ { "diff": "@@ -0,0 +1,106 @@\n+/*\r\n+ * Licensed to Elasticsearch under one or more contributor\r\n+ * license agreements. See the NOTICE file distributed with\r\n+ * this work for additional information regarding copyright\r\n+ * ownership. Elasticsearch licenses this file to you under\r\n+ * the Apache License, Version 2.0 (the \"License\"); you may\r\n+ * not use this file except in compliance with the License.\r\n+ * You may obtain a copy of the License at\r\n+ *\r\n+ * http://www.apache.org/licenses/LICENSE-2.0\r\n+ *\r\n+ * Unless required by applicable law or agreed to in writing,\r\n+ * software distributed under the License is distributed on an\r\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\r\n+ * KIND, either express or implied. See the License for the\r\n+ * specific language governing permissions and limitations\r\n+ * under the License.\r\n+ */\r\n+\r\n+package org.elasticsearch.common.logging.jdk;\r\n+\r\n+import org.elasticsearch.common.logging.support.AbstractESLogger;\r\n+\r\n+import java.util.logging.Level;\r\n+import java.util.logging.LogRecord;\r\n+\r\n+/**\r\n+ * A {@link LogRecord} which is used in conjunction with {@link JdkESLogger}\r\n+ * with the ability to provide the class name, method name and line number\r\n+ * information of the code calling the logger\r\n+ */\r\n+public class ESLogRecord extends LogRecord {\r\n+\r\n+ private static final long serialVersionUID = 1107741560233585726L;\r\n+ private static final String FQCN = AbstractESLogger.class.getName();\r\n+ private String sourceClassName;\r\n+ private String sourceMethodName;\r\n+ private transient boolean needToInferCaller;\r\n+\r\n+ public ESLogRecord(Level level, String msg) {\r\n+ super(level, msg);\r\n+ needToInferCaller = true;\r\n+ }\r\n+\r\n+ public String getSourceClassName() {\r\n+ if (needToInferCaller) {\r\n+ inferCaller();\r\n+ }\r\n+ return sourceClassName;\r\n+ }\r\n+\r\n+ public void setSourceClassName(String sourceClassName) {\r\n+ this.sourceClassName = sourceClassName;\r\n+ needToInferCaller = false;\r\n+ }\r\n+\r\n+ public String getSourceMethodName() {\r\n+ if (needToInferCaller) {\r\n+ inferCaller();\r\n+ }\r\n+ return sourceMethodName;\r\n+ }\r\n+\r\n+ public void setSourceMethodName(String sourceMethodName) {\r\n+ this.sourceMethodName = sourceMethodName;\r\n+ needToInferCaller = false;\r\n+ }\r\n+\r\n+ /**\r\n+ * Determines the source information for the caller of the logger (class\r\n+ * name, method name, and line number)\r\n+ */\r\n+ private void inferCaller() {\r\n+ needToInferCaller = false;\r\n+ Throwable throwable = new Throwable();\r\n+\r\n+ boolean lookingForLogger = true;\r\n+ for (final StackTraceElement frame : throwable.getStackTrace()) {\r\n+ String cname = frame.getClassName();\r\n+ boolean isLoggerImpl = isLoggerImplFrame(cname);\r\n+ if (lookingForLogger) {\r\n+ // Skip 
all frames until we have found the first logger frame.\r\n+ if (isLoggerImpl) {\r\n+ lookingForLogger = false;\r\n+ }\r\n+ } else {\r\n+ if (!isLoggerImpl) {\r\n+ // skip reflection call\r\n+ if (!cname.startsWith(\"java.lang.reflect.\") && !cname.startsWith(\"sun.reflect.\")) {\r\n+ // We've found the relevant frame.\r\n+ setSourceClassName(cname);\r\n+ setSourceMethodName(frame.getMethodName());\r\n+ return;\r\n+ }\r\n+ }\r\n+ }\r\n+ }\r\n+ // We haven't found a suitable frame, so just punt. This is\r\n+ // OK as we are only committed to making a \"best effort\" here.\r\n+ }\r\n+\r\n+ private boolean isLoggerImplFrame(String cname) {\r\n+ // the log record could be created for a platform logger\r\n+ return cname.equals(FQCN);\r\n+ }\r\n+}\r", "filename": "src/main/java/org/elasticsearch/common/logging/jdk/ESLogRecord.java", "status": "added" }, { "diff": "@@ -22,6 +22,7 @@\n import org.elasticsearch.common.logging.support.AbstractESLogger;\n \n import java.util.logging.Level;\n+import java.util.logging.LogRecord;\n import java.util.logging.Logger;\n \n /**\n@@ -31,12 +32,9 @@ public class JdkESLogger extends AbstractESLogger {\n \n private final Logger logger;\n \n- private final String name;\n-\n- public JdkESLogger(String prefix, String name, Logger logger) {\n+ public JdkESLogger(String prefix, Logger logger) {\n super(prefix);\n this.logger = logger;\n- this.name = name;\n }\n \n @Override\n@@ -96,51 +94,70 @@ public boolean isErrorEnabled() {\n \n @Override\n protected void internalTrace(String msg) {\n- logger.logp(Level.FINEST, name, null, msg);\n+ LogRecord record = new ESLogRecord(Level.FINEST, msg);\n+ logger.log(record);\n }\n \n @Override\n protected void internalTrace(String msg, Throwable cause) {\n- logger.logp(Level.FINEST, name, null, msg, cause);\n+ LogRecord record = new ESLogRecord(Level.FINEST, msg);\n+ record.setThrown(cause);\n+ logger.log(record);\n }\n \n @Override\n protected void internalDebug(String msg) {\n- logger.logp(Level.FINE, name, null, msg);\n+ LogRecord record = new ESLogRecord(Level.FINE, msg);\n+ logger.log(record);\n }\n \n @Override\n protected void internalDebug(String msg, Throwable cause) {\n- logger.logp(Level.FINE, name, null, msg, cause);\n+ LogRecord record = new ESLogRecord(Level.FINE, msg);\n+ record.setThrown(cause);\n+ logger.log(record);\n }\n \n @Override\n protected void internalInfo(String msg) {\n- logger.logp(Level.INFO, name, null, msg);\n+ LogRecord record = new ESLogRecord(Level.INFO, msg);\n+ logger.log(record);\n }\n \n @Override\n protected void internalInfo(String msg, Throwable cause) {\n- logger.logp(Level.INFO, name, null, msg, cause);\n+ LogRecord record = new ESLogRecord(Level.INFO, msg);\n+ record.setThrown(cause);\n+ logger.log(record);\n }\n \n @Override\n protected void internalWarn(String msg) {\n- logger.logp(Level.WARNING, name, null, msg);\n+ LogRecord record = new ESLogRecord(Level.WARNING, msg);\n+ logger.log(record);\n }\n \n @Override\n protected void internalWarn(String msg, Throwable cause) {\n- logger.logp(Level.WARNING, name, null, msg, cause);\n+ LogRecord record = new ESLogRecord(Level.WARNING, msg);\n+ record.setThrown(cause);\n+ logger.log(record);\n }\n \n @Override\n protected void internalError(String msg) {\n- logger.logp(Level.SEVERE, name, null, msg);\n+ LogRecord record = new ESLogRecord(Level.SEVERE, msg);\n+ logger.log(record);\n }\n \n @Override\n protected void internalError(String msg, Throwable cause) {\n- logger.logp(Level.SEVERE, name, null, msg, cause);\n+ LogRecord record = new 
ESLogRecord(Level.SEVERE, msg);\n+ record.setThrown(cause);\n+ logger.log(record);\n+ }\n+\n+ protected Logger logger() {\n+ return logger;\n }\n }", "filename": "src/main/java/org/elasticsearch/common/logging/jdk/JdkESLogger.java", "status": "modified" }, { "diff": "@@ -22,8 +22,6 @@\n import org.elasticsearch.common.logging.ESLogger;\n import org.elasticsearch.common.logging.ESLoggerFactory;\n \n-import java.util.logging.LogManager;\n-\n /**\n *\n */\n@@ -37,6 +35,6 @@ protected ESLogger rootLogger() {\n @Override\n protected ESLogger newInstance(String prefix, String name) {\n final java.util.logging.Logger logger = java.util.logging.Logger.getLogger(name);\n- return new JdkESLogger(prefix, name, logger);\n+ return new JdkESLogger(prefix, logger);\n }\n }", "filename": "src/main/java/org/elasticsearch/common/logging/jdk/JdkESLoggerFactory.java", "status": "modified" }, { "diff": "@@ -29,6 +29,7 @@\n public class Log4jESLogger extends AbstractESLogger {\n \n private final org.apache.log4j.Logger logger;\n+ private final String FQCN = AbstractESLogger.class.getName();\n \n public Log4jESLogger(String prefix, Logger logger) {\n super(prefix);\n@@ -95,51 +96,51 @@ public boolean isErrorEnabled() {\n \n @Override\n protected void internalTrace(String msg) {\n- logger.trace(msg);\n+ logger.log(FQCN, Level.TRACE, msg, null);\n }\n \n @Override\n protected void internalTrace(String msg, Throwable cause) {\n- logger.trace(msg, cause);\n+ logger.log(FQCN, Level.TRACE, msg, cause);\n }\n \n @Override\n protected void internalDebug(String msg) {\n- logger.debug(msg);\n+ logger.log(FQCN, Level.DEBUG, msg, null);\n }\n \n @Override\n protected void internalDebug(String msg, Throwable cause) {\n- logger.debug(msg, cause);\n+ logger.log(FQCN, Level.DEBUG, msg, cause);\n }\n \n @Override\n protected void internalInfo(String msg) {\n- logger.info(msg);\n+ logger.log(FQCN, Level.INFO, msg, null);\n }\n \n @Override\n protected void internalInfo(String msg, Throwable cause) {\n- logger.info(msg, cause);\n+ logger.log(FQCN, Level.INFO, msg, cause);\n }\n \n @Override\n protected void internalWarn(String msg) {\n- logger.warn(msg);\n+ logger.log(FQCN, Level.WARN, msg, null);\n }\n \n @Override\n protected void internalWarn(String msg, Throwable cause) {\n- logger.warn(msg, cause);\n+ logger.log(FQCN, Level.WARN, msg, cause);\n }\n \n @Override\n protected void internalError(String msg) {\n- logger.error(msg);\n+ logger.log(FQCN, Level.ERROR, msg, null);\n }\n \n @Override\n protected void internalError(String msg, Throwable cause) {\n- logger.error(msg, cause);\n+ logger.log(FQCN, Level.ERROR, msg, cause);\n }\n }", "filename": "src/main/java/org/elasticsearch/common/logging/log4j/Log4jESLogger.java", "status": "modified" }, { "diff": "@@ -102,6 +102,14 @@ public static void configure(Settings settings) {\n PropertyConfigurator.configure(props);\n }\n \n+ /**\n+ * sets the loaded flag to false so that logging configuration can be\n+ * overridden. 
Should only be used in tests.\n+ */\n+ public static void reset() {\n+ loaded = false;\n+ }\n+\n public static void resolveConfig(Environment env, final ImmutableSettings.Builder settingsBuilder) {\n \n try {", "filename": "src/main/java/org/elasticsearch/common/logging/log4j/LogConfigurator.java", "status": "modified" }, { "diff": "@@ -21,17 +21,25 @@\n \n import org.elasticsearch.common.logging.support.AbstractESLogger;\n import org.slf4j.Logger;\n+import org.slf4j.spi.LocationAwareLogger;\n \n /**\n *\n */\n public class Slf4jESLogger extends AbstractESLogger {\n \n private final Logger logger;\n+ private final LocationAwareLogger lALogger;\n+ private final String FQCN = AbstractESLogger.class.getName();\n \n public Slf4jESLogger(String prefix, Logger logger) {\n super(prefix);\n this.logger = logger;\n+ if (logger instanceof LocationAwareLogger) {\n+ lALogger = (LocationAwareLogger) logger;\n+ } else {\n+ lALogger = null;\n+ }\n }\n \n @Override\n@@ -77,51 +85,95 @@ public boolean isErrorEnabled() {\n \n @Override\n protected void internalTrace(String msg) {\n- logger.trace(msg);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.TRACE_INT, msg, null, null);\n+ } else {\n+ logger.trace(msg);\n+ }\n }\n \n @Override\n protected void internalTrace(String msg, Throwable cause) {\n- logger.trace(msg, cause);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.TRACE_INT, msg, null, cause);\n+ } else {\n+ logger.trace(msg);\n+ }\n }\n \n @Override\n protected void internalDebug(String msg) {\n- logger.debug(msg);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.DEBUG_INT, msg, null, null);\n+ } else {\n+ logger.debug(msg);\n+ }\n }\n \n @Override\n protected void internalDebug(String msg, Throwable cause) {\n- logger.debug(msg, cause);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.DEBUG_INT, msg, null, cause);\n+ } else {\n+ logger.debug(msg);\n+ }\n }\n \n @Override\n protected void internalInfo(String msg) {\n- logger.info(msg);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.INFO_INT, msg, null, null);\n+ } else {\n+ logger.info(msg);\n+ }\n }\n \n @Override\n protected void internalInfo(String msg, Throwable cause) {\n- logger.info(msg, cause);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.INFO_INT, msg, null, cause);\n+ } else {\n+ logger.info(msg, cause);\n+ }\n }\n \n @Override\n protected void internalWarn(String msg) {\n- logger.warn(msg);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.WARN_INT, msg, null, null);\n+ } else {\n+ logger.warn(msg);\n+ }\n }\n \n @Override\n protected void internalWarn(String msg, Throwable cause) {\n- logger.warn(msg, cause);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.WARN_INT, msg, null, cause);\n+ } else {\n+ logger.warn(msg);\n+ }\n }\n \n @Override\n protected void internalError(String msg) {\n- logger.error(msg);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.ERROR_INT, msg, null, null);\n+ } else {\n+ logger.error(msg);\n+ }\n }\n \n @Override\n protected void internalError(String msg, Throwable cause) {\n- logger.error(msg, cause);\n+ if (lALogger != null) {\n+ lALogger.log(null, FQCN, LocationAwareLogger.ERROR_INT, msg, null, cause);\n+ } else {\n+ logger.error(msg);\n+ }\n+ }\n+\n+ protected Logger logger() {\n+ return logger;\n }\n }", "filename": 
"src/main/java/org/elasticsearch/common/logging/slf4j/Slf4jESLogger.java", "status": "modified" }, { "diff": "@@ -41,6 +41,7 @@ public class LoggingConfigurationTests extends ElasticsearchTestCase {\n \n @Test\n public void testMultipleConfigs() throws Exception {\n+ LogConfigurator.reset();\n File configDir = resolveConfigDir();\n Settings settings = ImmutableSettings.builder()\n .put(\"path.conf\", configDir.getAbsolutePath())", "filename": "src/test/java/org/elasticsearch/common/logging/LoggingConfigurationTests.java", "status": "modified" }, { "diff": "@@ -0,0 +1,120 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.logging.jdk;\n+\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import java.util.ArrayList;\n+import java.util.List;\n+import java.util.logging.Handler;\n+import java.util.logging.Level;\n+import java.util.logging.LogRecord;\n+import java.util.logging.Logger;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.notNullValue;\n+\n+public class JDKESLoggerTests extends ElasticsearchTestCase {\n+\n+ private ESLogger esTestLogger;\n+ private TestHandler testHandler;\n+\n+ @Override\n+ public void setUp() throws Exception {\n+ super.setUp();\n+\n+ JdkESLoggerFactory esTestLoggerFactory = new JdkESLoggerFactory();\n+ esTestLogger = esTestLoggerFactory.newInstance(\"test\");\n+ Logger testLogger = ((JdkESLogger) esTestLogger).logger();\n+ testLogger.setLevel(Level.FINEST);\n+ assertThat(testLogger.getLevel(), equalTo(Level.FINEST));\n+ testHandler = new TestHandler();\n+ testLogger.addHandler(testHandler);\n+ }\n+\n+ @Test\n+ public void locationInfoTest() {\n+ esTestLogger.error(\"This is an error\");\n+ esTestLogger.warn(\"This is a warning\");\n+ esTestLogger.info(\"This is an info\");\n+ esTestLogger.debug(\"This is a debug\");\n+ esTestLogger.trace(\"This is a trace\");\n+ List<LogRecord> records = testHandler.getEvents();\n+ assertThat(records, notNullValue());\n+ assertThat(records.size(), equalTo(5));\n+ LogRecord record = records.get(0);\n+ assertThat(record, notNullValue());\n+ assertThat(record.getLevel(), equalTo(Level.SEVERE));\n+ assertThat(record.getMessage(), equalTo(\"This is an error\"));\n+ assertThat(record.getSourceClassName(), equalTo(JDKESLoggerTests.class.getCanonicalName()));\n+ assertThat(record.getSourceMethodName(), equalTo(\"locationInfoTest\"));\n+ record = records.get(1);\n+ assertThat(record, notNullValue());\n+ assertThat(record.getLevel(), equalTo(Level.WARNING));\n+ assertThat(record.getMessage(), equalTo(\"This is a warning\"));\n+ assertThat(record.getSourceClassName(), 
equalTo(JDKESLoggerTests.class.getCanonicalName()));\n+ assertThat(record.getSourceMethodName(), equalTo(\"locationInfoTest\"));\n+ record = records.get(2);\n+ assertThat(record, notNullValue());\n+ assertThat(record.getLevel(), equalTo(Level.INFO));\n+ assertThat(record.getMessage(), equalTo(\"This is an info\"));\n+ assertThat(record.getSourceClassName(), equalTo(JDKESLoggerTests.class.getCanonicalName()));\n+ assertThat(record.getSourceMethodName(), equalTo(\"locationInfoTest\"));\n+ record = records.get(3);\n+ assertThat(record, notNullValue());\n+ assertThat(record.getLevel(), equalTo(Level.FINE));\n+ assertThat(record.getMessage(), equalTo(\"This is a debug\"));\n+ assertThat(record.getSourceClassName(), equalTo(JDKESLoggerTests.class.getCanonicalName()));\n+ assertThat(record.getSourceMethodName(), equalTo(\"locationInfoTest\"));\n+ record = records.get(4);\n+ assertThat(record, notNullValue());\n+ assertThat(record.getLevel(), equalTo(Level.FINEST));\n+ assertThat(record.getMessage(), equalTo(\"This is a trace\"));\n+ assertThat(record.getSourceClassName(), equalTo(JDKESLoggerTests.class.getCanonicalName()));\n+ assertThat(record.getSourceMethodName(), equalTo(\"locationInfoTest\"));\n+ \n+ }\n+\n+ private static class TestHandler extends Handler {\n+\n+ private List<LogRecord> records = new ArrayList<>();\n+\n+ @Override\n+ public void close() {\n+ }\n+\n+ public List<LogRecord> getEvents() {\n+ return records;\n+ }\n+\n+ @Override\n+ public void publish(LogRecord record) {\n+ // Forces it to generate the location information\n+ record.getSourceClassName();\n+ records.add(record);\n+ }\n+\n+ @Override\n+ public void flush() {\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/common/logging/jdk/JDKESLoggerTests.java", "status": "added" }, { "diff": "@@ -0,0 +1,146 @@\n+/*\n+ * Licensed to Elasticsearch under one or more contributor\n+ * license agreements. See the NOTICE file distributed with\n+ * this work for additional information regarding copyright\n+ * ownership. Elasticsearch licenses this file to you under\n+ * the Apache License, Version 2.0 (the \"License\"); you may\n+ * not use this file except in compliance with the License.\n+ * You may obtain a copy of the License at\n+ *\n+ * http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing,\n+ * software distributed under the License is distributed on an\n+ * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n+ * KIND, either express or implied. 
See the License for the\n+ * specific language governing permissions and limitations\n+ * under the License.\n+ */\n+\n+package org.elasticsearch.common.logging.log4j;\n+\n+import org.apache.log4j.AppenderSkeleton;\n+import org.apache.log4j.Level;\n+import org.apache.log4j.Logger;\n+import org.apache.log4j.spi.LocationInfo;\n+import org.apache.log4j.spi.LoggingEvent;\n+import org.elasticsearch.common.logging.ESLogger;\n+import org.elasticsearch.common.settings.ImmutableSettings;\n+import org.elasticsearch.common.settings.Settings;\n+import org.elasticsearch.test.ElasticsearchTestCase;\n+import org.junit.Test;\n+\n+import java.io.File;\n+import java.net.URL;\n+import java.util.ArrayList;\n+import java.util.List;\n+\n+import static org.hamcrest.Matchers.equalTo;\n+import static org.hamcrest.Matchers.notNullValue;\n+\n+public class Log4jESLoggerTests extends ElasticsearchTestCase {\n+\n+ private ESLogger esTestLogger;\n+ private TestAppender testAppender;\n+\n+ @Override\n+ public void setUp() throws Exception {\n+ super.setUp();\n+ LogConfigurator.reset();\n+ File configDir = resolveConfigDir();\n+ // Need to set custom path.conf so we can use a custom logging.yml file for the test\n+ Settings settings = ImmutableSettings.builder()\n+ .put(\"path.conf\", configDir.getAbsolutePath())\n+ .build();\n+ LogConfigurator.configure(settings);\n+\n+ esTestLogger = Log4jESLoggerFactory.getLogger(\"test\");\n+ Logger testLogger = ((Log4jESLogger) esTestLogger).logger();\n+ assertThat(testLogger.getLevel(), equalTo(Level.TRACE));\n+ testAppender = new TestAppender();\n+ testLogger.addAppender(testAppender);\n+ }\n+\n+ @Test\n+ public void locationInfoTest() {\n+ esTestLogger.error(\"This is an error\");\n+ esTestLogger.warn(\"This is a warning\");\n+ esTestLogger.info(\"This is an info\");\n+ esTestLogger.debug(\"This is a debug\");\n+ esTestLogger.trace(\"This is a trace\");\n+ List<LoggingEvent> events = testAppender.getEvents();\n+ assertThat(events, notNullValue());\n+ assertThat(events.size(), equalTo(5));\n+ LoggingEvent event = events.get(0);\n+ assertThat(event, notNullValue());\n+ assertThat(event.getLevel(), equalTo(Level.ERROR));\n+ assertThat(event.getRenderedMessage(), equalTo(\"This is an error\"));\n+ LocationInfo locationInfo = event.getLocationInformation();\n+ assertThat(locationInfo, notNullValue());\n+ assertThat(locationInfo.getClassName(), equalTo(Log4jESLoggerTests.class.getCanonicalName()));\n+ assertThat(locationInfo.getMethodName(), equalTo(\"locationInfoTest\"));\n+ event = events.get(1);\n+ assertThat(event, notNullValue());\n+ assertThat(event.getLevel(), equalTo(Level.WARN));\n+ assertThat(event.getRenderedMessage(), equalTo(\"This is a warning\"));\n+ locationInfo = event.getLocationInformation();\n+ assertThat(locationInfo, notNullValue());\n+ assertThat(locationInfo.getClassName(), equalTo(Log4jESLoggerTests.class.getCanonicalName()));\n+ assertThat(locationInfo.getMethodName(), equalTo(\"locationInfoTest\"));\n+ event = events.get(2);\n+ assertThat(event, notNullValue());\n+ assertThat(event.getLevel(), equalTo(Level.INFO));\n+ assertThat(event.getRenderedMessage(), equalTo(\"This is an info\"));\n+ locationInfo = event.getLocationInformation();\n+ assertThat(locationInfo, notNullValue());\n+ assertThat(locationInfo.getClassName(), equalTo(Log4jESLoggerTests.class.getCanonicalName()));\n+ assertThat(locationInfo.getMethodName(), equalTo(\"locationInfoTest\"));\n+ event = events.get(3);\n+ assertThat(event, notNullValue());\n+ assertThat(event.getLevel(), 
equalTo(Level.DEBUG));\n+ assertThat(event.getRenderedMessage(), equalTo(\"This is a debug\"));\n+ locationInfo = event.getLocationInformation();\n+ assertThat(locationInfo, notNullValue());\n+ assertThat(locationInfo.getClassName(), equalTo(Log4jESLoggerTests.class.getCanonicalName()));\n+ assertThat(locationInfo.getMethodName(), equalTo(\"locationInfoTest\"));\n+ event = events.get(4);\n+ assertThat(event, notNullValue());\n+ assertThat(event.getLevel(), equalTo(Level.TRACE));\n+ assertThat(event.getRenderedMessage(), equalTo(\"This is a trace\"));\n+ locationInfo = event.getLocationInformation();\n+ assertThat(locationInfo, notNullValue());\n+ assertThat(locationInfo.getClassName(), equalTo(Log4jESLoggerTests.class.getCanonicalName()));\n+ assertThat(locationInfo.getMethodName(), equalTo(\"locationInfoTest\"));\n+ \n+ }\n+\n+ private static File resolveConfigDir() throws Exception {\n+ URL url = Log4jESLoggerTests.class.getResource(\"config\");\n+ return new File(url.toURI());\n+ }\n+\n+ private static class TestAppender extends AppenderSkeleton {\n+\n+ private List<LoggingEvent> events = new ArrayList<>();\n+\n+ @Override\n+ public void close() {\n+ }\n+\n+ @Override\n+ public boolean requiresLayout() {\n+ return false;\n+ }\n+\n+ @Override\n+ protected void append(LoggingEvent event) {\n+ // Forces it to generate the location information\n+ event.getLocationInformation();\n+ events.add(event);\n+ }\n+\n+ public List<LoggingEvent> getEvents() {\n+ return events;\n+ }\n+ }\n+}", "filename": "src/test/java/org/elasticsearch/common/logging/log4j/Log4jESLoggerTests.java", "status": "added" }, { "diff": "@@ -0,0 +1,12 @@\n+# you can override this using by setting a system property, for example -Des.logger.level=DEBUG\n+es.logger.level: INFO\n+rootLogger: ${es.logger.level}, console\n+logger:\n+ test: TRACE\n+\n+appender:\n+ console:\n+ type: console\n+ layout:\n+ type: consolePattern\n+ conversionPattern: \"[%d{ISO8601}][%-5p][%-25c] %m%n\"", "filename": "src/test/resources/org/elasticsearch/common/logging/log4j/config/logging.yml", "status": "added" } ] }
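The logger change above infers the calling class and method by walking the stack trace and skipping frames that belong to the logging wrapper, identified by its fully qualified class name. Below is a small self-contained Java sketch of that technique for `java.util.logging`; the class name used as the wrapper marker is a placeholder for this demo, not the Elasticsearch `AbstractESLogger` FQCN.

```java
import java.util.logging.Level;
import java.util.logging.LogRecord;

/**
 * Sketch of caller inference: walk the stack trace, skip frames belonging to
 * the logging wrapper (identified by its fully qualified class name), and use
 * the first frame after it as the caller. Names here are illustrative.
 */
public class CallerAwareLogRecord extends LogRecord {

    // In the real change this is the abstract logger wrapper's class name.
    private static final String WRAPPER_FQCN = CallerAwareLogRecord.class.getName();

    public CallerAwareLogRecord(Level level, String msg) {
        super(level, msg);
        inferCaller();
    }

    private void inferCaller() {
        boolean sawWrapper = false;
        for (StackTraceElement frame : new Throwable().getStackTrace()) {
            boolean isWrapper = frame.getClassName().equals(WRAPPER_FQCN);
            if (!sawWrapper) {
                if (isWrapper) {
                    sawWrapper = true; // skip everything up to the wrapper frames
                }
            } else if (!isWrapper) {
                setSourceClassName(frame.getClassName());  // first frame past the wrapper
                setSourceMethodName(frame.getMethodName());
                return;
            }
        }
        // Best effort only: leave the defaults if no suitable frame was found.
    }

    public static void main(String[] args) {
        CallerAwareLogRecord record = new CallerAwareLogRecord(Level.INFO, "hello");
        // Prints the caller's class and method, e.g. CallerAwareLogRecord#main
        System.out.println(record.getSourceClassName() + "#" + record.getSourceMethodName());
    }
}
```

For log4j and slf4j the same effect is obtained without stack walking, by passing the wrapper's FQCN to `Logger.log(FQCN, level, msg, cause)` or `LocationAwareLogger.log(...)` so the framework skips those frames itself, as the diffs above show.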
{ "body": "Fixes a bug where alias creation would allow `null` for index name, which thereby applied the alias to _all_ indices. This patch makes the validator throw an exception if the index is null.\n\nFixes #7863\n\n``` bash\nPOST /_aliases\n{\n \"actions\": [\n {\n \"add\": {\n \"alias\": \"empty-alias\",\n \"index\": null\n }\n }\n ]\n}\n```\n\n``` json\n{\n \"error\": \"ActionRequestValidationException[Validation Failed: 1: Alias action [add]: [index] may not be null;]\",\n \"status\": 400\n}\n```\n\nEdit:\n\nThe reason this bug wasn't caught by the existing tests is because the old test for nullness only validated against a cluster which had zero indices. The null index is translated into \"_all\", and since there are no indices, this fails because the index doesn't exist. So the test passes.\n\nHowever, as soon as you add an index, \"_all\" resolves and you get the situation described in the original bug report: null index is accepted by the alias, resolves to \"_all\" and gets applied to everything.\n", "comments": [ { "body": "What is here looks good, but I think we should do more. Right now not passing any index at all results in using `_all`, which I think is very dangerous?\n\n```\nRyans-MacBook-Pro:~ rjernst$ curl -i -XPOST 'http://localhost:9200/_aliases' -d '{\"actions\":[{\"add\":{\"alias\":\"myalias\"}}]}'\nHTTP/1.1 200 OK\nContent-Type: application/json; charset=UTF-8\nContent-Length: 21\n\n{\"acknowledged\":true}\n```\n", "created_at": "2014-10-03T17:39:09Z" }, { "body": "@rjernst Just looked into this, and it appears the patch will also protect against complete omission of the field.\n\nThe [`indices()`](https://github.com/polyfractal/elasticsearch/blob/bugfix/null_alias/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java#L146) method ensures that the array of indices is always initialized (even if empty), which means the new `else` clause introduced in the patch will throw an exception if indices is omitted entirely.\n\n``` bash\n$ curl -XPOST \"http://localhost:9200/_aliases\" -d'\n{\n \"actions\":[\n {\n \"add\":{\n \"alias\":\"empty-alias\"\n }\n }\n ]\n}'\n\n{\n \"error\": \"ActionRequestValidationException[Validation Failed: 1: Alias action [add]: [index] may not be null;]\",\n \"status\": 400\n}\n```\n", "created_at": "2014-10-03T18:41:26Z" }, { "body": "Cool! LGTM.\n\nMy only thought is maybe the error message should be more generic, since it could be a missing \"index\" key, or index set to null?\n", "created_at": "2014-10-03T18:47:20Z" }, { "body": "LGTM too agreed with @rjernst \n", "created_at": "2014-10-06T13:41:47Z" } ], "number": 7976, "title": "Aliases: Throw exception if index is null when creating alias" }
{ "body": "Fixes a bug where alias creation would allow `null` for index name, which thereby applied the alias to _all_ indices. This patch makes the validator throw an exception if the index is null.\n\nJava integration tests and YAML REST tests had to be modified, since they were testing this bug as if it were a real feature.\n\nThis is a **breaking** change, if someone was accidentally relying on this bug.\n\nFixes #7863, related to old PR #7976\n\n``` bash\nPOST /_aliases\n{\n \"actions\": [\n {\n \"add\": {\n \"alias\": \"empty-alias\",\n \"index\": null\n }\n }\n ]\n}\n```\n\n``` json\n{\n \"error\": \"ActionRequestValidationException[Validation Failed: 1: Alias action [add]: [index] may not be null;]\",\n \"status\": 400\n}\n```\n", "number": 8240, "review_comments": [ { "body": "Perhaps use a \"contains\" here instead of exact equality? This is less error prone if the wording is slightly tweaked in the future.\n", "created_at": "2014-10-27T16:02:46Z" } ], "title": "Throw exception if `index` is null or missing when creating an alias" }
{ "commits": [ { "message": "Throw exception if index is null when creating alias\n\nFixes #7863" }, { "message": "Remove redundant check on indices[]\n\nThe Remove section independently checked the size of indices, but now that\nit is being checked at the end of verification it is not needed inside the\nRemove clause" }, { "message": "Reconfigure tests to more accurately describe the scenarios\n\nThe reason this bug wasn't caught by the existing tests is because the old test for nullness\nonly validated against a cluster which had zero indices. The null index is translated into \"_all\",\nand since there are no indices, this fails because the index doesn't exist.\n\nHowever, as soon as you add an index, \"_all\" resolves and you get the situation described in the\noriginal bug report (but which is not tested by the suite)." }, { "message": "Better exception message" }, { "message": "[TEST] Fix yaml REST tests to reflect changes in behavior for alias with null index" }, { "message": "Fix exception text" }, { "message": "Make test more lenient by using 'contains'\n\nAlso add a reason for failure" }, { "message": "[DOCS] Update docs to reflect changes -- not allowed to have blank/null index name" } ], "files": [ { "diff": "@@ -185,7 +185,7 @@ An alias can also be added with the endpoint\n where\n \n [horizontal]\n-`index`:: The index the alias refers to. Can be any of `blank | * | _all | glob pattern | name1, name2, …`\n+`index`:: The index the alias refers to. Can be any of `* | _all | glob pattern | name1, name2, …`\n `name`:: The name of the alias. This is a required option.\n `routing`:: An optional routing that can be associated with an alias.\n `filter`:: An optional filter that can be associated with an alias.", "filename": "docs/reference/indices/aliases.asciidoc", "status": "modified" }, { "diff": "@@ -16,11 +16,17 @@ setup:\n - do:\n indices.put_alias:\n name: alias1\n+ index:\n+ - test_index1\n+ - foo\n body:\n routing: \"routing value\"\n - do:\n indices.put_alias:\n name: alias2\n+ index:\n+ - test_index2\n+ - foo\n body:\n routing: \"routing value\"\n \n@@ -31,14 +37,12 @@ setup:\n name: alias1\n \n - match: {test_index1.aliases.alias1.search_routing: \"routing value\"}\n- - match: {test_index2.aliases.alias1.search_routing: \"routing value\"}\n - match: {foo.aliases.alias1.search_routing: \"routing value\"}\n \n - do:\n indices.get_alias:\n name: alias2\n \n- - match: {test_index1.aliases.alias2.search_routing: \"routing value\"}\n - match: {test_index2.aliases.alias2.search_routing: \"routing value\"}\n - match: {foo.aliases.alias2.search_routing: \"routing value\"}\n \n@@ -57,7 +61,6 @@ setup:\n indices.get_alias:\n name: alias2\n \n- - match: {test_index1.aliases.alias2.search_routing: \"routing value\"}\n - match: {test_index2.aliases.alias2.search_routing: \"routing value\"}\n - match: {foo.aliases.alias2.search_routing: \"routing value\"}\n \n@@ -76,7 +79,6 @@ setup:\n indices.get_alias:\n name: alias2\n \n- - match: {test_index1.aliases.alias2.search_routing: \"routing value\"}\n - match: {test_index2.aliases.alias2.search_routing: \"routing value\"}\n - match: {foo.aliases.alias2.search_routing: \"routing value\"}\n \n@@ -99,7 +101,6 @@ setup:\n indices.get_alias:\n name: alias2\n \n- - match: {test_index1.aliases.alias2.search_routing: \"routing value\"}\n - match: {test_index2.aliases.alias2.search_routing: \"routing value\"}\n - match: {foo.aliases.alias2.search_routing: \"routing value\"}\n \n@@ -122,7 +123,6 @@ setup:\n indices.get_alias:\n name: alias2\n \n- - 
match: {test_index1.aliases.alias2.search_routing: \"routing value\"}\n - match: {test_index2.aliases.alias2.search_routing: \"routing value\"}\n - match: {foo.aliases.alias2.search_routing: \"routing value\"}\n \n@@ -192,7 +192,6 @@ setup:\n indices.get_alias:\n name: alias2\n \n- - match: {test_index1.aliases.alias2.search_routing: \"routing value\"}\n - match: {test_index2.aliases.alias2.search_routing: \"routing value\"}\n - match: {foo.aliases.alias2.search_routing: \"routing value\"}\n ", "filename": "rest-api-spec/test/indices.delete_alias/all_path_options.yaml", "status": "modified" }, { "diff": "@@ -106,16 +106,10 @@ setup:\n \n \n - do:\n+ catch: param\n indices.put_alias:\n name: alias\n \n- - do:\n- indices.get_alias:\n- name: alias\n-\n- - match: {test_index1.aliases.alias: {}}\n- - match: {test_index2.aliases.alias: {}}\n- - match: {foo.aliases.alias: {}}\n \n ---\n \"put alias with missing name\":", "filename": "rest-api-spec/test/indices.put_alias/all_path_options.yaml", "status": "modified" }, { "diff": "@@ -294,10 +294,6 @@ public ActionRequestValidationException validate() {\n + \"]: [alias] may not be empty string\", validationException);\n }\n }\n- if (CollectionUtils.isEmpty(aliasAction.indices)) {\n- validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n- + \"]: indices may not be empty\", validationException);\n- }\n }\n if (!CollectionUtils.isEmpty(aliasAction.indices)) {\n for (String index : aliasAction.indices) {\n@@ -306,6 +302,9 @@ public ActionRequestValidationException validate() {\n + \"]: [index] may not be empty string\", validationException);\n }\n }\n+ } else {\n+ validationException = addValidationError(\"Alias action [\" + aliasAction.actionType().name().toLowerCase(Locale.ENGLISH)\n+ + \"]: Property [index] was either missing or null\", validationException);\n }\n }\n return validationException;", "filename": "src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java", "status": "modified" }, { "diff": "@@ -744,9 +744,35 @@ public void testIndicesGetAliases() throws Exception {\n assertThat(existsResponse.exists(), equalTo(false));\n }\n \n- @Test(expected = IndexMissingException.class)\n- public void testAddAliasNullIndex() {\n- admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(null, \"alias1\")).get();\n+ @Test\n+ public void testAddAliasNullWithoutExistingIndices() {\n+ try {\n+ assertAcked(admin().indices().prepareAliases().addAliasAction(AliasAction.newAddAliasAction(null, \"alias1\")));\n+ fail(\"create alias should have failed due to null index\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(\"Exception text does not contain \\\"Property [index] was either missing or null\\\"\",\n+ e.getMessage().contains(\"Property [index] was either missing or null\"),\n+ equalTo(true));\n+ }\n+\n+ }\n+\n+ @Test\n+ public void testAddAliasNullWithExistingIndices() throws Exception {\n+ logger.info(\"--> creating index [test]\");\n+ createIndex(\"test\");\n+ ensureGreen();\n+\n+ logger.info(\"--> aliasing index [null] with [empty-alias]\");\n+\n+ try {\n+ assertAcked(admin().indices().prepareAliases().addAlias((String) null, \"empty-alias\"));\n+ fail(\"create alias should have failed due to null index\");\n+ } catch (ElasticsearchIllegalArgumentException e) {\n+ assertThat(\"Exception text does not contain \\\"Property [index] was either missing or null\\\"\",\n+ e.getMessage().contains(\"Property [index] was 
either missing or null\"),\n+ equalTo(true));\n+ }\n }\n \n @Test(expected = ActionRequestValidationException.class)\n@@ -771,7 +797,7 @@ public void testAddAliasNullAliasNullIndex() {\n assertTrue(\"Should throw \" + ActionRequestValidationException.class.getSimpleName(), false);\n } catch (ActionRequestValidationException e) {\n assertThat(e.validationErrors(), notNullValue());\n- assertThat(e.validationErrors().size(), equalTo(1));\n+ assertThat(e.validationErrors().size(), equalTo(2));\n }\n }\n \n@@ -928,7 +954,7 @@ public void testAddAliasWithFilterNoMapping() throws Exception {\n .addAlias(\"test\", \"a\", FilterBuilders.matchAllFilter()) // <-- no fail, b/c no field mentioned\n .get();\n }\n-\n+ \n private void checkAliases() {\n GetAliasesResponse getAliasesResponse = admin().indices().prepareGetAliases(\"alias1\").get();\n assertThat(getAliasesResponse.getAliases().get(\"test\").size(), equalTo(1));", "filename": "src/test/java/org/elasticsearch/aliases/IndexAliasesTests.java", "status": "modified" } ] }
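The validation change above rejects an alias action whose `index` is missing or null instead of silently expanding it to `_all`. A minimal Java sketch of the accumulate-then-fail validation pattern follows; `AliasActionValidationSketch` and its method names are hypothetical stand-ins, not the real `IndicesAliasesRequest` API.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the validation pattern: collect all problems for an alias action
 * and fail the request if the index list is missing, null, or contains blanks.
 */
public class AliasActionValidationSketch {

    static List<String> validate(String alias, String[] indices) {
        List<String> errors = new ArrayList<>();
        if (alias == null || alias.isEmpty()) {
            errors.add("[alias] may not be empty");
        }
        if (indices == null || indices.length == 0) {
            errors.add("Property [index] was either missing or null");
        } else {
            for (String index : indices) {
                if (index == null || index.isEmpty()) {
                    errors.add("[index] may not be empty string");
                }
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        // Mirrors the request in the issue: an alias action with no index at all.
        List<String> errors = validate("empty-alias", null);
        if (!errors.isEmpty()) {
            System.out.println("Validation Failed: " + errors);
        }
    }
}
```

Collecting every problem before failing lets the response list all validation errors at once, matching the `ActionRequestValidationException` messages shown in the tests above.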
{ "body": "I've been trying to track down some strange sorting results and have narrowed it down to having an empty index. I have multiple indexes all aliased to the same value. Sort order works fine, but once you add an empty index, the order is corrupted. Reproduction below:\n\n```\n#!/bin/bash\n\n# Create index with no docs with alias test\ncurl -s -XPOST \"http://localhost:9200/test-1\" -d '{ \"aliases\" : { \"test\" : {} }}' > /dev/null\n\n# Then create a bunch of indices with alias test\ncurl -s XPOST \"http://localhost:9200/test-2\" -d '{ \"aliases\" : { \"test\" : {} }}' > /dev/null\ncurl -s -XPOST \"http://localhost:9200/test-2/doc\" -d \"{ \\\"entry\\\": 1 }\"\n\ncurl -s -XPOST \"http://localhost:9200/test-3\" -d '{ \"aliases\" : { \"test\" : {} }}' > /dev/null\ncurl -s -XPOST \"http://localhost:9200/test-3/doc\" -d \"{ \\\"entry\\\": 2 }\"\n\ncurl -s -XPOST \"http://localhost:9200/test-4\" -d '{ \"aliases\" : { \"test\" : {} }}' > /dev/null\ncurl -s -XPOST \"http://localhost:9200/test-4/doc\" -d \"{ \\\"entry\\\": 3 }\"\n\ncurl -s -XPOST \"http://localhost:9200/test-5\" -d '{ \"aliases\" : { \"test\" : {} }}' > /dev/null\ncurl -s -XPOST \"http://localhost:9200/test-5/doc\" -d \"{ \\\"entry\\\": 4 }\"\n\ncurl -s -XPOST \"http://localhost:9200/test-6\" -d '{ \"aliases\" : { \"test\" : {} }}' > /dev/null\ncurl -s -XPOST \"http://localhost:9200/test-6/doc\" -d \"{ \\\"entry\\\": 5 }\"\n\ncurl -s -XPOST \"http://localhost:9200/test-7\" -d '{ \"aliases\" : { \"test\" : {} }}' > /dev/null\ncurl -s -XPOST \"http://localhost:9200/test-7/doc\" -d \"{ \\\"entry\\\": 6 }\"\n\nsleep 2\n\n# Perform a sorted query, descending on field 'entry'\ncurl -XPOST 'http://localhost:9200/test/_search?pretty' -d '{\n \"query\": {\n \"filtered\": {\n \"query\": {\n \"bool\": {\n \"should\": [\n {\n \"query_string\": {\n \"query\": \"*\"\n }\n }\n ]\n }\n },\n \"filter\": {\n \"bool\": {\n \"must\": [\n {\n \"match_all\": {}\n }\n ]\n }\n }\n }\n },\n \"highlight\": {\n \"fields\": {},\n \"fragment_size\": 2147483647,\n \"pre_tags\": [\n \"@start-highlight@\"\n ],\n \"post_tags\": [\n \"@end-highlight@\"\n ]\n },\n \"size\": 500,\n \"sort\": [\n {\n \"entry\": {\n \"order\": \"desc\",\n \"ignore_unmapped\": true\n }\n }\n ]\n}' | jq \".hits.hits[] | ._source.entry\"\n```\n\nResults:\n\n```\n5\n1\n3\n2\n6\n4\n```\n\nI can fix sort order in two ways: 1) remove the empty index or 2) change \"ignore_unmapped\" to false.\n\nBut IMO an empty index should no affect sort results. I'm also confused on why \"ignore_unmapped\": false, fixes things. 
I would think you would want the reverse, but it fails when true.\n", "comments": [ { "body": "I can reproduce this on `1.3` branch with:\n\n``` Java\n\npublic void testIssue8226() {\n int numIndices = between(5, 10);\n for (int i = 0; i < numIndices; i++) {\n assertAcked(prepareCreate(\"test_\" + i).addAlias(new Alias(\"test\")));\n if (i > 0) {\n client().prepareIndex(\"test_\" + i, \"foo\", \"\" + i).setSource(\"{\\\"entry\\\": \" + i + \"}\").get();\n }\n }\n ensureYellow();\n refresh();\n SearchResponse searchResponse = client().prepareSearch()\n .addSort(new FieldSortBuilder(\"entry\").order(SortOrder.DESC).ignoreUnmapped(true))\n .setSize(10).get();\n assertSearchResponse(searchResponse);\n\n for (int j = 1; j < searchResponse.getHits().hits().length; j++) {\n assertThat(searchResponse.toString(), searchResponse.getHits().hits()[j].getId(), lessThan(searchResponse.getHits().hits()[j-1].getId()));\n }\n }\n```\n", "created_at": "2014-10-27T06:19:15Z" }, { "body": "It seems this doesn't happen in `master`, `1.x` and neither on `1.4` but the test fails on `1.3` on every run so either we forgot to backport a fix or it's been fixed due to the cut-over to lucene comparators or so?\n", "created_at": "2014-10-27T08:25:28Z" }, { "body": "alright I found the issue - nasty... will open a PR soon\n", "created_at": "2014-10-27T08:37:40Z" }, { "body": "> It seems this doesn't happen in master, 1.x and neither on 1.4 but the test fails on 1.3 on every run so either we forgot to backport a fix or it's been fixed due to the cut-over to lucene comparators or so?\n\nI believe it is because this change https://github.com/elasticsearch/elasticsearch/pull/7039 is only on 1.4+\n", "created_at": "2014-10-27T12:03:16Z" }, { "body": "fixed in `1.3.5`\n", "created_at": "2014-10-27T12:22:12Z" } ], "number": 8226, "title": "Empty index corrupts sort order" }
{ "body": "Closes #8226\n", "number": 8236, "review_comments": [], "title": "Sorting: Use first non-empty result to detect if sort is required" }
{ "commits": [ { "message": "Fix transient testScore failure by making DF consistent for query." }, { "message": "Share numeric data analyzer instances between mappings\nuse similar mechanism that shares numeric analyzers for long/double/... for dates as well. This has nice memory save properties with many date fields mapping case, as well as analysis saves (thread local resources)\ncloses #6843" }, { "message": "Releasable XContentBuilder\nmake the builder releasable (auto closeable), and use it in shards state\nalso make XContentParser releasable (AutoCloseable) and not closeable since it doesn't throw an IOException\ncloses #6869" }, { "message": "[AGGS] Pass current docid being processed to scripts.\n\nScripts may internally cache based on docid (as expressions do). This\nchange makes numeric aggregations using scripts pass the docid when\nit changes." }, { "message": "Store: Only send shard exists requests if shards exist locally on disk and are not allocated on that node according to the cluster state.\n\nCloses #6870" }, { "message": "[Infra] re-send failed shard messages when receiving a cluster state that still refers to them\n\nIn rare cases we may fail to send a shard failure event to the master, or there is no known master when the shard has failed (ex. a couple of node leave the cluster canceling recoveries and causing a master to step down at the same time). When that happens and a cluster state arrives from the (new) master we should resend the shard failure in order for the master to remove the shard from this node.\n\nCloses #6881" }, { "message": "[Recovery] don't start a gateway recovery if source node is not found\n\nDue to change introduced in #6825, we now start a local gateway recovery for replicas, if the source node can not be found. The recovery then fails because we never recover replicas from disk.\n\nCloses #6879" }, { "message": "[TESTS] Stabilize DisabledFieldDataFormatTests by setting number_of_replicas to 0." }, { "message": "[DOCS] : Indexed scripts/templates\n\nThese are the docs for the indexed scripts/templates feature.\nAlso moved the namespace for the REST endpoints.\n\nCloses #6851\n\nConflicts:\n\tdocs/reference/modules/scripting.asciidoc" }, { "message": "[DOCS][FIX] Fix doc parsing, broken closing block" }, { "message": "[DOCS][FIX] Fix reference check in indexed scripts/templates doc." }, { "message": "Aggregations: Fixed Histogram key_as_string bug\n\nThe key as string field in the response for the histogram aggregation will now only show if format is specified on the request.\n\nCloses #6655" }, { "message": "[TEST] Verify if clear cache request went to all shards." }, { "message": "Revert Benchmark API for version 1.3\n\nRelates to #6256" }, { "message": "Threadpool Info: Allow to serialize negative thread pool sizes\n\nAs a SizeValue is used for serializing the thread pool size, a negative number\nresulted in throwing an exception when deserializing (using -ea an assertionerror\nwas thrown).\n\nThis fixes a check for changing the serialization logic, so that negative numbers are read correctly, by adding an internal UNBOUNDED value.\n\nCloses #6325\nCloses #5357" }, { "message": "REST: Renamed indexed_script and indexed_template specs\n\nThe file name of the REST specs should be the same as\nthe endpoint which it documents." }, { "message": "REST: Fixed indexed script/template tests\n\nThe clients don't have access to the error message\nfor matching purposes. 
Rewritten tests to work\nwith the clients" }, { "message": "Serialization: Fix bwc issue by falling back to old threadpool serialization\n\nThis fixes an issue introduced by the serialization changes in #6486\nwhich are not needed at all. Node that the serialization itself is not broken\nbut the TransportClient uses its own version on initial connect and getting\nthe NodeInfos." }, { "message": "bin/plugin removes itself\n\nIf you call `bin/plugin --remove es-plugin` the plugin got removed but the file `bin/plugin` itself was also deleted.\n\nWe now don't allow the following plugin names:\n\n* elasticsearch\n* plugin\n* elasticsearch.bat\n* plugin.bat\n* elasticsearch.in.sh\n* service.bat\n\nCloses #6745\n\n(cherry picked from commit 248ea31)" }, { "message": "[TEST] Fixed intermittent failure due to lack of mapping" }, { "message": "[Infra] remove indicesLifecycle.Listener from IndexingMemoryController\n\nThe IndexingMemoryController determines the amount of indexing buffer size and translog buffer size each shard should have. It takes memory from inactive shards (indexing wise) and assigns it to other shards. To do so it needs to know about the addition and closing of shards. The current implementation hooks into the indicesService.indicesLifecycle() mechanism to receive call backs, such shard entered the POST_RECOVERY state. Those call backs are typically run on the thread that actually made the change. A mutex was used to synchronize those callbacks with IndexingMemoryController's background thread, which updates the internal engines memory usage on a regular interval. This introduced a dependency between those threads and the locks of the internal engines hosted on the node. In a *very* rare situation (two tests runs locally) this can cause recovery time outs where two nodes are recovering replicas from each other.\n\n This commit introduces a a lock free approach that updates the internal data structures during iterations in the background thread.\n\nCloses #6892" }, { "message": "[Store] delete unallocated shards under a cluster state task\n\nThis is to prevent a rare racing condition where the very same shard gets allocated to the node after our sanity check that the cluster state didn't check and the actual deletion of the files.\n\nCloses #6902" }, { "message": "[TEST] an active shard might also be relocating" }, { "message": "[TEST] Add SuppressSysoutChecks to DistributorDirectoryTest" }, { "message": "[TEST] Temporarily ignore transport update tests." }, { "message": "[Engine] `index.fail_on_corruption` is not updateable\n\nThe `index.fail_on_corruption` was not updateable via the index settings\nAPI. This commit also fixed the setting prefix to be consistent with other\nsetting on the engine. Yet, this feature is unreleased so this won't break anything.\n\nCloses #6941" }, { "message": "[RESTORE] Fail restore if snapshot is corrupted\n\ntoday if a snapshot is corrupted the restore operation\nnever terminates. Yet, if the snapshot is corrupted there\nis no way to restore it anyway. If such a snapshot is restored\ntoday the only way to cancle it is to delete the entire index which\nmight cause dataloss. 
This commit also fixes an issue in InternalEngine\nwhere a deadlock can occur if a corruption is detected during flush\nsince the InternalEngine#snapshotIndex aqcuires a topLevel read lock\nwhich prevents closing the engine.\n\nCloses #6938" }, { "message": "[TEST] Activate test in PluginManagerTests that is supposed to be fixed" }, { "message": "[TEST] Stress test for update and delete concurrency.\n\nThis test deletes and updates using upserts documents over several threads in a\ntight loop. It counts the number of responses and verifies that the versions at\nthe end are correct." }, { "message": "[TEST] remove DiscoveryWithNetworkFailuresTests - zen improvements are not committed yet" } ], "files": [ { "diff": "@@ -3,14 +3,16 @@\n *.iml\n work/\n /data/\n+/plugins/\n logs/\n .DS_Store\n build/\n target/\n-.local-execution-hints.log\n+*-execution-hints.log\n docs/html/\n docs/build.log\n /tmp/\n+backwards/\n \n ## eclipse ignores (use 'mvn eclipse:eclipse' to build eclipse projects)\n ## The only configuration files which are not ignored are certain files in", "filename": ".gitignore", "status": "modified" }, { "diff": "@@ -1,8 +1,8 @@\n eclipse.preferences.version=1\n-# We target Java 1.6\n-org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.6\n-org.eclipse.jdt.core.compiler.compliance=1.6\n-org.eclipse.jdt.core.compiler.source=1.6\n+# We target Java 1.7\n+org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.7\n+org.eclipse.jdt.core.compiler.compliance=1.7\n+org.eclipse.jdt.core.compiler.source=1.7\n # Lines should be splitted at 140 chars\n org.eclipse.jdt.core.formatter.lineSplit=140\n # Indentation is 4 spaces", "filename": ".settings/org.eclipse.jdt.core.prefs", "status": "modified" }, { "diff": "@@ -1,8 +1,6 @@\n language: java\n jdk:\n- - openjdk6\n - openjdk7\n- - oraclejdk7\n \n env:\n - ES_TEST_LOCAL=true", "filename": ".travis.yml", "status": "modified" }, { "diff": "@@ -42,7 +42,7 @@ The process for contributing to any of the [Elasticsearch repositories](https://\n \n ### Fork and clone the repository\n \n-You will need to fork the main Elasticsearch code or documentation repository and clone it to your local machine. See \n+You will need to fork the main Elasticsearch code or documentation repository and clone it to your local machine. See\n [github help page](https://help.github.com/articles/fork-a-repo) for help.\n \n Further instructions for specific projects are given below.\n@@ -52,20 +52,24 @@ Further instructions for specific projects are given below.\n Once your changes and tests are ready to submit for review:\n \n 1. Test your changes\n-Run the test suite to make sure that nothing is broken. See the\n+\n+ Run the test suite to make sure that nothing is broken. See the\n [TESTING](TESTING.asciidoc) file for help running tests.\n \n 2. Sign the Contributor License Agreement\n-Please make sure you have signed our [Contributor License Agreement](http://www.elasticsearch.org/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.\n+\n+ Please make sure you have signed our [Contributor License Agreement](http://www.elasticsearch.org/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. 
We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once.\n \n 3. Rebase your changes\n-Update your local repository with the most recent code from the main Elasticsearch repository, and rebase your branch on top of the latest master branch. We prefer your changes to be squashed into a single commit.\n+\n+ Update your local repository with the most recent code from the main Elasticsearch repository, and rebase your branch on top of the latest master branch. We prefer your initial changes to be squashed into a single commit. Later, if we ask you to make changes, add them as separate commits. This makes them easier to review. As a final step before merging we will either ask you to squash all commits yourself or we'll do it for you.\n+\n \n 4. Submit a pull request\n-Push your local changes to your forked copy of the repository and [submit a pull request](https://help.github.com/articles/using-pull-requests). In the pull request, describe what your changes do and mention the number of the issue where discussion has taken place, eg \"Closes #123\".\n \n-Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into Elasticsearch.\n+ Push your local changes to your forked copy of the repository and [submit a pull request](https://help.github.com/articles/using-pull-requests). In the pull request, choose a title which sums up the changes that you have made, and in the body provide more details about what your changes do. Also mention the number of the issue where discussion has taken place, eg \"Closes #123\".\n \n+Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into Elasticsearch.\n \n Contributing to the Elasticsearch codebase\n ------------------------------------------", "filename": "CONTRIBUTING.md", "status": "modified" }, { "diff": "@@ -37,13 +37,13 @@ First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasti\n h3. Installation\n \n * \"Download\":http://www.elasticsearch.org/download and unzip the Elasticsearch official distribution.\n-* Run @bin/elasticsearch -f@ on unix, or @bin/elasticsearch.bat@ on windows.\n+* Run @bin/elasticsearch@ on unix, or @bin\\elasticsearch.bat@ on windows.\n * Run @curl -X GET http://localhost:9200/@.\n * Start more servers ...\n \n h3. Indexing\n \n-Lets try and index some twitter like information. First, lets create a twitter user, and add some tweets (the @twitter@ index will be created automatically):\n+Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):\n \n <pre>\n curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ \"name\" : \"Shay Banon\" }'\n@@ -63,7 +63,7 @@ curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '\n }'\n </pre>\n \n-Now, lets see if the information was added by GETting it:\n+Now, let's see if the information was added by GETting it:\n \n <pre>\n curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'\n@@ -74,7 +74,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'\n h3. Searching\n \n Mmm search..., shouldn't it be elastic? 
\n-Lets find all the tweets that @kimchy@ posted:\n+Let's find all the tweets that @kimchy@ posted:\n \n <pre>\n curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'\n@@ -86,12 +86,12 @@ We can also use the JSON query language Elasticsearch provides instead of a quer\n curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '\n { \n \"query\" : { \n- \"text\" : { \"user\": \"kimchy\" }\n+ \"match\" : { \"user\": \"kimchy\" }\n } \n }'\n </pre>\n \n-Just for kicks, lets get all the documents stored (we should see the user as well):\n+Just for kicks, let's get all the documents stored (we should see the user as well):\n \n <pre>\n curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '\n@@ -119,7 +119,7 @@ There are many more options to perform search, after all, its a search product n\n \n h3. Multi Tenant - Indices and Types\n \n-Maan, that twitter index might get big (in this case, index size == valuation). Lets see if we can structure our twitter system a bit differently in order to support such large amount of data.\n+Maan, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amount of data.\n \n Elasticsearch support multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.\n \n@@ -184,7 +184,7 @@ curl -XGET 'http://localhost:9200/_search?pretty=true' -d '\n \n h3. Distributed, Highly Available\n \n-Lets face it, things will fail....\n+Let's face it, things will fail....\n \n Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replica. By default, an index is created with 5 shards and 1 replica per shard (5/1). There are many topologies that can be used, including 1/10 (improve search performance), or 20/1 (improve indexing performance, with search executed in a map reduce fashion across shards).\n \n@@ -206,6 +206,10 @@ The distribution will be created under @target/releases@.\n See the \"TESTING\":TESTING.asciidoc file for more information about\n running the Elasticsearch test suite.\n \n+h3. Upgrading to Elasticsearch 1.x?\n+\n+In order to ensure a smooth upgrade process from earlier versions of Elasticsearch (< 1.0.0), it is recommended to perform a full cluster restart. Please see the \"Upgrading\" section of the \"setup reference\":http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html.\n+\n h1. 
License\n \n <pre>", "filename": "README.textile", "status": "modified" }, { "diff": "@@ -2,7 +2,7 @@\n = Testing\n \n [partintro]\n---\n+\n Elasticsearch uses jUnit for testing, it also uses randomness in the\n tests, that can be set using a seed, the following is a cheatsheet of\n options for running the tests for ES.\n@@ -18,16 +18,21 @@ mvn clean package -DskipTests\n \n == Other test options\n \n-To disable and enable network transport, set the `ES_TEST_LOCAL`\n-environment variable.\n+To disable and enable network transport, set the `Des.node.mode`.\n \n-Use network transport (default):\n+Use network transport:\n \n ------------------------------------\n-export ES_TEST_LOCAL=false && mvn test\n+-Des.node.mode=network\n ------------------------------------\n \n-Use local transport:\n+Use local transport (default since 1.3):\n+\n+-------------------------------------\n+-Des.node.mode=local\n+-------------------------------------\n+\n+Alternatively, you can set the `ES_TEST_LOCAL` environment variable:\n \n -------------------------------------\n export ES_TEST_LOCAL=true && mvn test\n@@ -57,6 +62,29 @@ Run any test methods that contain 'esi' (like: ...r*esi*ze...).\n mvn test \"-Dtests.method=*esi*\"\n -------------------------------\n \n+You can also filter tests by certain annotations ie:\n+\n+ * `@Slow` - tests that are know to take a long time to execute\n+ * `@Nightly` - tests that only run in nightly builds (disabled by default)\n+ * `@Integration` - integration tests\n+ * `@Backwards` - backwards compatibility tests (disabled by default)\n+ * `@AwaitsFix` - tests that are waiting for a bugfix (disabled by default)\n+ * `@BadApple` - tests that are known to fail randomly (disabled by default)\n+\n+Those annotation names can be combined into a filter expression like:\n+\n+------------------------------------------------\n+mvn test -Dtests.filter=\"@nightly and not @slow\" \n+------------------------------------------------\n+\n+to run all nightly test but not the ones that are slow. 
`tests.filter` supports\n+the boolean operators `and, or, not` and grouping ie:\n+\n+\n+---------------------------------------------------------------\n+mvn test -Dtests.filter=\"@nightly and not(@slow or @backwards)\" \n+---------------------------------------------------------------\n+\n === Seed and repetitions.\n \n Run with a given seed (seed is a hex-encoded long).\n@@ -67,19 +95,20 @@ mvn test -Dtests.seed=DEADBEEF\n \n === Repeats _all_ tests of ClassName N times.\n \n-Every test repetition will have a different seed.\n+Every test repetition will have a different method seed \n+(derived from a single random master seed).\n \n --------------------------------------------------\n mvn test -Dtests.iters=N -Dtests.class=*.ClassName\n --------------------------------------------------\n \n === Repeats _all_ tests of ClassName N times.\n \n-Every test repetition will have exactly the same master (dead) and\n-method-level (beef) seed.\n+Every test repetition will have exactly the same master (0xdead) and\n+method-level (0xbeef) seed.\n \n ------------------------------------------------------------------------\n-mvn test -Dtests.iters=N -Dtests.class=*.ClassName -Dtests.seed=DEADBEEF\n+mvn test -Dtests.iters=N -Dtests.class=*.ClassName -Dtests.seed=DEAD:BEEF\n ------------------------------------------------------------------------\n \n === Repeats a given test N times\n@@ -114,20 +143,28 @@ mvn test -Dtests.slow=[true] - slow tests (@Slow)\n \n === Load balancing and caches.\n \n-Run sequentially (one slave JVM). By default, the tests run with 3\n-concurrent JVMs.\n+By default, the tests run sequentially on a single forked JVM. \n \n-----------------------------\n-mvn test -Dtests.jvms=1 test\n-----------------------------\n-\n-Run with more slave JVMs than the default. Don't count hypercores for\n-CPU-intense tests. Make sure there is enough RAM to handle child JVMs.\n+To run with more forked JVMs than the default use:\n \n ----------------------------\n mvn test -Dtests.jvms=8 test\n ----------------------------\n \n+Don't count hypercores for CPU-intense tests and leave some slack\n+for JVM-internal threads (like the garbage collector). Make sure there is \n+enough RAM to handle child JVMs.\n+\n+=== Test compatibility.\n+\n+It is possible to provide a version that allows to adapt the tests behaviour\n+to older features or bugs that have been changed or fixed in the meantime.\n+\n+-----------------------------------------\n+mvn test -Dtests.compatibility=1.0.0\n+-----------------------------------------\n+\n+\n === Miscellaneous.\n \n Run all tests without stopping on errors (inspect log files).\n@@ -142,7 +179,8 @@ Run more verbose output (slave JVM parameters, etc.).\n mvn test -verbose test\n ----------------------\n \n-Change the default suite timeout to 5 seconds.\n+Change the default suite timeout to 5 seconds for all\n+tests (note the exclamation mark).\n \n ---------------------------------------\n mvn test -Dtests.timeoutSuite=5000! 
...\n@@ -161,6 +199,45 @@ even if tests are passing.\n mvn test -Dtests.output=always\n ------------------------------\n \n+Configure the heap size.\n+\n+------------------------------\n+mvn test -Dtests.heap.size=512m \n+------------------------------\n+\n+Pass arbitrary jvm arguments.\n+\n+------------------------------\n+mvn test -Dtests.jvm.argline=\"-XX:HeapDumpPath=/path/to/heapdumps\"\n+------------------------------\n+\n+== Backwards Compatibility Tests\n+\n+Running backwards compatibility tests is disabled by default since it\n+requires a release version of elasticsearch to be present on the test system.\n+To run backwards compatibiilty tests untar or unzip a release and run the tests\n+with the following command:\n+\n+---------------------------------------------------------------------------\n+mvn test -Dtests.filter=\"@backwards\" -Dtests.bwc.version=x.y.z -Dtests.bwc.path=/path/to/elasticsearch\n+---------------------------------------------------------------------------\n+\n+If the elasticsearch release is placed under `./backwards/elasticsearch-x.y.z` the path\n+can be omitted:\n+ \n+---------------------------------------------------------------------------\n+mvn test -Dtests.filter=\"@backwards\" -Dtests.bwc.version=x.y.z\n+---------------------------------------------------------------------------\n+\n+To setup the bwc test environment execute the following steps (provided you are\n+already in your elasticsearch clone):\n+\n+---------------------------------------------------------------------------\n+$ mkdir backwards && cd backwards\n+$ curl -O https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.2.1.tar.gz\n+$ tar -xzf elasticsearch-1.2.1.tar.gz \n+---------------------------------------------------------------------------\n+\n == Testing the REST layer\n \n The available integration tests make use of the java API to communicate with\n@@ -180,19 +257,23 @@ mvn test -Dtests.class=org.elasticsearch.test.rest.ElasticsearchRestTests\n `ElasticsearchRestTests` is the executable test class that runs all the\n yaml suites available within the `rest-api-spec` folder.\n \n-The following are the options supported by the REST tests runner:\n+The REST tests support all the options provided by the randomized runner, plus the following:\n \n-* `tests.rest[true|false|host:port]`: determines whether the REST tests need\n-to be run and if so whether to rely on an external cluster (providing host\n-and port) or fire a test cluster (default)\n+* `tests.rest[true|false]`: determines whether the REST tests need to be run (default) or not.\n * `tests.rest.suite`: comma separated paths of the test suites to be run\n (by default loaded from /rest-api-spec/test). It is possible to run only a subset\n of the tests providing a sub-folder or even a single yaml file (the default\n /rest-api-spec/test prefix is optional when files are loaded from classpath)\n e.g. -Dtests.rest.suite=index,get,create/10_with_id\n+* `tests.rest.blacklist`: comma separated globs that identify tests that are\n+blacklisted and need to be skipped\n+e.g. 
-Dtests.rest.blacklist=index/*/Index document,get/10_basic/*\n * `tests.rest.spec`: REST spec path (default /rest-api-spec/api)\n-* `tests.iters`: runs multiple iterations\n-* `tests.seed`: seed to base the random behaviours on\n-* `tests.appendseed[true|false]`: enables adding the seed to each test\n-section's description (default false)\n-* `tests.cluster_seed`: seed used to create the test cluster (if enabled)\n+\n+Note that the REST tests, like all the integration tests, can be run against an external\n+cluster by specifying the `tests.cluster` property, which if present needs to contain a\n+comma separated list of nodes to connect to (e.g. localhost:9300). A transport client will\n+be created based on that and used for all the before|after test operations, and to extract\n+the http addresses of the nodes so that REST requests can be sent to them.\n+\n+", "filename": "TESTING.asciidoc", "status": "modified" }, { "diff": "@@ -135,6 +135,11 @@ launch_service()\n es_parms=\"$es_parms -Des.pidfile=$pidpath\"\n fi\n \n+ # Make sure we dont use any predefined locale, as we check some exception message strings and rely on english language\n+ # As those strings are created by the OS, they are dependant on the configured locale\n+ LANG=en_US.UTF-8\n+ LC_ALL=en_US.UTF-8\n+\n # The es-foreground option will tell Elasticsearch not to close stdout/stderr, but it's up to us not to daemonize.\n if [ \"x$daemonized\" = \"x\" ]; then\n es_parms=\"$es_parms -Des.foreground=yes\"", "filename": "bin/elasticsearch", "status": "modified" }, { "diff": "@@ -62,9 +62,14 @@ REM The path to the heap dump location, note directory must exists and have enou\n REM space for a full heap dump.\n REM JAVA_OPTS=%JAVA_OPTS% -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof\n \n+REM Disables explicit GC\n+set JAVA_OPTS=%JAVA_OPTS% -XX:+DisableExplicitGC\n+\n set ES_CLASSPATH=%ES_CLASSPATH%;%ES_HOME%/lib/${project.build.finalName}.jar;%ES_HOME%/lib/*;%ES_HOME%/lib/sigar/*\n set ES_PARAMS=-Delasticsearch -Des-foreground=yes -Des.path.home=\"%ES_HOME%\"\n \n+TITLE Elasticsearch ${project.version}\n+\n \"%JAVA_HOME%\\bin\\java\" %JAVA_OPTS% %ES_JAVA_OPTS% %ES_PARAMS% %* -cp \"%ES_CLASSPATH%\" \"org.elasticsearch.bootstrap.Elasticsearch\"\n goto finally\n ", "filename": "bin/elasticsearch.bat", "status": "modified" }, { "diff": "@@ -62,3 +62,6 @@ JAVA_OPTS=\"$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError\"\n # The path to the heap dump location, note directory must exists and have enough\n # space for a full heap dump.\n #JAVA_OPTS=\"$JAVA_OPTS -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof\"\n+\n+# Disables explicit GC\n+JAVA_OPTS=\"$JAVA_OPTS -XX:+DisableExplicitGC\"", "filename": "bin/elasticsearch.in.sh", "status": "modified" }, { "diff": "@@ -7,6 +7,7 @@ if NOT DEFINED JAVA_HOME goto err\n set SCRIPT_DIR=%~dp0\n for %%I in (\"%SCRIPT_DIR%..\") do set ES_HOME=%%~dpfI\n \n+TITLE Elasticsearch Plugin Manager ${project.version}\n \n \"%JAVA_HOME%\\bin\\java\" %JAVA_OPTS% -Xmx64m -Xms16m -Des.path.home=\"%ES_HOME%\" -cp \"%ES_HOME%/lib/*;\" \"org.elasticsearch.plugins.PluginManager\" %*\n goto finally", "filename": "bin/plugin.bat", "status": "modified" }, { "diff": "@@ -8,7 +8,7 @@ for %%I in (\"%SCRIPT_DIR%..\") do set ES_HOME=%%~dpfI\n \n rem Detect JVM version to figure out appropriate executable to use\n if not exist \"%JAVA_HOME%\\bin\\java.exe\" (\n-echo JAVA_HOME points to an invalid Java installation (no java.exe found in \"%JAVA_HOME%\"^). 
Existing...\n+echo JAVA_HOME points to an invalid Java installation (no java.exe found in \"%JAVA_HOME%\"^). Exiting...\n goto:eof\n )\n \"%JAVA_HOME%\\bin\\java\" -version 2>&1 | find \"64-Bit\" >nul:\n@@ -43,6 +43,8 @@ set SERVICE_ID=%1\n \n if \"%LOG_OPTS%\" == \"\" set LOG_OPTS=--LogPath \"%LOG_DIR%\" --LogPrefix \"%SERVICE_ID%\" --StdError auto --StdOutput auto\n \n+TITLE Elasticsearch Service ${project.version}\n+\n if /i %SERVICE_CMD% == install goto doInstall\n if /i %SERVICE_CMD% == remove goto doRemove\n if /i %SERVICE_CMD% == start goto doStart\n@@ -160,6 +162,9 @@ REM The path to the heap dump location, note directory must exists and have enou\n REM space for a full heap dump.\n REM JAVA_OPTS=%JAVA_OPTS% -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof\n \n+REM Disables explicit GC\n+set JAVA_OPTS=%JAVA_OPTS% -XX:+DisableExplicitGC\n+\n if \"%DATA_DIR%\" == \"\" set DATA_DIR=%ES_HOME%\\data\n \n if \"%WORK_DIR%\" == \"\" set WORK_DIR=%ES_HOME%", "filename": "bin/service.bat", "status": "modified" }, { "diff": "@@ -18,7 +18,7 @@\n # Any element in the configuration can be replaced with environment variables\n # by placing them in ${...} notation. For example:\n #\n-# node.rack: ${RACK_ENV_VAR}\n+#node.rack: ${RACK_ENV_VAR}\n \n # For information on supported formats and syntax for the config file, see\n # <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html>\n@@ -29,62 +29,64 @@\n # Cluster name identifies your cluster for auto-discovery. If you're running\n # multiple clusters on the same network, make sure you're using unique names.\n #\n-# cluster.name: elasticsearch\n+#cluster.name: elasticsearch\n \n \n #################################### Node #####################################\n \n # Node names are generated dynamically on startup, so you're relieved\n # from configuring them manually. You can tie this node to a specific name:\n #\n-# node.name: \"Franz Kafka\"\n+#node.name: \"Franz Kafka\"\n \n # Every node can be configured to allow or deny being eligible as the master,\n # and to allow or deny to store the data.\n #\n # Allow this node to be eligible as a master node (enabled by default):\n #\n-# node.master: true\n+#node.master: true\n #\n # Allow this node to store data (enabled by default):\n #\n-# node.data: true\n+#node.data: true\n \n # You can exploit these settings to design advanced cluster topologies.\n #\n # 1. You want this node to never become a master node, only to hold data.\n # This will be the \"workhorse\" of your cluster.\n #\n-# node.master: false\n-# node.data: true\n+#node.master: false\n+#node.data: true\n #\n # 2. You want this node to only serve as a master: to not store any data and\n # to have free resources. This will be the \"coordinator\" of your cluster.\n #\n-# node.master: true\n-# node.data: false\n+#node.master: true\n+#node.data: false\n #\n # 3. 
You want this node to be neither master nor data node, but\n # to act as a \"search load balancer\" (fetching data from nodes,\n # aggregating results, etc.)\n #\n-# node.master: false\n-# node.data: false\n+#node.master: false\n+#node.data: false\n \n # Use the Cluster Health API [http://localhost:9200/_cluster/health], the\n-# Node Info API [http://localhost:9200/_cluster/nodes] or GUI tools\n-# such as <http://github.com/lukas-vlcek/bigdesk> and\n+# Node Info API [http://localhost:9200/_nodes] or GUI tools\n+# such as <http://www.elasticsearch.org/overview/marvel/>,\n+# <http://github.com/karmi/elasticsearch-paramedic>,\n+# <http://github.com/lukas-vlcek/bigdesk> and\n # <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.\n \n # A node can have generic attributes associated with it, which can later be used\n # for customized shard allocation filtering, or allocation awareness. An attribute\n # is a simple key value pair, similar to node.key: value, here is an example:\n #\n-# node.rack: rack314\n+#node.rack: rack314\n \n # By default, multiple nodes are allowed to start from the same installation location\n # to disable it, set the following:\n-# node.max_local_storage_nodes: 1\n+#node.max_local_storage_nodes: 1\n \n \n #################################### Index ####################################\n@@ -102,17 +104,17 @@\n \n # Set the number of shards (splits) of an index (5 by default):\n #\n-# index.number_of_shards: 5\n+#index.number_of_shards: 5\n \n # Set the number of replicas (additional copies) of an index (1 by default):\n #\n-# index.number_of_replicas: 1\n+#index.number_of_replicas: 1\n \n # Note, that for development on a local machine, with small indices, it usually\n # makes sense to \"disable\" the distributed features:\n #\n-# index.number_of_shards: 1\n-# index.number_of_replicas: 0\n+#index.number_of_shards: 1\n+#index.number_of_replicas: 0\n \n # These settings directly affect the performance of index and search operations\n # in your cluster. Assuming you have enough machines to hold shards and\n@@ -140,36 +142,36 @@\n \n # Path to directory containing configuration (this file and logging.yml):\n #\n-# path.conf: /path/to/conf\n+#path.conf: /path/to/conf\n \n # Path to directory where to store index data allocated for this node.\n #\n-# path.data: /path/to/data\n+#path.data: /path/to/data\n #\n # Can optionally include more than one location, causing data to be striped across\n # the locations (a la RAID 0) on a file level, favouring locations with most free\n # space on creation. 
For example:\n #\n-# path.data: /path/to/data1,/path/to/data2\n+#path.data: /path/to/data1,/path/to/data2\n \n # Path to temporary files:\n #\n-# path.work: /path/to/work\n+#path.work: /path/to/work\n \n # Path to log files:\n #\n-# path.logs: /path/to/logs\n+#path.logs: /path/to/logs\n \n # Path to where plugins are installed:\n #\n-# path.plugins: /path/to/plugins\n+#path.plugins: /path/to/plugins\n \n \n #################################### Plugin ###################################\n \n # If a plugin listed here is not installed for current node, the node will not start.\n #\n-# plugin.mandatory: mapper-attachments,lang-groovy\n+#plugin.mandatory: mapper-attachments,lang-groovy\n \n \n ################################### Memory ####################################\n@@ -179,7 +181,7 @@\n #\n # Set this property to true to lock the memory:\n #\n-# bootstrap.mlockall: true\n+#bootstrap.mlockall: true\n \n # Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set\n # to the same value, and that the machine has enough memory to allocate\n@@ -198,36 +200,36 @@\n \n # Set the bind address specifically (IPv4 or IPv6):\n #\n-# network.bind_host: 192.168.0.1\n+#network.bind_host: 192.168.0.1\n \n # Set the address other nodes will use to communicate with this node. If not\n # set, it is automatically derived. It must point to an actual IP address.\n #\n-# network.publish_host: 192.168.0.1\n+#network.publish_host: 192.168.0.1\n \n # Set both 'bind_host' and 'publish_host':\n #\n-# network.host: 192.168.0.1\n+#network.host: 192.168.0.1\n \n # Set a custom port for the node to node communication (9300 by default):\n #\n-# transport.tcp.port: 9300\n+#transport.tcp.port: 9300\n \n # Enable compression for all communication between nodes (disabled by default):\n #\n-# transport.tcp.compress: true\n+#transport.tcp.compress: true\n \n # Set a custom port to listen for HTTP traffic:\n #\n-# http.port: 9200\n+#http.port: 9200\n \n # Set a custom allowed content length:\n #\n-# http.max_content_length: 100mb\n+#http.max_content_length: 100mb\n \n # Disable HTTP completely:\n #\n-# http.enabled: false\n+#http.enabled: false\n \n \n ################################### Gateway ###################################\n@@ -242,26 +244,26 @@\n \n # The default gateway type is the \"local\" gateway (recommended):\n #\n-# gateway.type: local\n+#gateway.type: local\n \n # Settings below control how and when to start the initial recovery process on\n # a full cluster restart (to reuse as much local data as possible when using shared\n # gateway).\n \n # Allow recovery process after N nodes in a cluster are up:\n #\n-# gateway.recover_after_nodes: 1\n+#gateway.recover_after_nodes: 1\n \n # Set the timeout to initiate the recovery process, once the N nodes\n # from previous setting are up (accepts time value):\n #\n-# gateway.recover_after_time: 5m\n+#gateway.recover_after_time: 5m\n \n # Set how many nodes are expected in this cluster. Once these N nodes\n # are up (and recover_after_nodes is met), begin recovery process immediately\n # (without waiting for recover_after_time to expire):\n #\n-# gateway.expected_nodes: 2\n+#gateway.expected_nodes: 2\n \n \n ############################# Recovery Throttling #############################\n@@ -274,20 +276,20 @@\n #\n # 1. During the initial recovery\n #\n-# cluster.routing.allocation.node_initial_primaries_recoveries: 4\n+#cluster.routing.allocation.node_initial_primaries_recoveries: 4\n #\n # 2. 
During adding/removing nodes, rebalancing, etc\n #\n-# cluster.routing.allocation.node_concurrent_recoveries: 2\n+#cluster.routing.allocation.node_concurrent_recoveries: 2\n \n # Set to throttle throughput when recovering (eg. 100mb, by default 20mb):\n #\n-# indices.recovery.max_bytes_per_sec: 20mb\n+#indices.recovery.max_bytes_per_sec: 20mb\n \n # Set to limit the number of open concurrent streams when\n # recovering a shard from a peer:\n #\n-# indices.recovery.concurrent_streams: 5\n+#indices.recovery.concurrent_streams: 5\n \n \n ################################## Discovery ##################################\n@@ -299,13 +301,13 @@\n # operational within the cluster. Its recommended to set it to a higher value\n # than 1 when running more than 2 nodes in the cluster.\n #\n-# discovery.zen.minimum_master_nodes: 1\n+#discovery.zen.minimum_master_nodes: 1\n \n # Set the time to wait for ping responses from other nodes when discovering.\n # Set this option to a higher value on a slow or congested network\n # to minimize discovery failures:\n #\n-# discovery.zen.ping.timeout: 3s\n+#discovery.zen.ping.timeout: 3s\n \n # For more information, see\n # <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html>\n@@ -316,12 +318,12 @@\n #\n # 1. Disable multicast discovery (enabled by default):\n #\n-# discovery.zen.ping.multicast.enabled: false\n+#discovery.zen.ping.multicast.enabled: false\n #\n # 2. Configure an initial list of master nodes in the cluster\n # to perform discovery when new nodes (master or data) are started:\n #\n-# discovery.zen.ping.unicast.hosts: [\"host1\", \"host2:port\"]\n+#discovery.zen.ping.unicast.hosts: [\"host1\", \"host2:port\"]\n \n # EC2 discovery allows to use AWS EC2 API in order to perform discovery.\n #\n@@ -333,6 +335,17 @@\n # See <http://elasticsearch.org/tutorials/elasticsearch-on-ec2/>\n # for a step-by-step tutorial.\n \n+# GCE discovery allows to use Google Compute Engine API in order to perform discovery.\n+#\n+# You have to install the cloud-gce plugin for enabling the GCE discovery.\n+#\n+# For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-gce>.\n+\n+# Azure discovery allows to use Azure API in order to perform discovery.\n+#\n+# You have to install the cloud-azure plugin for enabling the Azure discovery.\n+#\n+# For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-azure>.\n \n ################################## Slow Log ##################################\n \n@@ -362,3 +375,11 @@\n #monitor.jvm.gc.old.warn: 10s\n #monitor.jvm.gc.old.info: 5s\n #monitor.jvm.gc.old.debug: 2s\n+\n+################################## Security ################################\n+\n+# Uncomment if you want to enable JSONP as a valid return transport on the\n+# http server. 
With this enabled, it may pose a security risk, so disabling\n+# it unless you need it is recommended (it is disabled by default).\n+#\n+#http.jsonp.enable: true", "filename": "config/elasticsearch.yml", "status": "modified" }, { "diff": "@@ -40,8 +40,9 @@ def get_jdk\n \n # do ranomize selection from a given array\n def select_one(selection_array = nil)\n- selection_array ||= @jdk_list\n+ selection_array = filter_java_6(selection_array || @jdk_list)\n selection_array[rand(selection_array.size)]\n+\n get_random_one(selection_array)\n end\n end\n@@ -50,33 +51,36 @@ def get_random_one(data_array)\n data_array[rand(data_array.size)]\n end\n \n+def filter_java_6(files)\n+ files.select{ |i| File.basename(i).split(/[^0-9]/)[-1].to_i > 6 }\n+end\n+\n # given a jdk directory selection, generate relevant environment variables\n def get_env_matrix(data_array)\n \n #refactoring target\n es_test_jvm_option1 = get_random_one(['-server']) #only server for now get_random_one(['-client', '-server'])\n- greater_than_six = File.basename([*data_array].first).split(/[^0-9]/)[-1].to_i > 6\n- es_test_jvm_option2 = greater_than_six ? get_random_one(['-XX:+UseConcMarkSweepGC', '-XX:+UseParallelGC', '-XX:+UseSerialGC',\n- '-XX:+UseG1GC']) :\n- get_random_one(['-XX:+UseConcMarkSweepGC', '-XX:+UseParallelGC', '-XX:+UseSerialGC'])\n+ es_test_jvm_option2 = get_random_one(['-XX:+UseConcMarkSweepGC', '-XX:+UseParallelGC', '-XX:+UseSerialGC', '-XX:+UseG1GC'])\n \n es_test_jvm_option3 = get_random_one(['-XX:+UseCompressedOops', '-XX:-UseCompressedOops'])\n es_node_mode = get_random_one(['local', 'network'])\n tests_nightly = get_random_one([true, false])\n tests_nightly = get_random_one([false]) #bug\n \n test_assert_off = (rand(10) == 9) #10 percent chance turning it off \n-\n+ tests_security_manager = (rand(10) != 9) #10 percent chance running without security manager\n+ arg_line = [es_test_jvm_option1, es_test_jvm_option2, es_test_jvm_option3]\n [*data_array].map do |x|\n data_hash = {\n 'PATH' => File.join(x,'bin') + ':' + ENV['PATH'],\n 'JAVA_HOME' => x,\n- 'BUILD_DESC' => \"%s,%s,%s%s,%s %s%s\"%[File.basename(x), es_node_mode, tests_nightly ? 'nightly,':'',\n+ 'BUILD_DESC' => \"%s,%s,%s%s,%s %s%s%s\"%[File.basename(x), es_node_mode, tests_nightly ? 'nightly,':'',\n es_test_jvm_option1[1..-1], es_test_jvm_option2[4..-1], es_test_jvm_option3[4..-1],\n- test_assert_off ? ',assert off' : ''], \n+ test_assert_off ? ',assert off' : '', tests_security_manager ? 
', security manager enabled' : ''], \n 'es.node.mode' => es_node_mode,\n 'tests.nightly' => tests_nightly,\n- 'tests.jvm.argline' => \"%s %s %s\"%[es_test_jvm_option1, es_test_jvm_option2, es_test_jvm_option3],\n+ 'tests.security.manager' => tests_security_manager,\n+ 'tests.jvm.argline' => arg_line.join(\" \"),\n }\n data_hash['tests.assertion.disabled'] = 'org.elasticsearch' if test_assert_off\n data_hash\n@@ -96,11 +100,8 @@ def generate_property_file(directory, data)\n end\n end\n \n-\n-if(ENV['WORKSPACE'])\n- #jenkin mode\n- working_directory = ENV['WORKSPACE']\n-else\n+working_directory = ENV['WORKSPACE'] || '/var/tmp'\n+unless(ENV['BUILD_ID'])\n #local mode set up fake environment \n test_directory = 'tools/hudson.model.JDK/'\n unless(File.exist?(test_directory))", "filename": "dev-tools/build_randomization.rb", "status": "modified" }, { "diff": "@@ -28,8 +28,11 @@\n import fnmatch\n import socket\n import urllib.request\n+import subprocess\n \n from http.client import HTTPConnection\n+from http.client import HTTPSConnection\n+\n \n \"\"\" \n This tool builds a release from the a given elasticsearch branch.\n@@ -40,13 +43,13 @@\n '--publish' option is set the actual release is done. The script takes over almost all\n steps necessary for a release from a high level point of view it does the following things:\n \n- - run prerequisit checks ie. check for Java 1.6 being presend or S3 credentials available as env variables\n+ - run prerequisit checks ie. check for Java 1.7 being presend or S3 credentials available as env variables\n - detect the version to release from the specified branch (--branch) or the current branch\n - creates a release branch & updates pom.xml and Version.java to point to a release version rather than a snapshot\n - builds the artifacts and runs smoke-tests on the build zip & tar.gz files\n - commits the new version and merges the release branch into the source branch\n - creates a tag and pushes the commit to the specified origin (--remote)\n- - publishes the releases to sonar-type and S3\n+ - publishes the releases to Sonatype and S3\n \n Once it's done it will print all the remaining steps.\n \n@@ -87,38 +90,36 @@ def run(command, quiet=False):\n except KeyError:\n raise RuntimeError(\"\"\"\n Please set JAVA_HOME in the env before running release tool\n- On OSX use: export JAVA_HOME=`/usr/libexec/java_home -v '1.6*'`\"\"\")\n+ On OSX use: export JAVA_HOME=`/usr/libexec/java_home -v '1.7*'`\"\"\")\n \n try:\n- JAVA_HOME = env['JAVA6_HOME']\n+ JAVA_HOME = env['JAVA7_HOME']\n except KeyError:\n- pass #no JAVA6_HOME - we rely on JAVA_HOME\n+ pass #no JAVA7_HOME - we rely on JAVA_HOME\n \n \n try:\n- MVN='mvn'\n # make sure mvn3 is used if mvn3 is available\n # some systems use maven 2 as default\n- run('mvn3 --version', quiet=True)\n- MVN='mvn3'\n-except RuntimeError:\n- pass\n-\n+ subprocess.check_output('mvn3 --version', shell=True, stderr=subprocess.STDOUT)\n+ MVN = 'mvn3'\n+except subprocess.CalledProcessError:\n+ MVN = 'mvn'\n \n def java_exe():\n path = JAVA_HOME\n return 'export JAVA_HOME=\"%s\" PATH=\"%s/bin:$PATH\" JAVACMD=\"%s/bin/java\"' % (path, path, path)\n \n def verify_java_version(version):\n s = os.popen('%s; java -version 2>&1' % java_exe()).read()\n- if s.find(' version \"%s.' % version) == -1:\n+ if ' version \"%s.' % version not in s:\n raise RuntimeError('got wrong version for java %s:\\n%s' % (version, s))\n \n-# Verifies the java version. 
We guarantee that we run with Java 1.6\n-# If 1.6 is not available fail the build!\n+# Verifies the java version. We guarantee that we run with Java 1.7\n+# If 1.7 is not available fail the build!\n def verify_mvn_java_version(version, mvn):\n s = os.popen('%s; %s --version 2>&1' % (java_exe(), mvn)).read()\n- if s.find('Java version: %s' % version) == -1:\n+ if 'Java version: %s' % version not in s:\n raise RuntimeError('got wrong java version for %s %s:\\n%s' % (mvn, version, s))\n \n # Returns the hash of the current git HEAD revision\n@@ -133,8 +134,8 @@ def get_tag_hash(tag):\n def get_current_branch():\n return os.popen('git rev-parse --abbrev-ref HEAD 2>&1').read().strip()\n \n-verify_java_version('1.6') # we require to build with 1.6\n-verify_mvn_java_version('1.6', MVN)\n+verify_java_version('1.7') # we require to build with 1.7\n+verify_mvn_java_version('1.7', MVN)\n \n # Utility that returns the name of the release branch for a given version\n def release_branch(version):\n@@ -222,22 +223,29 @@ def add_pending_files(*files):\n def commit_release(release):\n run('git commit -m \"release [%s]\"' % release)\n \n+def commit_feature_flags(release):\n+ run('git commit -m \"Update Documentation Feature Flags [%s]\"' % release)\n+\n def tag_release(release):\n run('git tag -a v%s -m \"Tag release version %s\"' % (release, release))\n \n def run_mvn(*cmd):\n for c in cmd:\n run('%s; %s %s' % (java_exe(), MVN, c))\n \n-def build_release(run_tests=False, dry_run=True, cpus=1):\n+def build_release(run_tests=False, dry_run=True, cpus=1, bwc_version=None):\n target = 'deploy'\n if dry_run:\n target = 'package'\n if run_tests:\n run_mvn('clean',\n 'test -Dtests.jvms=%s -Des.node.mode=local' % (cpus),\n 'test -Dtests.jvms=%s -Des.node.mode=network' % (cpus))\n- run_mvn('clean %s -DskipTests' %(target))\n+ if bwc_version:\n+ print('Running Backwards compatibilty tests against version [%s]' % (bwc_version))\n+ run_mvn('clean', 'test -Dtests.filter=@backwards -Dtests.bwc.version=%s -Dtests.bwc=true -Dtests.jvms=1' % bwc_version)\n+ run_mvn('clean test-compile -Dforbidden.test.signatures=\"org.apache.lucene.util.LuceneTestCase\\$AwaitsFix @ Please fix all bugs before release\"')\n+ run_mvn('clean %s -DskipTests' % (target))\n success = False\n try:\n run_mvn('-DskipTests rpm:rpm')\n@@ -251,7 +259,32 @@ def build_release(run_tests=False, dry_run=True, cpus=1):\n $ apt-get install rpm # on Ubuntu et.al\n \"\"\")\n \n-\n+# Uses the github API to fetch open tickets for the given release version\n+# if it finds any tickets open for that version it will throw an exception\n+def ensure_no_open_tickets(version):\n+ version = \"v%s\" % version\n+ conn = HTTPSConnection('api.github.com')\n+ try:\n+ log('Checking for open tickets on Github for version %s' % version)\n+ log('Check if node is available')\n+ conn.request('GET', '/repos/elasticsearch/elasticsearch/issues?state=open&labels=%s' % version, headers= {'User-Agent' : 'Elasticsearch version checker'})\n+ res = conn.getresponse()\n+ if res.status == 200:\n+ issues = json.loads(res.read().decode(\"utf-8\"))\n+ if issues:\n+ urls = []\n+ for issue in issues:\n+ urls.append(issue['url'])\n+ raise RuntimeError('Found open issues for release version %s see - %s' % (version, urls))\n+ else:\n+ log(\"No open issues found for version %s\" % version)\n+ else:\n+ raise RuntimeError('Failed to fetch issue list from Github for release version %s' % version)\n+ except socket.error as e:\n+ log(\"Failed to fetch issue list from Github for release version %s' % version 
- Exception: [%s]\" % (version, e))\n+ #that is ok it might not be there yet\n+ finally:\n+ conn.close()\n \n def wait_for_node_startup(host='127.0.0.1', port=9200,timeout=15):\n for _ in range(timeout):\n@@ -314,7 +347,7 @@ def generate_checksums(files):\n directory = os.path.dirname(release_file)\n file = os.path.basename(release_file)\n checksum_file = '%s.sha1.txt' % file\n- \n+\n if os.system('cd %s; shasum %s > %s' % (directory, file, checksum_file)):\n raise RuntimeError('Failed to generate checksum for file %s' % release_file)\n res = res + [os.path.join(directory, checksum_file), release_file]\n@@ -348,27 +381,27 @@ def smoke_test_release(release, files, expected_hash, plugins):\n raise RuntimeError('Smoketest failed missing file %s' % (release_file))\n tmp_dir = tempfile.mkdtemp()\n if release_file.endswith('tar.gz'):\n- run('tar -xzf %s -C %s' % (release_file, tmp_dir)) \n+ run('tar -xzf %s -C %s' % (release_file, tmp_dir))\n elif release_file.endswith('zip'):\n- run('unzip %s -d %s' % (release_file, tmp_dir)) \n+ run('unzip %s -d %s' % (release_file, tmp_dir))\n else:\n log('Skip SmokeTest for [%s]' % release_file)\n- continue # nothing to do here \n+ continue # nothing to do here\n es_run_path = os.path.join(tmp_dir, 'elasticsearch-%s' % (release), 'bin/elasticsearch')\n print(' Smoke testing package [%s]' % release_file)\n es_plugin_path = os.path.join(tmp_dir, 'elasticsearch-%s' % (release),'bin/plugin')\n plugin_names = {}\n for name, plugin in plugins:\n print(' Install plugin [%s] from [%s]' % (name, plugin))\n- run('%s %s %s' % (es_plugin_path, '-install', plugin))\n+ run('%s; %s %s %s' % (java_exe(), es_plugin_path, '-install', plugin))\n plugin_names[name] = True\n \n if release.startswith(\"0.90.\"):\n background = '' # 0.90.x starts in background automatically\n else:\n background = '-d'\n print(' Starting elasticsearch deamon from [%s]' % os.path.join(tmp_dir, 'elasticsearch-%s' % release))\n- run('%s; %s -Des.node.name=smoke_tester -Des.cluster.name=prepare_release -Des.discovery.zen.ping.multicast.enabled=false %s'\n+ run('%s; %s -Des.node.name=smoke_tester -Des.cluster.name=prepare_release -Des.discovery.zen.ping.multicast.enabled=false -Des.node.bench=true -Des.script.disable_dynamic=false %s'\n % (java_exe(), es_run_path, background))\n conn = HTTPConnection('127.0.0.1', 9200, 20);\n wait_for_node_startup()\n@@ -385,7 +418,7 @@ def smoke_test_release(release, files, expected_hash, plugins):\n if version['build_hash'].strip() != expected_hash:\n raise RuntimeError('HEAD hash does not match expected [%s] but got [%s]' % (expected_hash, version['build_hash']))\n print(' Running REST Spec tests against package [%s]' % release_file)\n- run_mvn('test -Dtests.rest=%s -Dtests.class=*.*RestTests' % (\"127.0.0.1:9200\"))\n+ run_mvn('test -Dtests.cluster=%s -Dtests.class=*.*RestTests' % (\"127.0.0.1:9300\"))\n print(' Verify if plugins are listed in _nodes')\n conn.request('GET', '/_nodes?plugin=true&pretty=true')\n res = conn.getresponse()\n@@ -434,17 +467,17 @@ def publish_artifacts(artifacts, base='elasticsearch/elasticsearch', dry_run=Tru\n # requires boto to be installed but it is not available on python3k yet so we use a dedicated tool\n run('python %s/upload-s3.py --file %s ' % (location, os.path.abspath(artifact)))\n \n-def print_sonartype_notice():\n+def print_sonatype_notice():\n settings = os.path.join(os.path.expanduser('~'), '.m2/settings.xml')\n if os.path.isfile(settings):\n with open(settings, encoding='utf-8') as settings_file:\n for line in 
settings_file:\n if line.strip() == '<id>sonatype-nexus-snapshots</id>':\n # moving out - we found the indicator no need to print the warning\n- return \n+ return\n print(\"\"\"\n- NOTE: No sonartype settings detected, make sure you have configured\n- your sonartype credentials in '~/.m2/settings.xml':\n+ NOTE: No sonatype settings detected, make sure you have configured\n+ your sonatype credentials in '~/.m2/settings.xml':\n \n <settings>\n ...\n@@ -468,15 +501,55 @@ def check_s3_credentials():\n if not env.get('AWS_ACCESS_KEY_ID', None) or not env.get('AWS_SECRET_ACCESS_KEY', None):\n raise RuntimeError('Could not find \"AWS_ACCESS_KEY_ID\" / \"AWS_SECRET_ACCESS_KEY\" in the env variables please export in order to upload to S3')\n \n-VERSION_FILE = 'src/main/java/org/elasticsearch/Version.java' \n+VERSION_FILE = 'src/main/java/org/elasticsearch/Version.java'\n POM_FILE = 'pom.xml'\n \n-# we print a notice if we can not find the relevant infos in the ~/.m2/settings.xml \n-print_sonartype_notice()\n+# we print a notice if we can not find the relevant infos in the ~/.m2/settings.xml\n+print_sonatype_notice()\n+\n+# finds the highest available bwc version to test against\n+def find_bwc_version(release_version, bwc_dir='backwards'):\n+ log(' Lookup bwc version in directory [%s]' % bwc_dir)\n+ bwc_version = None\n+ if os.path.exists(bwc_dir) and os.path.isdir(bwc_dir):\n+ max_version = [int(x) for x in release_version.split('.')]\n+ for dir in os.listdir(bwc_dir):\n+ if os.path.isdir(os.path.join(bwc_dir, dir)) and dir.startswith('elasticsearch-'):\n+ version = [int(x) for x in dir[len('elasticsearch-'):].split('.')]\n+ if version < max_version: # bwc tests only against smaller versions\n+ if (not bwc_version) or version > [int(x) for x in bwc_version.split('.')]:\n+ bwc_version = dir[len('elasticsearch-'):]\n+ log(' Using bwc version [%s]' % bwc_version)\n+ else:\n+ log(' bwc directory [%s] does not exists or is not a directory - skipping' % bwc_dir)\n+ return bwc_version\n+\n+def ensure_checkout_is_clean(branchName):\n+ # Make sure no local mods:\n+ s = subprocess.check_output('git diff --shortstat', shell=True)\n+ if len(s) > 0:\n+ raise RuntimeError('git diff --shortstat is non-empty: got:\\n%s' % s)\n+\n+ # Make sure no untracked files:\n+ s = subprocess.check_output('git status', shell=True).decode('utf-8', errors='replace')\n+ if 'Untracked files:' in s:\n+ raise RuntimeError('git status shows untracked files: got:\\n%s' % s)\n+\n+ # Make sure we are on the right branch (NOTE: a bit weak, since we default to current branch):\n+ if 'On branch %s' % branchName not in s:\n+ raise RuntimeError('git status does not show branch %s: got:\\n%s' % (branchName, s))\n+\n+ # Make sure we have all changes from origin:\n+ if 'is behind' in s:\n+ raise RuntimeError('git status shows not all changes pulled from origin; try running \"git pull origin %s\": got:\\n%s' % (branchName, s))\n+\n+ # Make sure we no local unpushed changes (this is supposed to be a clean area):\n+ if 'is ahead' in s:\n+ raise RuntimeError('git status shows local commits; try running \"git fetch origin\", \"git checkout %s\", \"git reset --hard origin/%s\": got:\\n%s' % (branchName, branchName, s))\n \n if __name__ == '__main__':\n parser = argparse.ArgumentParser(description='Builds and publishes a Elasticsearch Release')\n- parser.add_argument('--branch', '-b', metavar='master', default=get_current_branch(),\n+ parser.add_argument('--branch', '-b', metavar='RELEASE_BRANCH', default=get_current_branch(),\n help='The branch to 
release from. Defaults to the current branch.')\n parser.add_argument('--cpus', '-c', metavar='1', default=1,\n help='The number of cpus to use for running the test. Default is [1]')\n@@ -489,30 +562,37 @@ def check_s3_credentials():\n help='Publishes the release. Disable by default.')\n parser.add_argument('--smoke', '-s', dest='smoke', default='',\n help='Smoke tests the given release')\n+ parser.add_argument('--bwc', '-w', dest='bwc', metavar='backwards', default='backwards',\n+ help='Backwards compatibility version path to use to run compatibility tests against')\n \n parser.set_defaults(dryrun=True)\n parser.set_defaults(smoke=None)\n args = parser.parse_args()\n-\n+ bwc_path = args.bwc\n src_branch = args.branch\n remote = args.remote\n run_tests = args.tests\n dry_run = args.dryrun\n cpus = args.cpus\n build = not args.smoke\n smoke_test_version = args.smoke\n+\n+ if os.path.exists(LOG):\n+ raise RuntimeError('please remove old release log %s first' % LOG)\n+ \n if not dry_run:\n check_s3_credentials()\n- print('WARNING: dryrun is set to \"false\" - this will push and publish the release') \n+ print('WARNING: dryrun is set to \"false\" - this will push and publish the release')\n input('Press Enter to continue...')\n \n print(''.join(['-' for _ in range(80)]))\n print('Preparing Release from branch [%s] running tests: [%s] dryrun: [%s]' % (src_branch, run_tests, dry_run))\n print(' JAVA_HOME is [%s]' % JAVA_HOME)\n print(' Running with maven command: [%s] ' % (MVN))\n-\n if build:\n+ ensure_checkout_is_clean(src_branch)\n release_version = find_release_version(src_branch)\n+ ensure_no_open_tickets(release_version)\n if not dry_run:\n smoke_test_version = release_version\n head_hash = get_head_hash()\n@@ -525,19 +605,25 @@ def check_s3_credentials():\n pending_files = [POM_FILE, VERSION_FILE]\n remove_maven_snapshot(POM_FILE, release_version)\n remove_version_snapshot(VERSION_FILE, release_version)\n- pending_files = pending_files + update_reference_docs(release_version)\n print(' Done removing snapshot version')\n add_pending_files(*pending_files) # expects var args use * to expand\n commit_release(release_version)\n+ pending_files = update_reference_docs(release_version)\n+ version_head_hash = None\n+ # split commits for docs and version to enable easy cherry-picking\n+ if pending_files:\n+ add_pending_files(*pending_files) # expects var args use * to expand\n+ commit_feature_flags(release_version)\n+ version_head_hash = get_head_hash()\n print(' Committed release version [%s]' % release_version)\n print(''.join(['-' for _ in range(80)]))\n print('Building Release candidate')\n input('Press Enter to continue...')\n if not dry_run:\n- print(' Running maven builds now and publish to sonartype - run-tests [%s]' % run_tests)\n+ print(' Running maven builds now and publish to Sonatype - run-tests [%s]' % run_tests)\n else:\n print(' Running maven builds now run-tests [%s]' % run_tests)\n- build_release(run_tests=run_tests, dry_run=dry_run, cpus=cpus)\n+ build_release(run_tests=run_tests, dry_run=dry_run, cpus=cpus, bwc_version=find_bwc_version(release_version, bwc_path))\n artifacts = get_artifacts(release_version)\n artifacts_and_checksum = generate_checksums(artifacts)\n smoke_test_release(release_version, artifacts, get_head_hash(), PLUGINS)\n@@ -548,18 +634,21 @@ def check_s3_credentials():\n merge_tag_push(remote, src_branch, release_version, dry_run)\n print(' publish artifacts to S3 -- dry_run: %s' % dry_run)\n publish_artifacts(artifacts_and_checksum, dry_run=dry_run)\n+ 
cherry_pick_command = '.'\n+ if version_head_hash:\n+ cherry_pick_command = ' and cherry-pick the documentation changes: \\'git cherry-pick %s\\' to the development branch' % (version_head_hash)\n pending_msg = \"\"\"\n Release successful pending steps:\n- * create a version tag on github for version 'v%(version)s'\n- * check if there are pending issues for this version (https://github.com/elasticsearch/elasticsearch/issues?labels=v%(version)s&page=1&state=open)\n- * publish the maven artifacts on sonartype: https://oss.sonatype.org/index.html\n+ * create a new vX.Y.Z label on github for the next release, with label color #dddddd (https://github.com/elasticsearch/elasticsearch/labels)\n+ * publish the maven artifacts on Sonatype: https://oss.sonatype.org/index.html\n - here is a guide: https://docs.sonatype.org/display/Repository/Sonatype+OSS+Maven+Repository+Usage+Guide#SonatypeOSSMavenRepositoryUsageGuide-8a.ReleaseIt\n * check if the release is there https://oss.sonatype.org/content/repositories/releases/org/elasticsearch/elasticsearch/%(version)s\n * announce the release on the website / blog post\n * tweet about the release\n * announce the release in the google group/mailinglist\n+ * Move to a Snapshot version to the current branch for the next point release%(cherry_pick)s\n \"\"\"\n- print(pending_msg % { 'version' : release_version} )\n+ print(pending_msg % { 'version' : release_version, 'cherry_pick' : cherry_pick_command} )\n success = True\n finally:\n if not success:", "filename": "dev-tools/build_release.py", "status": "modified" }, { "diff": "@@ -17,5 +17,8 @@\n #\n # This is used for client testings to pull in master, 090 bits\n #\n-URL_MASTER=http://s3-us-west-2.amazonaws.com/build.elasticsearch.org/origin/master/nightly/JDK6/elasticsearch-latest-SNAPSHOT.zip\n+URL_MASTER=http://s3-us-west-2.amazonaws.com/build.elasticsearch.org/origin/master/nightly/JDK7/elasticsearch-latest-SNAPSHOT.zip\n+URL_1x=http://s3-us-west-2.amazonaws.com/build.elasticsearch.org/origin/1.x/nightly/JDK7/elasticsearch-latest-SNAPSHOT.zip\n+URL_11=http://s3-us-west-2.amazonaws.com/build.elasticsearch.org/origin/1.1/nightly/JDK6/elasticsearch-latest-SNAPSHOT.zip\n+URL_10=http://s3-us-west-2.amazonaws.com/build.elasticsearch.org/origin/1.0/nightly/JDK6/elasticsearch-latest-SNAPSHOT.zip\n URL_090=http://s3-us-west-2.amazonaws.com/build.elasticsearch.org/origin/0.90/nightly/JDK6/elasticsearch-latest-SNAPSHOT.zip", "filename": "dev-tools/client_tests_urls.prop", "status": "modified" }, { "diff": "@@ -0,0 +1,65 @@\n+# Licensed to Elasticsearch under one or more contributor\n+# license agreements. See the NOTICE file distributed with\n+# this work for additional information regarding copyright\n+# ownership. Elasticsearch licenses this file to you under\n+# the Apache License, Version 2.0 (the \"License\"); you may\n+# not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on \n+# an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,\n+# either express or implied. 
See the License for the specific\n+# language governing permissions and limitations under the License.\n+\n+import os\n+import sys\n+import argparse\n+try:\n+ import boto.s3\n+except:\n+ raise RuntimeError(\"\"\"\n+ S3 download requires boto to be installed\n+ Use one of:\n+ 'pip install -U boto'\n+ 'apt-get install python-boto'\n+ 'easy_install boto'\n+ \"\"\")\n+\n+import boto.s3\n+\n+\n+def list_buckets(conn):\n+ return conn.get_all_buckets()\n+\n+\n+def download_s3(conn, path, key, file, bucket):\n+ print 'Downloading %s from Amazon S3 bucket %s/%s' % \\\n+ (file, bucket, os.path.join(path, key))\n+ def percent_cb(complete, total):\n+ sys.stdout.write('.')\n+ sys.stdout.flush()\n+ bucket = conn.get_bucket(bucket)\n+ k = bucket.get_key(os.path.join(path, key))\n+ k.get_contents_to_filename(file, cb=percent_cb, num_cb=100)\n+\n+\n+if __name__ == '__main__':\n+ parser = argparse.ArgumentParser(description='Downloads a bucket from Amazon S3')\n+ parser.add_argument('--file', '-f', metavar='path to file',\n+ help='path to store the bucket to', required=True)\n+ parser.add_argument('--bucket', '-b', default='downloads.elasticsearch.org',\n+ help='The S3 Bucket to download from')\n+ parser.add_argument('--path', '-p', default='',\n+ help='The key path to use')\n+ parser.add_argument('--key', '-k', default=None,\n+ help='The key - uses the file name as default key')\n+ args = parser.parse_args()\n+ if args.key:\n+ key = args.key\n+ else:\n+ key = os.path.basename(args.file)\n+ connection = boto.connect_s3()\n+ download_s3(connection, args.path, key, args.file, args.bucket);", "filename": "dev-tools/download-s3.py", "status": "added" }, { "diff": "@@ -0,0 +1,196 @@\n+#!/usr/bin/env perl\n+# Licensed to Elasticsearch under one or more contributor\n+# license agreements. See the NOTICE file distributed with\n+# this work for additional information regarding copyright\n+# ownership. Elasticsearch licenses this file to you under\n+# the Apache License, Version 2.0 (the \"License\"); you may\n+# not use this file except in compliance with the License.\n+# You may obtain a copy of the License at\n+#\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing,\n+# software distributed under the License is distributed on\n+# an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,\n+# either express or implied. 
See the License for the specific\n+# language governing permissions and limitations under the License.\n+\n+use strict;\n+use warnings;\n+\n+use HTTP::Tiny;\n+use IO::Socket::SSL 1.52;\n+\n+my $Base_URL = 'https://api.github.com/repos/';\n+my $User_Repo = 'elasticsearch/elasticsearch/';\n+my $Issue_URL = \"http://github.com/${User_Repo}issues/issue/\";\n+\n+my @Groups = qw(breaking feature enhancement bug regression doc test);\n+my %Group_Labels = (\n+ breaking => 'Breaking changes',\n+ doc => 'Docs',\n+ feature => 'New features',\n+ enhancement => 'Enhancements',\n+ bug => 'Bug fixes',\n+ regression => 'Regression',\n+ test => 'Tests',\n+ other => 'Not classified',\n+);\n+\n+use JSON();\n+use Encode qw(encode_utf8);\n+\n+my $json = JSON->new->utf8(1);\n+\n+my %All_Labels = fetch_labels();\n+\n+my $version = shift @ARGV\n+ or dump_labels();\n+\n+dump_labels(\"Unknown version '$version'\")\n+ unless $All_Labels{$version};\n+\n+my $format = shift @ARGV || \"html\";\n+\n+my $issues = fetch_issues($version);\n+dump_issues( $version, $issues );\n+\n+#===================================\n+sub dump_issues {\n+#===================================\n+ my $version = shift;\n+ my $issues = shift;\n+\n+ $version =~ s/v//;\n+ my ( $day, $month, $year ) = (gmtime)[ 3 .. 5 ];\n+ $month++;\n+ $year += 1900;\n+\n+ for my $group ( @Groups, 'other' ) {\n+ my $group_issues = $issues->{$group} or next;\n+ $format eq 'html' and print \"<h2>$Group_Labels{$group}</h2>\\n\\n<ul>\\n\";\n+ $format eq 'markdown' and print \"## $Group_Labels{$group}\\n\\n\";\n+\n+ for my $header ( sort keys %$group_issues ) {\n+ my $header_issues = $group_issues->{$header};\n+ my $prefix = \"<li>\";\n+ if ($format eq 'html') {\n+ if ( $header && @$header_issues > 1 ) {\n+ print \"<li>$header:<ul>\";\n+ $prefix = \"<li>\";\n+ }\n+ elsif ($header) {\n+ $prefix = \"<li>$header: \";\n+ }\n+ }\n+ for my $issue (@$header_issues) {\n+ my $title = $issue->{title};\n+ if ( $issue->{state} eq 'open' ) {\n+ $title .= \" [OPEN]\";\n+ }\n+ my $number = $issue->{number};\n+ $format eq 'markdown' and print encode_utf8( \"* \"\n+ . $title\n+ . qq( [#$number](${Issue_URL}${number})\\n)\n+ );\n+ $format eq 'html' and print encode_utf8( $prefix\n+ . $title\n+ . qq[ <a href=\"${Issue_URL}${number}\">#${number}</a></li>\\n]\n+ );\n+ }\n+ if ($format eq 'html' && $header && @$header_issues > 1 ) {\n+ print \"</li></ul></li>\\n\";\n+ }\n+ }\n+ $format eq 'html' and print \"</ul>\";\n+ print \"\\n\\n\"\n+ }\n+}\n+\n+#===================================\n+sub fetch_issues {\n+#===================================\n+ my $version = shift;\n+ my @issues;\n+ for my $state ( 'open', 'closed' ) {\n+ my $page = 1;\n+ while (1) {\n+ my $tranche\n+ = fetch( $User_Repo\n+ . 'issues?labels='\n+ . $version\n+ . '&pagesize=100&state='\n+ . $state\n+ . '&page='\n+ . $page )\n+ or die \"Couldn't fetch issues for version '$version'\";\n+ last unless @$tranche;\n+ push @issues, @$tranche;\n+ $page++;\n+ }\n+ }\n+\n+ my %group;\n+ISSUE:\n+ for my $issue (@issues) {\n+ my %labels = map { $_->{name} => 1 } @{ $issue->{labels} };\n+ my $header = $issue->{title} =~ s/^([^:]+):\\s+// ? $1 : '';\n+ for (@Groups) {\n+ if ( $labels{$_} ) {\n+ push @{ $group{$_}{$header} }, $issue;\n+ next ISSUE;\n+ }\n+ }\n+ push @{ $group{other}{$header} }, $issue;\n+ }\n+\n+ return \\%group;\n+}\n+\n+#===================================\n+sub fetch_labels {\n+#===================================\n+ my %all;\n+ my $page = 1;\n+ while (1) {\n+ my $labels = fetch( $User_Repo . 'labels?page=' . 
$page++ )\n+ or die \"Couldn't retrieve version labels\";\n+ last unless @$labels;\n+ for (@$labels) {\n+ my $name = $_->{name};\n+ next unless $name =~ /^v/;\n+ $all{$name} = 1;\n+ }\n+ }\n+ return %all;\n+}\n+\n+#===================================\n+sub fetch {\n+#===================================\n+ my $url = $Base_URL . shift();\n+ my $response = HTTP::Tiny->new->get($url);\n+ die \"$response->{status} $response->{reason}\\n\"\n+ unless $response->{success};\n+\n+ # print $response->{content};\n+ return $json->decode( $response->{content} );\n+}\n+\n+#===================================\n+sub dump_labels {\n+#===================================\n+ my $error = shift || '';\n+ if ($error) {\n+ $error = \"\\nERROR: $error\\n\";\n+ }\n+ my $labels = join( \"\\n - \", '', ( sort keys %All_Labels ) );\n+ die <<USAGE\n+ $error\n+ USAGE: $0 version > outfile\n+\n+ Known versions:$labels\n+\n+USAGE\n+\n+}", "filename": "dev-tools/es_release_notes.pl", "status": "added" }, { "diff": "@@ -0,0 +1,3 @@\n+@defaultMessage Convert to URI\n+java.net.URL#getPath()\n+java.net.URL#getFile()", "filename": "dev-tools/forbidden/all-signatures.txt", "status": "added" }, { "diff": "@@ -0,0 +1,71 @@\n+@defaultMessage spawns threads with vague names; use a custom thread factory and name threads so that you can tell (by its name) which executor it is associated with\n+\n+java.util.concurrent.Executors#newFixedThreadPool(int)\n+java.util.concurrent.Executors#newSingleThreadExecutor()\n+java.util.concurrent.Executors#newCachedThreadPool()\n+java.util.concurrent.Executors#newSingleThreadScheduledExecutor()\n+java.util.concurrent.Executors#newScheduledThreadPool(int)\n+java.util.concurrent.Executors#defaultThreadFactory()\n+java.util.concurrent.Executors#privilegedThreadFactory()\n+\n+java.lang.Character#codePointBefore(char[],int) @ Implicit start offset is error-prone when the char[] is a buffer and the first chars are random chars\n+java.lang.Character#codePointAt(char[],int) @ Implicit end offset is error-prone when the char[] is a buffer and the last chars are random chars\n+\n+@defaultMessage Collections.sort dumps data into an array, sorts the array and reinserts data into the list, one should rather use Lucene's CollectionUtil sort methods which sort in place\n+\n+java.util.Collections#sort(java.util.List)\n+java.util.Collections#sort(java.util.List,java.util.Comparator)\n+\n+java.io.StringReader#<init>(java.lang.String) @ Use FastStringReader instead\n+\n+@defaultMessage Reference management is tricky, leave it to SearcherManager\n+org.apache.lucene.index.IndexReader#decRef()\n+org.apache.lucene.index.IndexReader#incRef()\n+org.apache.lucene.index.IndexReader#tryIncRef()\n+\n+@defaultMessage QueryWrapperFilter is cachable by default - use Queries#wrap instead\n+org.apache.lucene.search.QueryWrapperFilter#<init>(org.apache.lucene.search.Query)\n+\n+@defaultMessage Because the filtercache doesn't take deletes into account FilteredQuery can't be used - use XFilteredQuery instead\n+org.apache.lucene.search.FilteredQuery#<init>(org.apache.lucene.search.Query,org.apache.lucene.search.Filter)\n+org.apache.lucene.search.FilteredQuery#<init>(org.apache.lucene.search.Query,org.apache.lucene.search.Filter,org.apache.lucene.search.FilteredQuery$FilterStrategy)\n+\n+@defaultMessage Pass the precision step from the mappings explicitly 
instead\n+org.apache.lucene.search.NumericRangeQuery#newDoubleRange(java.lang.String,java.lang.Double,java.lang.Double,boolean,boolean)\n+org.apache.lucene.search.NumericRangeQuery#newFloatRange(java.lang.String,java.lang.Float,java.lang.Float,boolean,boolean)\n+org.apache.lucene.search.NumericRangeQuery#newIntRange(java.lang.String,java.lang.Integer,java.lang.Integer,boolean,boolean)\n+org.apache.lucene.search.NumericRangeQuery#newLongRange(java.lang.String,java.lang.Long,java.lang.Long,boolean,boolean)\n+org.apache.lucene.search.NumericRangeFilter#newDoubleRange(java.lang.String,java.lang.Double,java.lang.Double,boolean,boolean)\n+org.apache.lucene.search.NumericRangeFilter#newFloatRange(java.lang.String,java.lang.Float,java.lang.Float,boolean,boolean)\n+org.apache.lucene.search.NumericRangeFilter#newIntRange(java.lang.String,java.lang.Integer,java.lang.Integer,boolean,boolean)\n+org.apache.lucene.search.NumericRangeFilter#newLongRange(java.lang.String,java.lang.Long,java.lang.Long,boolean,boolean)\n+\n+@defaultMessage Only use wait / notify when really needed try to use concurrency primitives, latches or callbacks instead. \n+java.lang.Object#wait()\n+java.lang.Object#wait(long)\n+java.lang.Object#wait(long,int)\n+java.lang.Object#notify()\n+java.lang.Object#notifyAll()\n+\n+@defaultMessage Beware of the behavior of this method on MIN_VALUE\n+java.lang.Math#abs(int)\n+java.lang.Math#abs(long)\n+\n+@defaultMessage Please do not try to stop the world\n+java.lang.System#gc()\n+\n+@defaultMessage Use Long.compare instead we are on Java7\n+com.google.common.primitives.Longs#compare(long,long)\n+\n+@defaultMessage Use Channels.* methods to write to channels. Do not write directly.\n+java.nio.channels.WritableByteChannel#write(java.nio.ByteBuffer)\n+java.nio.channels.FileChannel#write(java.nio.ByteBuffer, long)\n+java.nio.channels.GatheringByteChannel#write(java.nio.ByteBuffer[], int, int)\n+java.nio.channels.GatheringByteChannel#write(java.nio.ByteBuffer[])\n+java.nio.channels.ReadableByteChannel#read(java.nio.ByteBuffer)\n+java.nio.channels.ScatteringByteChannel#read(java.nio.ByteBuffer[])\n+java.nio.channels.ScatteringByteChannel#read(java.nio.ByteBuffer[], int, int)\n+java.nio.channels.FileChannel#read(java.nio.ByteBuffer, long)\n+\n+@defaultMessage Use Lucene.parseLenient instead it strips off minor version\n+org.apache.lucene.util.Version#parseLeniently(java.lang.String)", "filename": "dev-tools/forbidden/core-signatures.txt", "status": "added" }, { "diff": "", "filename": "dev-tools/forbidden/test-signatures.txt", "status": "added" }, { "diff": "@@ -0,0 +1,20 @@\n+<?xml version=\"1.0\"?>\n+<ruleset name=\"Custom ruleset\"\n+ xmlns=\"http://pmd.sourceforge.net/ruleset/2.0.0\"\n+ xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n+ xsi:schemaLocation=\"http://pmd.sourceforge.net/ruleset/2.0.0 http://pmd.sourceforge.net/ruleset_2_0_0.xsd\">\n+ <description>\n+ Default ruleset for elasticsearch server project\n+ </description>\n+ <rule ref=\"rulesets/java/basic.xml\"/>\n+ <rule ref=\"rulesets/java/braces.xml\"/>\n+ <rule ref=\"rulesets/java/clone.xml\"/>\n+ <rule ref=\"rulesets/java/codesize.xml\"/>\n+ <rule ref=\"rulesets/java/coupling.xml\">\n+ <exclude name=\"LawOfDemeter\" />\n+ </rule>\n+ <rule ref=\"rulesets/java/design.xml\"/>\n+ <rule ref=\"rulesets/java/unnecessary.xml\">\n+ <exclude name=\"UselessParentheses\" />\n+ </rule>\n+</ruleset>", "filename": "dev-tools/pmd/custom.xml", "status": "added" }, { "diff": "@@ -27,10 +27,11 @@ grant {\n permission java.io.FilePermission 
\"${junit4.childvm.cwd}\", \"read,execute,write\";\n permission java.io.FilePermission \"${junit4.childvm.cwd}${/}-\", \"read,execute,write,delete\";\n permission java.io.FilePermission \"${junit4.tempDir}${/}*\", \"read,execute,write,delete\";\n- \n+ permission groovy.security.GroovyCodeSourcePermission \"/groovy/script\";\n+\n // Allow connecting to the internet anywhere\n permission java.net.SocketPermission \"*\", \"accept,listen,connect,resolve\";\n- \n+\n // Basic permissions needed for Lucene / Elasticsearch to work:\n permission java.util.PropertyPermission \"*\", \"read,write\";\n permission java.lang.reflect.ReflectPermission \"*\";", "filename": "dev-tools/tests.policy", "status": "modified" }, { "diff": "@@ -0,0 +1,4 @@\n+The Elasticsearch docs are in AsciiDoc format and can be built using the Elasticsearch documentation build process\n+\n+See: https://github.com/elasticsearch/docs\n+", "filename": "docs/README.md", "status": "added" }, { "diff": "@@ -39,15 +39,15 @@ See the {client}/ruby-api/current/index.html[official Elasticsearch Ruby client]\n * http://github.com/karmi/tire[Tire]:\n Ruby API & DSL, with ActiveRecord/ActiveModel integration.\n \n-* http://github.com/grantr/rubberband[rubberband]:\n- Ruby client.\n-\n * https://github.com/PoseBiz/stretcher[stretcher]:\n Ruby client.\n \n * https://github.com/wireframe/elastic_searchable/[elastic_searchable]:\n Ruby client + Rails integration.\n \n+* https://github.com/ddnexus/flex[Flex]:\n+ Ruby Client.\n+\n \n [[community-php]]\n === PHP\n@@ -62,13 +62,15 @@ See the {client}/php-api/current/index.html[official Elasticsearch PHP client].\n * http://github.com/polyfractal/Sherlock[Sherlock]:\n PHP client, one-to-one mapping with query DSL, fluid interface.\n \n+* https://github.com/nervetattoo/elasticsearch[elasticsearch]\n+ PHP 5.3 client\n \n [[community-java]]\n === Java\n \n * https://github.com/searchbox-io/Jest[Jest]:\n Java Rest client.\n-\n+* There is of course the http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/index.html[native ES Java client]\n \n [[community-javascript]]\n === JavaScript\n@@ -184,3 +186,7 @@ See the {client}/javascript-api/current/index.html[official Elasticsearch JavaSc\n * https://github.com/jasonfill/ColdFusion-ElasticSearch-Client[ColdFusion-Elasticsearch-Client]\n Cold Fusion client for Elasticsearch\n \n+[[community-nodejs]]\n+=== NodeJS\n+* https://github.com/phillro/node-elasticsearch-client[Node-Elasticsearch-Client]\n+ A node.js client for elasticsearch", "filename": "docs/community/clients.asciidoc", "status": "modified" }, { "diff": "@@ -1,9 +1,6 @@\n [[front-ends]]\n == Front Ends\n \n-* https://chrome.google.com/webstore/detail/sense/doinijnbnggojdlcjifpdckfokbbfpbo[Sense]:\n- Chrome curl-like plugin for running requests against an Elasticsearch node\n-\n * https://github.com/mobz/elasticsearch-head[elasticsearch-head]: \n A web front end for an Elasticsearch cluster.\n \n@@ -15,3 +12,6 @@\n \n * http://elastichammer.exploringelasticsearch.com/[Hammer]: \n Web front-end for elasticsearch\n+\n+* https://github.com/romansanchez/Calaca[Calaca]: \n+ Simple search client for Elasticsearch", "filename": "docs/community/frontends.asciidoc", "status": "modified" }, { "diff": "@@ -44,6 +44,9 @@\n * https://github.com/dadoonet/spring-elasticsearch[Spring Elasticsearch]:\n Spring Factory for Elasticsearch\n \n+* https://github.com/spring-projects/spring-data-elasticsearch[Spring Data Elasticsearch]:\n+ Spring Data implementation for Elasticsearch\n+\n * 
https://camel.apache.org/elasticsearch.html[Apache Camel Integration]:\n An Apache camel component to integrate elasticsearch\n \n@@ -67,9 +70,18 @@\n * https://github.com/cleverage/play2-elasticsearch[play2-elasticsearch]:\n Elasticsearch module for Play Framework 2.x\n \n+* https://github.com/goodow/realtime-search[realtime-search]:\n+ Elasticsearch module for Vert.x\n+\n * https://github.com/fullscale/dangle[dangle]:\n A set of AngularJS directives that provide common visualizations for elasticsearch based on\n D3.\n \n * https://github.com/roundscope/ember-data-elasticsearch-kit[ember-data-elasticsearch-kit]:\n An ember-data kit for both pushing and querying objects to Elasticsearch cluster\n+\n+* https://github.com/kzwang/elasticsearch-osem[elasticsearch-osem]:\n+ A Java Object Search Engine Mapping (OSEM) for Elasticsearch\n+\n+* https://github.com/twitter/storehaus[Twitter Storehaus]:\n+ Thin asynchronous scala client for storehaus.", "filename": "docs/community/integrations.asciidoc", "status": "modified" }, { "diff": "@@ -1,15 +1,12 @@\n [[misc]]\n == Misc\n \n-* https://github.com/electrical/puppet-elasticsearch[Puppet]:\n+* https://github.com/elasticsearch/puppet-elasticsearch[Puppet]:\n Elasticsearch puppet module.\n \n * http://github.com/elasticsearch/cookbook-elasticsearch[Chef]:\n Chef cookbook for Elasticsearch\n \n-* https://github.com/tavisto/elasticsearch-rpms[elasticsearch-rpms]:\n- RPMs for elasticsearch.\n-\n * http://www.github.com/neogenix/daikon[daikon]:\n Daikon Elasticsearch CLI\n ", "filename": "docs/community/misc.asciidoc", "status": "modified" }, { "diff": "@@ -4,6 +4,9 @@\n * https://github.com/lukas-vlcek/bigdesk[bigdesk]:\n Live charts and statistics for elasticsearch cluster.\n \n+* https://github.com/lmenezes/elasticsearch-kopf/[Kopf]:\n+ Live cluster health and shard allocation monitoring with administration toolset.\n+ \n * https://github.com/karmi/elasticsearch-paramedic[paramedic]:\n Live charts with cluster stats and indices/shards information.\n ", "filename": "docs/community/monitoring.asciidoc", "status": "modified" }, { "diff": "@@ -30,7 +30,7 @@ For example, you can define the latest version in your `pom.xml` file:\n --------------------------------------------------\n <dependency>\n <groupId>org.elasticsearch</groupId>\n- <artifactId>elasticsearch-client-groovy</artifactId>\n+ <artifactId>elasticsearch-lang-groovy</artifactId>\n <version>${es.version}</version>\n </dependency>\n --------------------------------------------------", "filename": "docs/groovy-api/index.asciidoc", "status": "modified" } ] }
{ "body": "It modifies the docidset (casts to FixedBitSet and flips all the bits).\n\nIf this bitset was cached, this will be modifying the cached instance each time flipping all of its bits back and forth.\n", "comments": [ { "body": "IndexedGeoBoundingBoxFilter has a similar issue: it modifies the bits returned by another filter\n", "created_at": "2014-10-31T16:54:18Z" }, { "body": "@jpountz Right, it should instead like TermsFilter create a clean bitset and perform the or operation on that.\n", "created_at": "2014-11-03T08:50:34Z" }, { "body": "@martijnvg I tried to understand how it works and it looks like we could fix it and make the code simpler at the same time by just creating an BooleanFilter?\n", "created_at": "2014-11-03T08:57:14Z" }, { "body": "@jpountz +1 I like it, this should work and we can reuse the BooleanFilter.\n", "created_at": "2014-11-03T09:01:25Z" } ], "number": 8227, "title": "NonNestedDocsFilter.getDocIDSet() looks buggy" }
{ "body": "If the filter producing the FBS were to be cached then it would flip bits each time the NonNestedDocsFilter was executed.\n\nIn our case we cached the NonNestedDocsFilter itself so flipping bits each time NonNestedDocsFilter was used didn't happen. However the hashcode of NonNestedDocsFilter and NestedDocsFilter was the same, since NonNestedDocsFilter directly used NestedDocsFilter#hashCode()\n\nPR for #8227\n", "number": 8232, "review_comments": [], "title": "Cleanup non nested filter to not flip the FixedBitSet returned by the wrapped filter." }
{ "commits": [ { "message": "Core: Cleanup non nested filter to not flip the bits in the FixedBitSet returned by the wrapped filter.\n\nIf the filter producing the FBS were to be cached then it would flip bits each time the NonNestedDocsFilter was executed.\nIn our case we cache the NonNestedDocsFilter itself so that didn't happen each time this filter was used.\nHowever the hashcode of NonNestedDocsFilter and NestedDocsFilter was the same, since NonNestedDocsFilter directly used NestedDocsFilter#hashCode()\n\nCloses #8227\nCloses #8232" } ], "files": [ { "diff": "@@ -30,16 +30,19 @@\n \n import java.io.IOException;\n \n+/**\n+ * Filter that returns all nested documents.\n+ * A nested document is a sub documents that belong to a root document.\n+ * Nested documents share the unique id and type and optionally the _source with root documents.\n+ */\n public class NestedDocsFilter extends Filter {\n \n public static final NestedDocsFilter INSTANCE = new NestedDocsFilter();\n \n- private final PrefixFilter filter = new PrefixFilter(new Term(TypeFieldMapper.NAME, new BytesRef(\"__\")));\n-\n+ private final Filter filter = nestedFilter();\n private final int hashCode = filter.hashCode();\n \n private NestedDocsFilter() {\n-\n }\n \n @Override\n@@ -56,4 +59,9 @@ public int hashCode() {\n public boolean equals(Object obj) {\n return obj == INSTANCE;\n }\n+\n+ static Filter nestedFilter() {\n+ return new PrefixFilter(new Term(TypeFieldMapper.NAME, new BytesRef(\"__\")));\n+ }\n+\n }\n\\ No newline at end of file", "filename": "src/main/java/org/elasticsearch/index/search/nested/NestedDocsFilter.java", "status": "modified" }, { "diff": "@@ -20,40 +20,30 @@\n package org.elasticsearch.index.search.nested;\n \n import org.apache.lucene.index.AtomicReaderContext;\n-import org.apache.lucene.index.Term;\n import org.apache.lucene.search.DocIdSet;\n import org.apache.lucene.search.Filter;\n-import org.apache.lucene.search.PrefixFilter;\n import org.apache.lucene.util.Bits;\n-import org.apache.lucene.util.BytesRef;\n-import org.apache.lucene.util.FixedBitSet;\n-import org.elasticsearch.common.lucene.docset.DocIdSets;\n-import org.elasticsearch.index.mapper.internal.TypeFieldMapper;\n+import org.elasticsearch.common.lucene.search.NotFilter;\n \n import java.io.IOException;\n \n+/**\n+ * A filter that returns all root (non nested) documents.\n+ * Root documents have an unique id, a type and optionally have a _source and other indexed and stored fields.\n+ */\n public class NonNestedDocsFilter extends Filter {\n \n public static final NonNestedDocsFilter INSTANCE = new NonNestedDocsFilter();\n \n- private final PrefixFilter filter = new PrefixFilter(new Term(TypeFieldMapper.NAME, new BytesRef(\"__\")));\n-\n+ private final Filter filter = new NotFilter(NestedDocsFilter.nestedFilter());\n private final int hashCode = filter.hashCode();\n \n private NonNestedDocsFilter() {\n-\n }\n \n @Override\n public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException {\n- DocIdSet docSet = filter.getDocIdSet(context, acceptDocs);\n- if (DocIdSets.isEmpty(docSet)) {\n- // will almost never happen, and we need an OpenBitSet for the parent filter in\n- // BlockJoinQuery, we cache it anyhow...\n- docSet = new FixedBitSet(context.reader().maxDoc());\n- }\n- ((FixedBitSet) docSet).flip(0, context.reader().maxDoc());\n- return docSet;\n+ return filter.getDocIdSet(context, acceptDocs);\n }\n \n @Override", "filename": "src/main/java/org/elasticsearch/index/search/nested/NonNestedDocsFilter.java", 
"status": "modified" } ] }
{ "body": "Followup for #3977:\n\nif #3977 a check was added, that indexing completion suggester fields are rejected, if the weight is not an integer. My problem is that this check introduced there only works if the weight is a number. Unfortunately the parser has a bit strange logic: The new code is not executed if the client (creating the JSON) is passing the weight as \"string\", e.g. { \"weight\" : \"10.5\" }\n\nIn fact the weight is then ignored completely and not even an error is given (this is an additional bug in the parser logic). This caused me headaches yesterday, because the weight was given as JSON string in the indexing document. For other fields this makes no difference while indexing.\n\nThe parser for completion fields should be improved to have the outer check on the JSON key first and later check the types, not vice versa. This would also be consistent with indexing other fields, where the type of JSON value does not matter.\n", "comments": [ { "body": "@areek FYI I cherry-picked this commit into `1.4` as well I thing you missed it :)\n", "created_at": "2014-10-29T08:42:36Z" } ], "number": 8090, "title": "Completion Suggester: Fix CompletionFieldMapper to correctly parse weight" }
{ "body": "Allows weight to be defined as a string representation of a positive integer\n\ncloses #8090\n", "number": 8197, "review_comments": [ { "body": "Maybe use \"containing\" instead of \"representing\"?\n", "created_at": "2014-10-22T18:03:31Z" }, { "body": "I think you mean \"get overflow\" not \"get the overflow value\"?\n", "created_at": "2014-10-22T18:07:49Z" }, { "body": "I think it might be cleaner to separate this to one elseif handling the string, and have an additional elsif after handling a direct value? This would eliminate the need for the if/else below for value/string, and would clearly separate what happens in each case (and then you don't need the string checking in the other conditionals as well).\n", "created_at": "2014-10-22T18:09:55Z" }, { "body": "This seems to be the only \"shared\" portion for string/value. Perhaps it could be moved out? Or just have a helper method, or even just leave the duplication (it is only 2 lines really, not bad).\n", "created_at": "2014-10-22T18:12:34Z" }, { "body": "changed\n", "created_at": "2014-10-22T21:06:54Z" }, { "body": "changed to \"get overflow\"\n", "created_at": "2014-10-22T21:08:16Z" }, { "body": "separated out handling String and number\n", "created_at": "2014-10-22T21:08:54Z" }, { "body": "refactored to a helper method\n", "created_at": "2014-10-22T21:09:10Z" }, { "body": "I think just \"overflow\" not \"the overflow\"?\n", "created_at": "2014-10-22T22:29:15Z" } ], "title": "Fix CompletionFieldMapper to correctly parse weight" }
{ "commits": [ { "message": "Completion Suggester: Fix CompletionFieldMapper to correctly parse weight\n - Allows weight to be defined as a string representation of a positive integer\n\ncloses #8090" } ], "files": [ { "diff": "@@ -119,8 +119,9 @@ The following parameters are supported:\n might not yield any results, if `input` and `output` differ strongly).\n \n `weight`::\n- A positive integer, which defines a weight and allows you to\n- rank your suggestions. This field is optional.\n+ A positive integer or a string containing a positive integer,\n+ which defines a weight and allows you to rank your suggestions.\n+ This field is optional.\n \n NOTE: Even though you will lose most of the features of the\n completion suggest, you can choose to use the following shorthand form.", "filename": "docs/reference/search/suggesters/completion-suggest.asciidoc", "status": "modified" }, { "diff": "@@ -295,16 +295,24 @@ public void parse(ParseContext context) throws IOException {\n if (Fields.CONTENT_FIELD_NAME_INPUT.equals(currentFieldName)) {\n inputs.add(parser.text());\n }\n+ if (Fields.CONTENT_FIELD_NAME_WEIGHT.equals(currentFieldName)) {\n+ Number weightValue;\n+ try {\n+ weightValue = Long.parseLong(parser.text());\n+ } catch (NumberFormatException e) {\n+ throw new ElasticsearchIllegalArgumentException(\"Weight must be a string representing a numeric value, but was [\" + parser.text() + \"]\");\n+ }\n+ weight = weightValue.longValue(); // always parse a long to make sure we don't get overflow\n+ checkWeight(weight);\n+ }\n } else if (token == XContentParser.Token.VALUE_NUMBER) {\n if (Fields.CONTENT_FIELD_NAME_WEIGHT.equals(currentFieldName)) {\n NumberType numberType = parser.numberType();\n if (NumberType.LONG != numberType && NumberType.INT != numberType) {\n throw new ElasticsearchIllegalArgumentException(\"Weight must be an integer, but was [\" + parser.numberValue() + \"]\");\n }\n- weight = parser.longValue(); // always parse a long to make sure we don't get the overflow value\n- if (weight < 0 || weight > Integer.MAX_VALUE) {\n- throw new ElasticsearchIllegalArgumentException(\"Weight must be in the interval [0..2147483647], but was [\" + weight + \"]\");\n- }\n+ weight = parser.longValue(); // always parse a long to make sure we don't get overflow\n+ checkWeight(weight);\n }\n } else if (token == XContentParser.Token.START_ARRAY) {\n if (Fields.CONTENT_FIELD_NAME_INPUT.equals(currentFieldName)) {\n@@ -341,6 +349,12 @@ public void parse(ParseContext context) throws IOException {\n }\n }\n \n+ private void checkWeight(long weight) {\n+ if (weight < 0 || weight > Integer.MAX_VALUE) {\n+ throw new ElasticsearchIllegalArgumentException(\"Weight must be in the interval [0..2147483647], but was [\" + weight + \"]\");\n+ }\n+ }\n+\n /**\n * Get the context mapping associated with this completion field.\n */", "filename": "src/main/java/org/elasticsearch/index/mapper/core/CompletionFieldMapper.java", "status": "modified" }, { "diff": "@@ -178,6 +178,68 @@ public void testThatWeightMustBeAnInteger() throws Exception {\n }\n }\n \n+ @Test\n+ public void testThatWeightCanBeAString() throws Exception {\n+ createIndexAndMapping(completionMappingBuilder);\n+\n+ client().prepareIndex(INDEX, TYPE, \"1\").setSource(jsonBuilder()\n+ .startObject().startObject(FIELD)\n+ .startArray(\"input\").value(\"testing\").endArray()\n+ .field(\"weight\", \"10\")\n+ .endObject().endObject()\n+ ).get();\n+\n+ refresh();\n+\n+ SuggestResponse suggestResponse = client().prepareSuggest(INDEX).addSuggestion(\n+ new 
CompletionSuggestionBuilder(\"testSuggestions\").field(FIELD).text(\"test\").size(10)\n+ ).execute().actionGet();\n+\n+ assertSuggestions(suggestResponse, \"testSuggestions\", \"testing\");\n+ Suggest.Suggestion.Entry.Option option = suggestResponse.getSuggest().getSuggestion(\"testSuggestions\").getEntries().get(0).getOptions().get(0);\n+ assertThat(option, is(instanceOf(CompletionSuggestion.Entry.Option.class)));\n+ CompletionSuggestion.Entry.Option prefixOption = (CompletionSuggestion.Entry.Option) option;\n+\n+ assertThat(prefixOption.getText().string(), equalTo(\"testing\"));\n+ assertThat((long) prefixOption.getScore(), equalTo(10l));\n+ }\n+\n+\n+ @Test\n+ public void testThatWeightMustNotBeANonNumberString() throws Exception {\n+ createIndexAndMapping(completionMappingBuilder);\n+\n+ try {\n+ client().prepareIndex(INDEX, TYPE, \"1\").setSource(jsonBuilder()\n+ .startObject().startObject(FIELD)\n+ .startArray(\"input\").value(\"sth\").endArray()\n+ .field(\"weight\", \"thisIsNotValid\")\n+ .endObject().endObject()\n+ ).get();\n+ fail(\"Indexing with a non-number representing string as weight was successful, but should not be\");\n+ } catch (MapperParsingException e) {\n+ assertThat(ExceptionsHelper.detailedMessage(e), containsString(\"thisIsNotValid\"));\n+ }\n+ }\n+\n+ @Test\n+ public void testThatWeightAsStringMustBeInt() throws Exception {\n+ createIndexAndMapping(completionMappingBuilder);\n+\n+ String weight = String.valueOf(Long.MAX_VALUE - 4);\n+ try {\n+ client().prepareIndex(INDEX, TYPE, \"1\").setSource(jsonBuilder()\n+ .startObject().startObject(FIELD)\n+ .startArray(\"input\").value(\"testing\").endArray()\n+ .field(\"weight\", weight)\n+ .endObject().endObject()\n+ ).get();\n+ fail(\"Indexing with weight string representing value > Int.MAX_VALUE was successful, but should not be\");\n+ } catch (MapperParsingException e) {\n+ assertThat(ExceptionsHelper.detailedMessage(e), containsString(weight));\n+ }\n+ }\n+\n @Test\n public void testThatInputCanBeAStringInsteadOfAnArray() throws Exception {\n createIndexAndMapping(completionMappingBuilder);", "filename": "src/test/java/org/elasticsearch/search/suggest/CompletionSuggestSearchTests.java", "status": "modified" } ] }
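
To make the weight rule from this record concrete, here is a small standalone sketch (not the CompletionFieldMapper code; the parseWeight helper is a made-up name): a weight may be given either as an integer or as a string containing a positive integer, it is parsed as a long so overflow can be detected, and non-integer numbers, non-numeric strings such as "10.5", and values outside [0..2147483647] are rejected.

```java
// Standalone illustration of the weight rule described above; parseWeight is a hypothetical helper.
public class WeightParsingSketch {

    static int parseWeight(Object value) {
        final long weight;
        if (value instanceof Double || value instanceof Float) {
            // only integer number tokens are accepted
            throw new IllegalArgumentException("Weight must be an integer, but was [" + value + "]");
        } else if (value instanceof Number) {
            weight = ((Number) value).longValue();
        } else if (value instanceof String) {
            try {
                weight = Long.parseLong((String) value); // parse as long to detect overflow
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException(
                        "Weight must be a string containing a numeric value, but was [" + value + "]");
            }
        } else {
            throw new IllegalArgumentException("Weight must be a number or a numeric string");
        }
        if (weight < 0 || weight > Integer.MAX_VALUE) {
            throw new IllegalArgumentException(
                    "Weight must be in the interval [0..2147483647], but was [" + weight + "]");
        }
        return (int) weight;
    }

    public static void main(String[] args) {
        System.out.println(parseWeight(10));    // 10
        System.out.println(parseWeight("10"));  // 10 -- the case fixed by this PR
        for (Object bad : new Object[] { "10.5", "thisIsNotValid", -1, String.valueOf(Long.MAX_VALUE - 4) }) {
            try {
                parseWeight(bad);
            } catch (IllegalArgumentException e) {
                System.out.println("rejected " + bad + ": " + e.getMessage());
            }
        }
    }
}
```

The rejected inputs loosely mirror the cases exercised by the tests added in CompletionSuggestSearchTests above (a non-numeric string, a string larger than Integer.MAX_VALUE, and a plain string weight that must now be accepted).
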